Generically, Ethernet is used as a synonym for local area network. Formally, the first widely used version was version 2 of DIX Ethernet, DIX representing the companies that worked together to create it: Digital Equipment Corporation, Intel, and Xerox. DIX Ethernet was a de facto standard, but did not come from any recognized standards body, which caused problems with some organizational purchasing rules.
So, a proposal was made to what was then the Institute of Electrical and Electronics Engineers, now simply the IEEE, to set up a formal standardization effort. This was done in February 1980; "Project 802" is not a sequential project number, but commemorates the year and month of its formation. As the project was being created, however, there were technical and commercial reasons to believe that Ethernet might not be the only way to build local area networks, so the standardization of Ethernet was assigned to the IEEE 802.3 subcommittee of Project 802.
In the process of standardization, some improvements, generally backward compatible with DIX Ethernet, were made in the specification. The 802.3 committee has remained active, building on the original Ethernet work but creating dozens of standards for communications systems undreamed-of by the original inventors. What is called "Wireless Ethernet" actually comes from IEEE 802.11, with the "WiMax" variant from IEEE 802.16.
Physical medium aspects
Originally, the medium was a specified coaxial cable with a maximum length of 500 meters. A resistor terminator was connected at each end.
This main cable was semirigid and could not be bent sharply enough to connect directly to the computers. To allow the necessary flexibility in computer connection, there were two means of making the actual cable connection:
- T-connector, where the cable was cut at the desired point of attachment, a connector placed on each of the cut ends, and the two ends and a drop cable were connected to a T-shaped connector that gave a common path to the center and coaxial shield conductors of all the cables. Inserting the T-connector and making the three connections to it restored cable operation.
- Vampire tap, in which the cable was placed in one half of a pair of mechanical connectors that had a half-cylinder groove to accept the cable. The other half was put over the cable, encircling it completely, and the cable holder was fastened. Next, a nut was tightened that drove a "vampire" insulation-piercing tap, at right angles to the restrained cable, such that the inner "fang" made contact with the center conductor of the cable, and an outer "fang" made contact with the coaxial shield. A drop cable was then attached to the outside connector of the vampire assembly. In principle, though not always in practice, vampire taps allowed continued Ethernet operation while the tap was being attached, because the cable was never cut.
From the tap, for which the more modern term is medium-dependent interface (MDI), a coaxial drop cable ran to another box called a transceiver. The transceiver had two connectors, one for the coaxial drop cable, and the other a 15-pin "D-subminiature" type connected to an attachment unit interface (AUI) cable made up of twisted pairs of copper wire, not coaxial cable.
The D-subminiature connector had two rows of pins arranged in a trapezoidal form, and a means of fastening it to the transceiver and to the computer's AUI interface. While the original connector called for a "slide latch" that required no tools to fasten, the slide latch was extremely unreliable in practice, and probably received almost as many foul oaths from installation engineers as it received bits from the computer. While the standard never changed, the usual fasteners were machine screws.
Medium access control
Contention for the medium was minimized, and resolved when it occurred, using carrier sense multiple access with collision detection (CSMA/CD) technology. In very simplified terms, the transceiver would not transmit as long as it detected a transmission in progress. If it sensed a clear line, it would transmit, but continued to monitor the line to detect if another device had simultaneously sensed a clear line and started to transmit, causing a collision. CSMA/CD then provided mechanisms for detecting the collision and breaking a tie among the devices waiting to transmit.
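The tie-breaking step described above is truncated binary exponential backoff: after each successive collision, a station waits a random number of slot times drawn from a doubling range. A minimal sketch (the function name `backoff_slots` is ours; the constants follow the commonly cited 802.3 limits):

```python
import random

MAX_BACKOFF_EXPONENT = 10   # the range stops doubling after 2**10 slots
MAX_ATTEMPTS = 16           # after 16 collisions the frame is discarded

def backoff_slots(collision_count, rng=random):
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    limit = 2 ** min(collision_count, MAX_BACKOFF_EXPONENT)
    return rng.randrange(limit)

# After a first collision, both stations pick from {0, 1}; a second
# collision widens the range to {0..3}, and so on, so ties become
# progressively less likely.
wait = backoff_slots(1)
```

Because the range doubles with each collision, two stations that tied once are unlikely to tie repeatedly, which is how the contention resolves without any central coordinator.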
Once bits could be transmitted, the Ethernet frame was sent. It had several fields, with all lengths specified in 8-bit bytes:
| Field | Length | Purpose |
|---|---|---|
| Preamble | 8 bytes | Physical layer overhead |
| Destination address | 6 bytes | Station on the medium to receive the frame |
| Source address | 6 bytes | Station on the medium that sent the frame |
| Ethertype | 2 bytes | Type of protocol in the data field |
| Data | Up to 1500 bytes | Information payload |
| Frame check sequence | 4 bytes | Computation on the other frame bits to detect transmission errors |
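The layout above can be sketched in code. This illustration (the function name `build_frame` is ours) packs the fixed-size fields with `struct` and uses `zlib.crc32`, which shares its CRC-32 polynomial with the Ethernet frame check sequence (wire-level bit-ordering details aside); the data field is padded so the frame, excluding the preamble, is never shorter than 64 bytes:

```python
import struct
import zlib

MIN_DATA = 46  # pads the frame (header + data + FCS) to at least 64 bytes

def build_frame(dst, src, ethertype, payload):
    """Assemble a DIX-style Ethernet frame: 14-byte header,
    zero-padded data field, and a 4-byte CRC-32 frame check sequence."""
    header = struct.pack("!6s6sH", dst, src, ethertype)
    data = payload.ljust(MIN_DATA, b"\x00")
    fcs = struct.pack("<I", zlib.crc32(header + data))
    return header + data + fcs

# A 2-byte payload still yields a 64-byte frame: 14 + 46 + 4.
frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hi")
```

The preamble is omitted here because it is physical-layer framing, generated by the transmitting hardware rather than stored as part of the frame a protocol implementation would construct.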
The Ethertype field was redefined in the medium access control part of the first IEEE 802.3 specification, becoming a length field. This solved a problem revealed by experience with DIX: frames shorter than 64 bytes were unwise to send, because very short transmissions could defeat collision detection. Short payloads therefore had to be padded out to the minimum frame size, and the length field told the receiver how much of the data field was actual payload rather than padding.
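The two interpretations of the field were later reconciled by value: every data field length fits in 1500 or less, while registered Ethertypes start at 0x0600 (1536), so a receiver can tell the formats apart. A sketch of that convention (the function name is illustrative):

```python
def classify_type_field(value):
    """Disambiguate the 2-byte field after the source address:
    values up to 1500 are IEEE 802.3 lengths, values of 0x0600
    (1536) and above are DIX Ethertypes, and the gap between
    them is left undefined."""
    if value <= 1500:
        return "length"
    if value >= 0x0600:
        return "ethertype"
    return "undefined"

# 0x0800 is the Ethertype assigned to IPv4, so it reads as a type,
# while 46 can only be a data-field length.
kind = classify_type_field(0x0800)
```

This is why DIX and 802.3 frames could, and still do, coexist on the same medium without any per-station configuration.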
There was still a need for payload type identification, so the IEEE 802.2 Logical Link Control protocol was defined to occupy the first few bytes of the data field, without changing the hardware-defined header. Much later, the header was extended, for virtual local area networks and quality of service, by the IEEE 802.1Q standard.
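The 802.2 bytes at the start of the data field carry a destination service access point, a source service access point, and a control byte. A minimal parsing sketch, assuming the common unnumbered (single control byte) format; the function name `parse_llc` is ours:

```python
import struct

def parse_llc(data):
    """Read an IEEE 802.2 LLC header from the start of an 802.3
    data field: destination SAP, source SAP, and a control byte
    (one byte in the unnumbered format assumed here)."""
    dsap, ssap, control = struct.unpack("!BBB", data[:3])
    return {"dsap": dsap, "ssap": ssap, "control": control}

# SAP value 0xAA in both positions marks a SNAP header, the common
# way of carrying an Ethertype inside an 802.3 length-field frame.
llc = parse_llc(b"\xaa\xaa\x03\x00\x00\x00\x08\x00")
```

The service access points play the role the Ethertype played in DIX: they tell the receiver which protocol the rest of the data field belongs to.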