There seem to be more standards every day, and there are rumors of a movement to merge or consolidate some existing standards. We will examine some of these standards and how they came to be. For starters, we will limit ourselves to three main types of standards in the industry: on-board or internal, input/output (I/O), and networking. I/O and networking standards are similar, but networking standards typically apply to longer-distance links than I/O standards do. There is, as often happens, overlap between these types, and some technologies, such as PCI-express, have standards that apply to more than one of them.
Long before there were networks and personal computers, computers needed a mechanism to handle the transfer of data from the central processing unit (CPU) to various interface devices such as memory, external devices such as card readers, disk and tape drives, and the human interfaces like keyboards and displays (such as they were). It was known even then that tying up the CPU for these tasks reduced the overall efficiency of the system.
One solution was introduced by IBM in 1957 in its 709 general purpose computer, in order to free the main CPU from having to service I/O requests. Called channel architecture, it consisted of nine input signals, nine output signals, and a number of control signals. Special commands were added to the operating system to invoke this processor for I/O operations. A separate, dedicated processor offloaded I/O tasks from the main CPU and enabled it to spend most of its resources on computing, maximizing system efficiency and throughput.2 At that time, any connections between systems were proprietary. Even attachment of third-party I/O devices to the computers themselves was discouraged. The IBM 709 channel was superseded by IBM’s System/360 Channel, which was also seen as a key proprietary feature.
There was no multivendor network technology until ARPANET in the late 1960s, and then Ethernet in the 1970s. Ethernet was first described in a memo in 1973 by Dr. Robert Metcalfe at Xerox’s Palo Alto Research Center (PARC).3 IBM introduced its Systems Network Architecture in 1974, which facilitated communications between its computers, and was widely used in large corporate installations.
The first multivendor Ethernet network standard was published in 1980 by the DEC-Intel-Xerox consortium to allow computers from multiple companies to communicate (“interoperate”) with each other.4 It ran at 10 Mbps, using the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. IBM introduced its competing Token Ring technology soon thereafter, which IBM claimed gave superior performance in the presence of a large number of users. Token Ring Local Area Networks (LANs) used IBM transceiver ICs and different wiring technology than Ethernet, but the market favored Ethernet, and Token Ring was ultimately displaced by it. Both were adopted by the Institute of Electrical and Electronics Engineers (IEEE), as its 802.3 and 802.5 standards, respectively.
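The collision handling at the heart of CSMA/CD can be sketched in a few lines. The following Python fragment models the truncated binary exponential backoff an Ethernet station uses to reschedule a transmission after a collision; the function name is the author's own, chosen for illustration.

```python
import random

def backoff_slots(attempt, rng=random):
    """Truncated binary exponential backoff, as used in CSMA/CD Ethernet:
    after the nth collision, wait a random number of slot times chosen
    uniformly from [0, 2**min(n, 10) - 1]; give up after 16 attempts."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)          # the backoff window stops growing at 2**10
    return rng.randrange(2 ** k)  # number of slot times to wait
```

The widening window is what lets a heavily loaded shared medium eventually sort out contending stations without any central arbiter.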
In 1984, the International Organization for Standardization (ISO) published the Open Systems Interconnection (OSI) seven-layer model,5 which provided a framework for interoperable networks. Its Layer 1, the physical layer, defines the electrical interface that ensures compatibility of signals between systems. OSI-compliant versions of Ethernet have since spawned many different network technologies, including Metropolitan Area Networks (MANs) and LANs with many varied implementations. These are specified in a large number of IEEE 802 standards with various speeds and topologies, including 802.3 (Ethernet LANs), 802.5 (Token Ring LANs), 802.11 (Wireless LANs), 802.6 (MANs), and 802.15 (Wireless Personal Area Networks).
Internal Computer Interface Standards
In the late 1970s and early 1980s when smaller, “personal” computers first became available, I/O operations were just another function performed by various plug-in cards that were typically connected by a parallel bus inside the computer chassis. Until that time, “microcomputers” used proprietary internal parallel buses. The devices on the bus included memory, disk drives, serial and parallel communications interfaces, display controllers, and eventually network adapters for, among others, Ethernet.
IBM broke with tradition when it announced its personal computer in 1981 and published the specification for the XT bus in hopes of promoting its acceptance in the industry. The PC and its “clones” proved to be wildly popular, and whole companies were created whose sole purpose seemed to be the development and sales of compatible peripheral adapters and devices.
The 8-bit IBM PC XT bus was adopted by others and dubbed the Industry Standard Architecture (ISA) bus. IBM released a 16-bit version with its PC AT (a motherboard from which is shown in Figure 1) in 1984, which was also adopted by the industry, followed by 16- and 32-bit versions of IBM’s MicroChannel architecture in its Personal System/2 systems in 1987. MicroChannel, however, did not see as wide acceptance as its predecessors had, for a number of reasons including licensing issues.
The industry instead largely used the 32-bit EISA bus, a follow-on to the AT bus, and then migrated to the Peripheral Component Interconnect (PCI) bus, which was developed by the PCI Special Interest Group (PCI SIG) and released in 1992.6 Other special purpose interfaces such as the VESA (Video Electronics Standards Association) Local Bus video interface (VLB),7 Compact PCI, and Accelerated Graphics Port (AGP) were also developed for specific applications. The current PCI-express is the latest incarnation of these interfaces, has been widely adopted, and is continuing to evolve. Its specification is also developed and controlled by the PCI SIG. Special high-speed interfaces, such as PCI-X and QPI, were also developed for high-performance server implementations, but they are too numerous to discuss here. The characteristics of these various interfaces are listed in Table 1.8
PCI-express actually includes multiple standards: one for base boards and adapter cards, and a separate cabling standard for links outside the computer cabinet. Each has multiple link-width implementations, and each “Generation” beyond Gen. 1 has multiple link data rates per lane. Each lane consists of two unidirectional differential pairs, one carrying data in each direction, so even a “x1” link is full duplex. There are also multiple variants of the PCI bus for industrial (PICMG), telecom (Compact PCI), and test (PXI) applications that are beyond the scope of this article.
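The interplay of lane rate, link width, and line coding determines usable bandwidth. The Python sketch below computes approximate per-direction payload throughput for the first five PCI-express generations from their published per-lane signaling rates; the function and table names are illustrative only.

```python
# Per-lane signaling rate (GT/s) and line-code efficiency by PCIe generation.
PCIE_GENS = {
    1: (2.5, 8 / 10),     # Gen 1: 8b/10b encoding (20% overhead)
    2: (5.0, 8 / 10),     # Gen 2: 8b/10b encoding
    3: (8.0, 128 / 130),  # Gen 3 onward: 128b/130b encoding (~1.5% overhead)
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def lane_gbps(gen):
    """Usable bits per second per lane, per direction, after coding overhead."""
    rate, efficiency = PCIE_GENS[gen]
    return rate * efficiency

def link_gbytes(gen, width):
    """Approximate payload bandwidth (GB/s) of an xN link, per direction."""
    return lane_gbps(gen) * width / 8
```

For example, a Gen 1 x16 link delivers about 4 GB/s each way, which is why the move from 8b/10b to 128b/130b at Gen 3 was as important as the raw rate increase.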
In addition to these interfaces, the storage world (primarily disk drives until recently) has seen development of many families of interfaces specifically for connecting its devices. Some of these are used internally to a computer, while some apply to external devices. Some of these standards were enabled by the integration of the drive controller into the drive package itself; ultimately, their development was due to a desire for lower cost and higher performance.
These standards include the Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), Fibre Channel, ATA, and SCSI (Small Computer System Interface), which were published initially by various companies or industry groups. ATA has been used primarily in the consumer computer arena, while the more robust (and expensive) SCSI dominates in commercial computing. The parallel forms of ATA and SCSI are multi-device buses, while their serial successors are point to point. Some of these storage standards, e.g., Fibre Channel, SAS, and SCSI, were later adopted by the American National Standards Institute (ANSI) through its accredited committee, which became the International Committee for Information Technology Standards (INCITS). Fibre Channel is somewhat of a special case, in that it could be considered both a storage and a networking standard due to its long-distance capabilities. A partial list of these interfaces is included in Table 2.
There are many variants of the SCSI interface such as Fast, Wide, Fast Wide, Ultra, LVD, and others, each with different bus widths and speeds. Devices of one data width can be mixed with others with a different data width. Some implementations of parallel SCSI use single-ended signaling while others are differential, some of the latter use High Voltage Differential Signaling (HVDS), while others use Low Voltage (LVDS), etc.
As time and technology progressed, the parallel forms of ATA and SCSI were supplanted by serial versions, also listed in Table 2. These continue in wide usage today and provide significantly higher throughput than their parallel predecessors. Compliance with these standards avoids problems due to incompatible electrical interfaces, and it facilitates the use of multiple vendors’ disk drives with specification-compliant systems. The serial interfaces also provide lower cost, as well as mechanical advantages including less airflow blockage inside the computer cabinet, since the connecting cables are much narrower than the parallel cables they replace. SAS is not only more robust than S-ATA but also faster, being full duplex rather than half duplex like S-ATA.
A number of other interfaces were developed to supplant the traditional serial and parallel I/O interfaces used to attach external devices such as printers, communications devices, and displays. A number of these are also used in non-computer consumer devices such as digital televisions, audio systems, cameras, and gaming systems. In general, new standards are developed in response to a need for better performance: higher bit rate, higher resolution, more colors, etc. than the existing standard allows. Some standards were developed as proprietary interfaces by one company and then released for general usage. One example of this is FireWire, which was later adopted by the IEEE as its 1394 standard. The most ubiquitous serial interface of these at this writing is the Universal Serial Bus (USB). Table 3 lists some of those interfaces.
The standards specify the electrical interfaces (signal levels, bit rates, impedances, etc.) as well as the connectors and cables, to ensure mechanical compatibility. As with disk drives, the use of standard-compliant interfaces facilitates interoperability between specification-compliant devices. There is another whole family of I/O interfaces that is used for computer and video displays. Table 4 lists a number of those interfaces. Television has its own set of standards including analog and digital TV, which are beyond the scope of this article. Cell phones? Sure, there are standards for them too. However, we will not discuss them here.
A major shift in the networking world occurred when the first networking switch ICs were introduced in the 1990s. This followed the digitization of the public telephone network and the creation of the public switched telephone network (PSTN). Prior to that, networks had been constructed of shared media, on which only one device could be communicating at a given time. Simultaneous requests to send were managed by a collision detection scheme (such as CSMA/CD, used in Ethernet) or by passing a “token” (IBM’s Token Ring or Token Bus).
With this new switch technology, all packetized digital data, whether originating as voice, music, video, or the contents of computer memory, could be handled in the same manner, as serial bit streams. The switch IC architectures evolved to include “nonblocking” types, so multiple bit streams from different sources could be transferred simultaneously. This, coupled with the increased density, speed, and functionality and the reduced power that accompany ever-smaller transistor feature sizes, drove the link speed capabilities of network interfaces rapidly upward, from 10 Mbps in 1973 to 56 Gb/s per lane at this writing.
These trends have enabled whole new network architectures and accompanying communications standards, some of which are listed in Table 5. InfiniBand™,13 for example, was initially developed for large supercomputer clusters, which require low latency. A sample topology of a switched system is shown in Figure 2.
There are a number of parameters that identify similarities or distinguish between network standards. Some of the criteria for selecting a standard are the desired data rate, latency (packet processing delay), acceptable error rate, tolerance to dropped packets, and maximum length of links, among others. Other considerations include the physical medium used for data transport (copper wire or fiber-optic cable) and the availability of infrastructure hardware such as switches. Detailed discussions of these are well beyond the scope of this article. However, some of the key technical differences will be summarized and discussed to provide the reader with a basic understanding.
As network speeds have increased, it has become increasingly difficult to maintain acceptably low link error rates without technical innovation. Some of the techniques involved include equalization, to compensate for high-frequency channel loss and reflections, and noise cancellation or mitigation. Various standards have adopted one or more of these techniques as needed, and similar techniques may be used across multiple standards but with different implementation details. For instance, two standards might require the same type of equalization circuitry but different equalizer lengths. In addition, the implementation details are not always specified in great depth, to allow for design innovation. Some of these details are discussed in the following sections.
Latency and Error Correction
Error correction is often used at high data rates to correct errors introduced into the data stream by a low signal-to-noise ratio. Reed-Solomon error correction is a commonly used method. Unfortunately, error correction also introduces latency into the channel and can therefore be undesirable in High Performance Computing (HPC) environments.
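To illustrate the principle behind forward error correction, the Python sketch below implements a Hamming(7,4) code. This is far simpler than the Reed-Solomon codes used in high-speed standards, but it shows the same idea: redundant parity bits let the receiver locate and flip a corrupted bit without retransmission.

```python
# Hamming(7,4): protect 4 data bits with 3 parity bits; corrects any
# single-bit error. Codeword layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4.
def hamming_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(codeword):
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # nonzero value = 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error in place
    return [c[2], c[4], c[5], c[6]]  # extract the data bits
```

The extra parity bits and syndrome computation are exactly where the latency penalty mentioned above comes from: the receiver cannot release a block until the whole codeword has arrived and been checked.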
Scrambling or encoding of individual symbols and/or data blocks is also often used at high data rates, to prevent problems due to long runs of logical ones or zeroes and the resulting concentration of the energy spectrum. Some typical encoding methods are 8b/10b and 64b/66b.
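The self-synchronizing scrambler used by 64b/66b coding, with polynomial x^58 + x^39 + 1, can be modeled bit-serially in Python. This is a sketch for illustration; real transceivers implement the same recurrence on whole words in parallel.

```python
MASK58 = (1 << 58) - 1  # 58-bit shift-register state

def scramble(bits, state=0):
    """Self-synchronizing scrambler, polynomial x^58 + x^39 + 1:
    each output bit is the input XORed with the register taps, and the
    *scrambled* bit is shifted back into the register."""
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)  # taps at delays 39, 58
        out.append(s)
        state = ((state << 1) | s) & MASK58
    return out, state

def descramble(bits, state=0):
    """Inverse operation; because the received (scrambled) bit feeds the
    register, the descrambler resynchronizes by itself after 58 bits."""
    out = []
    for s in bits:
        out.append(s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1))
        state = ((state << 1) | s) & MASK58
    return out, state
```

Because both ends shift in the scrambled bit stream, a descrambler recovers the data even without knowing the transmitter's starting state, which is what makes the scheme self-synchronizing.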
Traditional computer interfaces have used non-return-to-zero (NRZ) signaling, in which only two signal voltages are used, corresponding to logical “1” and “0”; this is also known as PAM-2 (2-level Pulse Amplitude Modulation). Due to the severe high-frequency losses encountered at speeds of 25 Gb/s/lane and higher, many standards are moving to multi-level signaling, most prominently PAM-4. With PAM-4 signaling, four distinct voltage levels are allowed, each symbol carries two bits, and transitions are allowed from any of the four levels to any other. Because the symbol rate is half the bit rate, the majority of the signal energy is contained within half the bandwidth of an NRZ signal at the same data rate, making PAM-4 more tolerant of high-frequency losses in the channel. However, since the levels are closer together in voltage than in NRZ, PAM-4 is more prone to crosstalk and other noise.
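The two-bits-per-symbol property is easy to see in a minimal Python sketch of PAM-4 symbol mapping. The Gray-coded level assignment shown is one common choice; the actual mapping and nominal voltages vary between standards.

```python
# Gray-coded mapping of bit pairs to the four PAM-4 levels: adjacent
# levels differ by only one bit, so a small noise-induced level error
# corrupts only one bit.
BITS_TO_LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
LEVEL_TO_BITS = {v: k for k, v in BITS_TO_LEVEL.items()}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM-4 symbols (2 bits/symbol)."""
    assert len(bits) % 2 == 0
    return [BITS_TO_LEVEL[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    """Recover the original bit sequence from a list of symbol levels."""
    out = []
    for level in levels:
        out.extend(LEVEL_TO_BITS[level])
    return out
```

Halving the number of symbols on the wire is precisely what halves the required bandwidth relative to NRZ at the same bit rate.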
Equalization can be in many forms, but the most common are digital filters in the transmitter and receiver and continuous time linear (analog) equalization in the receiver. Transmit equalization is typically in the form of a feed forward equalizer (FFE), while receivers may employ a FFE, a continuous time linear equalizer (CTLE, a peaking amplifier), a decision feedback equalizer (DFE), or some combination of them. Each type has its strengths, and is able to compensate for different types of losses in the channel, which is the transmission path for the signal from the transmitter to the receiver. This may consist of a transmitter module package, printed circuit board wiring, connectors, cables (likely a combination of several of them), and the receiver module package.
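At its core, the feed-forward equalizer described above is a symbol-spaced FIR filter. The following Python sketch is illustrative only; real FFEs operate on analog or sampled waveforms, with tap counts and ranges constrained by the relevant specification.

```python
def ffe(samples, taps):
    """Feed-forward equalizer as a symbol-spaced FIR filter:
    y[n] = sum_j taps[j] * x[n - j]. The largest ("main cursor") tap
    passes the signal; the smaller taps subtract residual inter-symbol
    interference from neighboring symbols."""
    n = len(taps)
    padded = [0.0] * (n - 1) + list(samples)  # zero history before the burst
    return [sum(taps[j] * padded[i + n - 1 - j] for j in range(n))
            for i in range(len(samples))]
```

Feeding the filter an isolated pulse shows the equalizer's shaping directly: the output is simply the tap vector itself, which is why tap weights are often read as the link's corrected pulse response.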
Where equalization is required, the relevant specification may dictate not only the number of equalizer taps, but also the range of allowable values of those taps, the number of allowable combinations, and other details. In the receiver, the required number of taps may be a minimum rather than a fixed value, to allow for innovation and differentiation by designers. The CTLE gain range and/or values may be specified, along with the frequency of the gain peak. A more detailed discussion of these topics may be covered in a future article.
Serial link standards started with an emphasis on time domain parameters such as impedance, skew, and time domain crosstalk, in addition to frequency domain insertion loss. As time went on and the channel impairments became more critical, there was a movement towards more emphasis on frequency domain quantities such as insertion loss deviation (ILD) and insertion loss to crosstalk ratio (ICR), in addition to insertion and return loss. In spite of that, there were some situations in which links met the specification requirements but still failed in operation.
In an attempt to more thoroughly capture the effects of impairments and interactions between parameters such as insertion loss and crosstalk, some recent standards have adopted additional conformance tests such as channel operating margin (COM) and effective return loss (ERL).
Table 6 lists a few interconnect standards and their characteristics relative to these various features. As one can see, the requirements vary significantly between standards, and even within a given standards family as transmission speeds increase.
Standards Sponsors and Development
Some standards are developed by government or quasi-governmental organizations. These include the ISO, the International Electrotechnical Commission (IEC), the European Committee for Electrotechnical Standardization (CENELEC), ETSI, the Canadian Standards Association (CSA), Underwriters’ Laboratories (UL), the International Telecommunication Union (ITU), and ANSI. Other standards are developed by technical, industry, or marketing organizations such as EIA/ECIA, JEDEC, and IEEE.
In some cases, the electronics industry has chosen not to wait until the IEEE gets around to adopting a specific standard. Instead, consortia, SIGs, or industry trade associations are formed to develop and publish a standard. Some examples of these organizations are the Fibre Channel Industry Association, the PCI-SIG, the InfiniBand Trade Association, the USB Implementers’ Forum, the SCSI Trade Association, the HDMI Implementers’ Forum, the Optical Internetworking Forum (OIF), and the Small Form Factor Committee (SFF). Individual companies also may develop a standard, and then release it to the industry to promote adoption by IEEE or others.
In order to facilitate interoperability between devices, it is not only important to ensure electrical compatibility (signaling levels, bit rates, etc.), but it is also essential to use mechanically compatible interfaces when connecting printed circuit boards together, or cables to boards. Connectors are used for that purpose. There are many connector manufacturers, all of whom claim to have the best connector for a given application.
The adoption of connector standards ensures that the mating interfaces and mechanical features are compatible from one manufacturer to another for the same application. Beyond that, some connectors are used for multiple applications, and therefore must be distinguished by some special features, such as mechanical keys or EEPROMs built into the connector shell that can be read electronically. These features are typically defined in the relevant interface standard. Table 7 lists some common connector standards and their applications.
Note that there are also variants of the SFP and QSFP connectors being manufactured and used in various copper and optical applications. These variants, which include SFP-DD, OSFP, and QSFP-DD, provide higher I/O panel port density, better cooling for high-power applications, etc.
Test and Environmental Standards
You thought that was enough standards, right? There are plenty more. Specific countries or trading blocs (e.g., China, the IEC in Europe, the ECIA in the U.S.) have their own standards, some of which have different requirements for the same products covered in others’ standards. There are standards that define how to calibrate test equipment; how to test components, systems, and networks; and materials and environmental requirements. Without them, there would be little confidence in test results, because one lab’s procedures, settings, and methods might differ from another’s. The individual interface and network standards typically define their own test parameters, but the validity of results depends on these base assumptions as well. A few of the relevant standards are listed in Table 8.
This article provides only a minuscule snapshot of the standards that exist. If your favorite standard is not listed here, please do not take offense. The bottom line is this: standards promote compatibility and interoperability. In the electronic world, they ensure that systems will “play well together” and that test results from different sources can be trusted.
One might ask why we need so many standards, or whether there will ever be consolidation of standards. This author doubts that significant consolidation will occur. Certainly some standards have faded away, and others will, as they are replaced by new ones that offer better performance or function. Differences between standards often arise out of a desire to eke out the maximum performance or reliability for a specific application, which sometimes precludes commonality. Adoption of a new standard by producers of systems and devices eventually decreases usage of the one(s) that came before. Meanwhile, some of us ride the old technology horse as long as it continues to do the job we need to accomplish. Maybe it is nostalgia, who knows; remember vinyl records and film cameras? Some things never go away.
The author would like to thank Cristian Filip of Mentor Graphics for the information on equalizer types and usage in various serial links. The other information in this article is believed by the author to be accurate at the time of writing. However, given the obviously complex nature of the subject, some errors may have been committed.
1. F. M. Fisher, J. W. McKie, and R. B. Mancke, “IBM and the U.S. Data Processing Industry: An Economic History,” Praeger, p. 37, https://en.wikipedia.
2. IBM publication GC20-1667, www.bitsavers.org/pdf/ibm/360/GC20-1667-1_intro360arch.pdf.
3. C. E. Spurgeon, “Ethernet: The Definitive Guide,” O’Reilly Media, Inc., February 2000, www.oreilly.com/library/view/ethernet-the-definitive/1565926609/ch01.html.
5. K. Shaw, “The OSI Model Explained: How To Understand (And Remember) The 7 Layer Network Model,” Network World from IDG, October 22, 2018,
6. Wikipedia, “History, Conventional PCI,” https://en.wikipedia.org/wiki/Conventional_PCI#History.
7. P. Kamau, “Types of Computer Buses,” TurboFuture, January 12, 2019, https://turbofuture.com/computers/buses.
8. Wikipedia, “Main Buses, List of Interface Bit Rates,” https://en.wikipedia.org/wiki/List_of_interface_bit_rates#Main_buses.
9. Wikipedia, “PCI-X,” https://en.wikipedia.org/wiki/PCI-X.
10. Wikipedia, “USB,” https://en.wikipedia.org/wiki/USB.
11. Wikipedia, “Super VGA,” https://en.wikipedia.org/wiki/Super_VGA.
12. “Digital Visual Interface, Revision 1.0,” Digital Display Working Group, April 2, 1999, https://web.archive.org/web/20120813201146/http:/www.ddwg.org/lib/dvi_10.pdf.
13. InfiniBand is a trademark of the InfiniBand Trade Association.
14. Length dependent.