Network Interface Cards: A Brief History

Feb. 2, 2017

In the embedded market, there has always been a need for extensive Ethernet connectivity to maximize data communications among the different single board computers (SBCs), digital signal processors, and other elements in a system. The connections are typically made either across a backplane via an Ethernet switch, or through mezzanine cards installed on single board computers or carrier cards to provide external communications to outside networks.

Twenty years ago, SBCs typically offered only one or two Ethernet connections, which limited the number of different networks these products could connect to. Once the PMC (PCI Mezzanine Card) concept arrived, however, it gave customers an easy path to increase the number of network connections via a PMC-based Network Interface Card, or NIC.

Vendors began providing products with one, two, or even four ports on a single card, and their popularity grew, with significant volumes of products being sold. Initial NICs offered only a choice of 10/100TX (copper) or 100FX (fiber) for front panel connections, and 10/100TX for rear connections. The size of the physical connectors for 100FX fiber (SC or ST) limited the number of front panel ports to two, while up to four 10/100TX connections could be provided for front or rear I/O.

A substantial number of operating systems were in use in the embedded industry (VxWorks, LynxOS, Windows NT, HP-UX, QNX), and each one required a special driver to support the Ethernet controller being used. This created a need for software engineers with expertise in Ethernet driver development.

Significant advancements

Over the years, significant advancements in Ethernet technology and performance led to PMCs and SBCs supporting Gigabit speeds over both copper and fiber connections. The physical connectors for fiber also shrank to the LC form factor, and suddenly four fiber Ethernet ports with front I/O could fit on a single PMC.

The Ethernet controller silicon also saw major changes, incorporating up to two controllers with integrated PHYs in one chip, reducing the number of components needed for a design and saving significant board space and power. The one limitation that appeared with Gigabit technology on PMCs was that only two rear I/O connections could now be supported, due to the limited number of pins available on the PMC connector to route the signals.

Easier to incorporate

On the software side, operating systems began to incorporate native driver support for some of the more popular Ethernet controllers on the market, such as the Intel 8254X family. This reduced the need to build special drivers for a given operating system, making it easier to incorporate the technology into an overall system design.

With the more recent development of the XMC (Switched Mezzanine Card) standard and support for XMCs on SBCs and carrier cards, customers can now take advantage of Ethernet performance up to 10Gbps. Initial 10Gbps products supported only fiber connections, but with the growing popularity of 10Gbps over copper, boards are now available offering 10GBASE-T connectivity with front or rear I/O, along with a PCI Express interface. In addition, the greater number of pins available on XMC connectors makes four Gigabit ports for rear I/O achievable.

Increasing network connections

PMC and XMC products remain very popular today, even though the number of Ethernet ports on many SBCs and carrier cards has increased, and many customers continue to look for ways to add network connections for a given application. There are, of course, cases where the bottleneck between the mezzanine and the processor might limit the raw total throughput of the Ethernet ports, but that relationship is not a simple one: some applications need multiple 10GbE ports not purely for throughput, but for reduced network latency. Future XMCs may support Ethernet speeds up to 40Gbps, as numerous embedded products on the market today already support this level of performance and connectivity. This would be a natural evolution of the XMC NIC concept, providing a path to the larger number of network connections a customer may require. The Ethernet controller silicon exists today, so it's just a matter of time before these types of products hit the market.

At Abaco Systems, we've been designing and offering Ethernet NICs on PMCs and XMCs for close to 20 years, and our current NICs offer a range of Ethernet performance levels and media types (copper or fiber). We also have extensive hardware and software expertise in Ethernet NIC design, which we have been able to leverage to help customers solve their unique problems. Check out what's available from Abaco.
