Choices, choices, choices

April 1, 2003
Much like a presidential primary for the party that is out of office, the race to win industrywide acceptance for a switched-interconnect standard has many candidates with different personalities and different goals. And like candidates in an election many of the proposed interconnects are running more on promise than product.

By John McHale

The switched-interconnect fabric race has begun, yet it will be some time before a winner emerges, and even longer before the military backs one or another.

The military, just as in a presidential election, patiently sits on the sidelines waiting for a winner. Once that winner is chosen, military designers will do their best to accommodate the accepted and most efficient interconnect fabric just as the Army, Navy, Air Force, and Marine Corps strive to work with a new president.

Switched-network fabrics are extremely fast serial data interconnects that offer to replace bus-based I/O architectures, such as VME and PCI. Relative to the parallel databuses they would replace, the benefits of switched fabrics include higher performance, reliability, availability, and scalability, as well as the ability to create modular networks of servers and shared I/O devices.

The need to increase I/O speeds in computing is causing single-board computer designers to embrace switched-interconnect technology. The speed revolution, still in its infancy, has many participants and it will be some time before the dust clears and high-speed serial standards are recognized — and even longer before they are deployed in military systems.

The new interconnect speeds have the potential to provide dramatic enhancements to radar and signals-intelligence systems. Still, it is very early in the game.

There are several compelling reasons for the interest in switched-fabric technology, says Ray Alderman, executive director of the VME International Trade Association in Fountain Hills, Ariz.:

  • processor speeds are outperforming I/O speeds;
  • the need to work over greater distances; and
  • scalability.

Conventional databuses are not designed to work over substantial distances, Alderman points out. Plus, "You can't get scalability on a bus," he says.

Which switched-fabric technology will receive the blessing of military systems designers is anyone's guess today, experts say. "It's kind of a wait-and-see attitude with the military," says Jeff Harris, director of research and systems architecture at Motorola Computer Group in Tempe, Ariz. "When we talk to customers and we ask them to pick a fabric, they say, 'oh no, we don't like that.' The military is still waiting to take a leadership position on this," Harris adds.

The military's choice will be felt once designers at major programs such as the future F-35 Joint Strike Fighter or U.S. Air Force F/A-22 Raptor strike fighter choose an interconnect. The clout that goes with being chosen for a major military program would create a domino effect among other military designers, influencing them to make the same choice as the prime contractor on the large program.

The economy is also a factor in the lack of a clear winner in the race, Harris points out. "Many companies, big and small, are going bankrupt, and businesses are hesitant to take a risk on a new technology," he explains.

However, industry experts see the field of competitors starting to shrink.

"The switched-fabric wars have really just started," Alderman says. In terms of the military commercial-off-the-shelf, or COTS, market, the four main contenders are Infiniband, Rapid I/O, StarFabric, and PCI Express, he says.

Gigabit Ethernet and Fibre Channel have a large installed base, but do not offer the performance advantages that the four contenders above promise.


StarFabric

StarFabric, from experts at StarGen in Marlborough, Mass., is the only switched fabric currently shipping on a device used in a military application.

Engineers at Dy4 Systems in Kanata, Ontario, have designed a switch-fabric interconnect PCI mezzanine card (PMC) known as StarLink that targets multi-computer architectures for radar, sonar, and image processing, says Duncan Young, director of marketing at Dy4.

The StarLink device is deployed in a towed array sonar application in Europe that is used to detect underwater mines, Young says. The digital signal processors in the sonar take advantage of StarFabric's relatively high I/O speeds, which helps the system filter out insignificant noise that might distract system operators from actual mines, he adds.

Young declined to discuss the customer in detail due to contractual obligations.

StarFabric is a scalable and universal switch fabric for communications systems engineers who design advanced data, voice, and video networks, StarGen officials say. It enables designers to build many different combinations of scalable platforms on a common architecture, from a few endpoints to thousands of endpoints, while providing hundreds of gigabits per second of switching capacity, company officials say.

Leaders of Radstone Technology in Towcester, England, are launching a range of rugged products based on StarGen's StarFabric technology for their company's quad PowerPC digital signal processor (DSP) family, the G4DSP. The company's first StarFabric product, the PMC-StarLite, is a PMC aimed at adapting the G4DSP's architecture across several different board sets. The module will have four externally available StarFabric ports.

"StarFabric switching technology provides a wealth of excellent features such as multi-casting, quality-of-service functionality, and high-availability features, making it an ideal choice for inter-board interconnectivity," says Stuart Heptonstall, Radstone's DSP and analog I/O product manager. "StarFabric benefits from being incorporated in the PICMG 2.17 backplane standard and is a zero-protocol technology. It maps straight into PCI address space and inter-board transfers are implemented in the same way as inter-node transfers on the G4DSP," he says.

This StarFabric PCI mezzanine card from Dy4 Systems is deployed in a towed array sonar application in Europe to detect underwater mines.


"Radstone sees a transition from a shared bus, such as PCI, to switched fabric. Quite recently there were 40 or so viable technologies in the embedded market space, and that's now settled down to just four or six. The main ones are Rapid I/O, Infiniband, StarFabric, and PCI Express," Heptonstall says.

For more information on StarFabric contact StarGen on the World Wide Web at


InfiniBand

While StarFabric and other switched fabrics work primarily at the chip-to-chip level on boards and inside enclosures, InfiniBand does that and also connects servers and workstations; in fact, supporters say it may one day connect the electronics on U.S. Navy warships.

InfiniBand is a serial, point-to-point interconnect that uses a 2.5-gigabit-per-second wire speed connection with one-, four-, or twelve-wide link widths. The technology supports both copper and optical-fiber cabling. The architecture offers advantages in scalability, flexibility, reliability, low latency, built-in security, and cost savings, experts say. The technology provides faster data rates than bus technology, and potentially could move data as fast as 10 gigabits per second.
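As a rough sketch of what those link widths mean in practice, the following assumes the 2.5-gigabit-per-second signaling rate above plus InfiniBand's 8b/10b line coding (10 signal bits carry 8 data bits); the numbers are back-of-the-envelope arithmetic, not vendor figures:

```python
# Sketch: raw vs. effective InfiniBand bandwidth per link width.
# Assumes the 2.5 Gbps per-lane signaling rate and 1x/4x/12x widths
# described in the text, plus 8b/10b line coding.

SIGNAL_RATE_GBPS = 2.5        # per-lane wire speed
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding overhead

for width in (1, 4, 12):
    raw = SIGNAL_RATE_GBPS * width
    effective = raw * ENCODING_EFFICIENCY
    print(f"{width:2d}x link: {raw:5.1f} Gbps raw, {effective:5.1f} Gbps data")
```

The 4x case yields the 10-gigabit-per-second raw rate mentioned above; the 12x case shows the headroom wider links provide.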

"A switched, serial I/O technology such as InfiniBand provides the performance required for networking tomorrow's avionics platforms together today," says Kent English, research and development engineer at the Boeing Phantom Works in Seal Beach, Calif. "InfiniBand is a powerful and efficient technology leveraging the best of existing technologies while incorporating feature-rich capabilities such as quality of service, security, and power management. Collaborating technologies such as VME form factor and InfiniBand communication definitely has a place in the mission-critical real-time embedded market," he says.

"Infiniband is fabulous for clustering processors," and its supporters are ahead of Rapid I/O's in producing silicon, Alderman says.

"InfiniBand should be well suited for embedded computing," says Eric Gulliksen, embedded hardware program director and analyst for Venture Development Corp. in Natick, Mass. Gulliksen points out the proposed collaboration under the umbrella of the InfiniBand Trade Association (IBTA) and the VMEbus International Trade Association (VITA), which he says "is timely as we believe that, in the fairly near future, computing will become dominated by fabrics and high-speed interconnects. The shared perspectives and skills embodied in these two groups will be extremely valuable in guiding technological development and driving acceptance in the marketplace."

The latest news involving InfiniBand is an effort by Mellanox Technologies in Santa Clara, Calif., SBS Technologies in Albuquerque, N.M., and SKY Computers in Chelmsford, Mass., to establish the InfiniBand architecture as the premier interconnect fabric in embedded computing. The co-sponsoring companies announced they would lead a newly established Embedded InfiniBand Subgroup within the IBTA. Officials of these companies also announced a special interest group for embedded InfiniBand within VITA. SBS Technologies and SKY Computers are co-chairs for the IBTA subgroup. Mellanox Technologies will have representation on this subgroup and is a member of the IBTA steering committee.

SBS engineers have also designed the EIS-4008-CU data communications switch for InfiniBand applications.

"Our OEM customers, particularly those designing data center applications, increasingly require products that offer higher bandwidth, high availability and reliability, as well as scalability," says Clarence Peckham, president of the SBS Technologies Commercial and Government Groups. "EIS-4008-CU, the first member of our growing family of InfiniBand products, delivers true 10 gigabit per second transport capabilities and an abundant feature set at a reasonable price point."

The switch has redundant, hot-swappable supplies and uses a non-blocking internal switch architecture that supports cut-through, and store and forward switch algorithms, SBS officials say. The switch supports unicast and multicast packet types, as well as a sleep/wake from console function via an out-of-band message, company officials say.
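The tradeoff between the two forwarding modes the switch supports can be illustrated with a toy latency model; the port rate and header size below are illustrative assumptions, not SBS figures:

```python
# Toy latency model contrasting store-and-forward and cut-through switching.
# The data-path rate and header size are illustrative assumptions.

PORT_RATE_BPS = 10e9   # assumed 10-gigabit-per-second data path
HEADER_BITS = 20 * 8   # assumed header size needed for a routing decision

def store_and_forward_latency(packet_bits):
    # The whole packet is buffered before forwarding begins.
    return packet_bits / PORT_RATE_BPS

def cut_through_latency(packet_bits):
    # Forwarding starts as soon as the header has arrived,
    # regardless of total packet length.
    return HEADER_BITS / PORT_RATE_BPS

for size_bytes in (256, 2048):
    bits = size_bytes * 8
    print(f"{size_bytes:4d}-byte packet: "
          f"store-and-forward {store_and_forward_latency(bits) * 1e9:.0f} ns, "
          f"cut-through {cut_through_latency(bits) * 1e9:.0f} ns")
```

Cut-through keeps latency constant regardless of packet size, while store-and-forward buffers the full packet, which lets a switch verify it before forwarding; supporting both lets integrators choose per application.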

Each bi-directional 4x port supports bit rates as fast as 10 gigabits per second in each direction, enabling a 16-gigabit-per-second full-duplex data rate per port. Based on the RedSwitch HDMP-2840 8-port 4x InfiniBand switch fabric chip, the switching element provides an overall aggregate bandwidth of 160 gigabits per second. Integrators and users can access redundant circuits, hot-swappable power supplies, and fans from the front of the chassis, and can configure the fans for front-to-back or back-to-front airflow.
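One way to reconcile the per-port and aggregate figures quoted above is the usual 8b/10b arithmetic for 4x links; this is a back-of-the-envelope check under that assumption, not an SBS specification:

```python
# Sketch reconciling the quoted bandwidth figures, assuming 8b/10b coding
# on 4x links (4 lanes x 2.5 Gbps = 10 Gbps raw per direction).

PORTS = 8
RAW_PER_DIRECTION_GBPS = 10                                # 4 lanes x 2.5 Gbps
DATA_PER_DIRECTION_GBPS = RAW_PER_DIRECTION_GBPS * 8 / 10  # 8b/10b payload rate

# 16 Gbps full-duplex data per port, as quoted.
full_duplex_data = 2 * DATA_PER_DIRECTION_GBPS

# 160 Gbps aggregate across 8 ports, counting raw rate in both directions.
aggregate_raw = PORTS * 2 * RAW_PER_DIRECTION_GBPS

print(full_duplex_data, aggregate_raw)
```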

Depending on how often users must reconfigure their equipment, they can set up their data-center equipment racks in one of two ways: connector wires facing toward the front, or facing toward the back, SBS officials say. Therefore, SBS engineers designed the EIS-4008-CU with a unique fan system that allows users to configure the switch to face either direction with equal ease, company officials claim.

The SBS switch provides an on-board microprocessor management subsystem and three management agents. Based on the Motorola MPC855T PowerPC microprocessor, the management subsystem includes 64 megabytes of SDRAM, 64 megabytes of flash memory, a 10/100 megabit-per-second Ethernet port with RJ45 connector, and an RS-232 port with DB9 connector. To ensure interoperability, EIS-4008-CU is compatible with InfiniBand Architecture Specification Volume 1, Release 1.0.a and Volume 2, Release 1.0.a.

For more information on InfiniBand contact the InfiniBand Trade Association in Portland, Ore., on the World Wide Web at

Serial and parallel Rapid I/O

Serial RapidIO is a high-performance, low-pin-count, switched-fabric serial interconnect for applications such as digital signal processor farms and newly emerging serial backplane applications. Serial RapidIO borrows industry-standard signaling technology found in Fibre Channel, 10 Gigabit Ethernet XAUI interfaces, and InfiniBand, and includes a low-power transmission mode not found in other standards. It operates at 1.25, 2.5, and 3.125 gigabits per second, providing the bandwidth necessary for signal processors and backplane applications.

The serial specification defines one differential link in each direction between devices and support for ganging four links together for higher throughput applications. On the system level, designers can connect parallel and serial RapidIO devices through switches without using special bridging functions.
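Assuming the 8b/10b line coding that Serial RapidIO shares with XAUI and Fibre Channel, the three signaling rates and the four-link ganging option translate roughly as follows; this is a sketch of the arithmetic, not figures from the specification:

```python
# Sketch: effective data rate per Serial RapidIO link at the three
# signaling rates named in the text, single link and ganged x4.
# Assumes 8b/10b line coding.

CODING = 8 / 10  # 8b/10b payload fraction

for baud_gbps in (1.25, 2.5, 3.125):
    single = baud_gbps * CODING
    ganged = single * 4  # four links ganged for higher-throughput applications
    print(f"{baud_gbps:5.3f} Gbaud: {single:.2f} Gbps per link, "
          f"{ganged:.2f} Gbps ganged x4")
```

At the top rate, four ganged links deliver on the order of 10 gigabits per second of payload, which is where the overlap with InfiniBand that Alderman describes comes from.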

"Serial Rapid I/O will be ideal for embedded applications but everybody's wondering where's the silicon," Alderman says. It will also compete with Infiniband on the lower end, he adds.

The parallel RapidIO interconnect architecture is an electronic data communication standard for interconnecting chips on a circuit board, as well as for interconnecting circuit boards mounted in backplanes. The RapidIO interconnect provides high bus speeds that enable chip-to-chip and board-to-board communications at faster than 10 gigabits per second.

The first military implementations of RapidIO will involve radar systems, which need higher bandwidth than sonar, says Richard Jaenicke, director of marketing at Mercury Computer Systems in Chelmsford, Mass. RapidIO will find most of its success in embedded applications, while InfiniBand will have a niche in server applications, he adds.

"RapidIO has a standards-based structure that allows us to enhance or modify the specification without affecting the underlying technology," says Sam Fuller, president of the RapidIO Trade Association in San Francisco. "As a result, silicon developers working on serial chips have found they can recycle much of their RapidIO parallel interface designs, and system developers don't need to worry about the cost or complexity of adding special bridging functions to their products."

The latest news regarding Rapid I/O is Mercury's announcement of a high-performance, multiprocessor system called the ImpactRT 3100, which is based on the RapidIO switch fabric that uses software compatible with existing RACE++ applications.

"The ImpactRT 3100 system employs next-generation PowerPC G4+ processors for the compute power and RapidIO switches and interface chips for the communication," Jaenicke says. "The RapidIO switch fabric connects the compute nodes and I/O nodes both on-board and between boards in the system. This new system scales to 480 billion floating-point operations per second of processing and over 400 gigabits per second of communication bandwidth in one 6U CompactPCI chassis.

"This first RapidIO-based system is targeted at imaging applications in ground-benign environments where high compute density is required," Jaenicke continues. "Available with up to 40 fiber I/O interfaces running at 2.5 gigabits per second each, the ImpactRT 3100 can also handle extreme I/O requirements. The system is shipping to early-access customers this spring, with more general availability later this year. Subsequent RapidIO-based systems in VME and custom form factors will target rugged deployments for signal, image, and data processing applications."

For more information on parallel and serial RapidIO contact the RapidIO Trade Association on the World Wide Web at

PCI Express

The PCI Express architecture is a general-purpose serial I/O interconnect that seeks to provide a unifying standard for consolidating several different I/O solutions within a platform. For example, PCI Express can replace existing PCI, AGP, and core-logic interconnects. PCI Express provides an open specification designed from the start to address the varying requirements of several different market segments in the computing and communications industries, with bandwidth scalability of as much as 8 gigabytes per second. Future signaling improvements may provide even greater bandwidth headroom.
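The 8-gigabyte-per-second figure follows from per-lane arithmetic. This sketch assumes first-generation signaling of 2.5 gigatransfers per second per lane with 8b/10b coding, which yields 250 megabytes per second of data per lane per direction:

```python
# Sketch: how PCI Express bandwidth scales with lane count.
# Assumes first-generation signaling (2.5 GT/s per lane, 8b/10b coding).

GT_PER_LANE = 2.5e9  # transfers per second per lane
BYTES_PER_LANE_DIR = GT_PER_LANE * (8 / 10) / 8  # 250e6 bytes/s per direction

for lanes in (1, 4, 8, 16):
    per_dir_gb = lanes * BYTES_PER_LANE_DIR / 1e9
    both_dirs = 2 * per_dir_gb
    print(f"x{lanes:2d}: {per_dir_gb:.2f} GB/s per direction, "
          f"{both_dirs:.2f} GB/s aggregate")
```

A x16 link reaches 4 gigabytes per second in each direction, or 8 gigabytes per second counting both directions, matching the scalability figure above.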

Officials at PCI-SIG, the San Jose, Calif.-based Special Interest Group responsible for PCI Express Architecture, released candidate 1.0 PCI Express Bridge specification and candidate 1.0 Mini PCI Express Card specification for member review. The PCI Express Bridge specification enables PCI-SIG members to deliver PCI Express products using existing PCI technology, extending the investment in existing PCI technology while accelerating time to market for PCI Express solutions.

The MiniCard specification complements the PCI Express Card Electromechanical form factor and supports a variety of wired and wireless communication peripherals, meeting the requirements of the build-to-order and configure-to-order business model for mobile computers, PCI-SIG officials say. The specification provides an alternate solution to the existing Mini PCI form factor serving these applications.

"The PCI Express Bridge specification provides the industry a method to kick-start development plans while mitigating risk in technology transition," says Ajay Bhatt, chair of the PCI Express Technical Working Group. "With the delivery of the two specifications, the PCI-SIG is continuing on its promise."

For more information on PCI Express contact PCI-SIG on the World Wide Web at

VME switched-serial standard ratified

Leaders of the VMEbus International Trade Association (VITA) in Fountain Hills, Ariz., have ratified the VMEbus Switched Serial Standard (VXS), or VITA 41, which will provide original equipment manufacturers with as much as 50 times more bandwidth than the VME64 parallel bus on individual board-to-board transfers, for a total of as much as 900 times more aggregate bandwidth in a maximum VXS configuration.
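The 50x and 900x figures can be reproduced with back-of-the-envelope arithmetic; the 80-megabyte-per-second VME64 baseline (a commonly quoted peak rate) and the 18 concurrent point-to-point links assumed for a maximum configuration are this sketch's assumptions, not numbers taken from the standard:

```python
# Sketch of the arithmetic behind the "50x" and "900x" claims.
# The VME64 baseline and the concurrent-link count are assumptions.

VME64_MBPS = 80        # assumed VME64 peak throughput, megabytes per second
PER_LINK_RATIO = 50    # stated board-to-board improvement
CONCURRENT_LINKS = 18  # assumed simultaneous links in a maximum system

per_link_mbps = VME64_MBPS * PER_LINK_RATIO        # ~4,000 MB/s per link
aggregate_ratio = PER_LINK_RATIO * CONCURRENT_LINKS  # 900x aggregate

print(per_link_mbps, aggregate_ratio)
```

The key point is that the parallel bus is shared (one transfer at a time), while switched-serial links run concurrently, so the aggregate gain multiplies the per-link gain by the number of simultaneous links.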

Manufacturers can now extend the life of their relatively old VMEbus-based systems while increasing bandwidth and following an easy migration path from parallel bus to switched-serial fabrics, as both will coexist in a VMEbus system at the same time.

This means that manufacturers will be able to develop products incorporating switched-serial fabrics such as InfiniBand 4X, Serial RapidIO 4X, Fibre Channel, and 10 Gigabit Ethernet, while they continue to benefit from investments made over the years in VME.

"The great thing about VXS is that it takes the risk out of incorporating new technologies," says Ray Alderman, executive director of VITA. "OEMs can experiment with switched-serial fabrics at their own pace, before transitioning their systems and products from the parallel bus. And those who love the parallel bus win too, because the standard ensures the form factor will be here for many years to come as the switched-serial fabrics carry VME into the future."

"The VXS Standard will provide the military with an evolutionary path to switched fabrics, the next generation of embedded system architectures, while making the most of the existing expertise and investment the military has made in VME," says James Thompson, senior engineer, Commercial Technology Management Branch, Naval Surface Warfare Center, Crane Division.

Officials at the Motorola Computer Group in Tempe, Ariz., originally proposed the VXS standard as part of Motorola's VME Renaissance strategy to extend the life and improve the performance of VME.

The emergence of switched fabrics does not spell the end of VME and even PCI, says Jeff Harris, director of research and system architecture at Motorola Computer Group. VXS will enable military designers to retain reliability of VME and still access the fast I/O speeds of switched interconnects, Harris explains.

The old P1, P2, and P0 connectors do not have the bandwidth to accommodate the new switched fabrics like InfiniBand or RapidIO, he continues. They work with StarFabric because it operates at less than 1,000 megabits per second, Harris says.

The new standard would use the P-0 connector as an interface for the switched fabrics, Harris says.

VXS adds a switched-serial interconnect to VMEbus coincident with the VMEbus parallel bus; employs standard open technology for the switched-serial links; accommodates several different standard open technologies for the links, but not necessarily at the same time; maintains backward compatibility with the VMEbus ecosystem; and brings more DC power onto each VMEbus card.

New VITA 41 initiatives recently announced include increasing user I/O pins, rear I/O options, definition of rear transition module (RTM) options, and accommodation of conduction cooling, Harris says. Accommodating conduction cooling will require the existing power connector on the switch board to be changed to a new connector, he adds.

For more information on VXS contact the Motorola Computer Group and VITA on the World Wide Web at and

StarGen to develop StarXpress family of switches and bridges for PCI Express

Designers at StarGen in Marlborough, Mass., are announcing a new product line, StarXpress, which comprises bridges and switches based on PCI Express and PCI Express Advanced Switching (AS).

The product line, set for delivery beginning in 2004, is composed of StarXpress switches and StarXpress bridges and provides high-bandwidth serial switched-interconnect solutions for next-generation communication, storage, server, and embedded system designs, StarGen officials say.

The StarXpress switch family is based on a scalable core architecture, enabling rapid deployment of products with various port counts and lane widths to satisfy the requirements of applications ranging from motherboard implementations to multi-rack chassis-based systems, company officials say.

The StarXpress product line is for communication, storage, blade server, and embedded applications. StarXpress bridges will provide interfaces to PCI Express AS fabrics for general-purpose processors, network processors, and control processors, as well as digital signal processors, framers, and protocol-specific end-points.

StarXpress enables flexible system architectures from classic decentralized to emerging centralized architectures, StarGen officials say. Centralized architectures feature low-cost network interface line cards in conjunction with optimized processing blades that are easily upgraded as more processing power is required, company officials say.

StarGen is in production with the SG1010 StarFabric switch, a high-speed serial switch, and the SG2010 PCI-to-StarFabric bridge and is sampling the SG3010 TDM-to-StarFabric bridge. PCI Express-based products under the StarXpress name will be made available beginning in 2004.
