Switched fabrics: the next revolution in I/O speed
The need to increase I/O speeds in computing is causing single-board computer designers to embrace switched interconnect technology. The speed revolution, still in its infancy, has many participants, and it will be some time before the dust clears and serial standards are recognized — and even longer before they are deployed in military systems.
By John McHale
Military single-board computer designers, always looking for ways to increase speed and improve the overall performance of their products, are closely watching the progress of various switched fabric technologies, which promise drastic increases in I/O speed.
The new interconnect speeds have the potential to enhance the performance of current radar and signals intelligence systems dramatically. However, it is very early in the game and companies are waiting to see who comes out on top.
Market drivers such as AMD in Sunnyvale, Calif., Intel in Santa Clara, Calif., Mercury Computer Systems in Chelmsford, Mass., Sky Computers in Chelmsford, Mass., StarGen in Marlborough, Mass., and Cisco Systems in San Jose, Calif., are involved in various switched fabrics that serve applications ranging from the desktop to servers to embedded applications and military shipwide networks. Still, none of their switched fabrics has started to dominate the industry.
AMD and Intel are targeting the desktop with HyperTransport and Third Generation IO (3GIO) respectively; Mercury is the creator of the RapidIO fabric for embedded applications; Sky Computers is a major backer of InfiniBand; StarGen is the creator of the StarFabric for telecommunications and military applications; while Cisco Systems is a major provider of Gigabit and 10 Gigabit Ethernet.
Although there are many big names behind these fabrics, many companies and industry experts are still playing a wait-and-see game, hedging their bets by keeping a foot in each camp.
"The fabric situation is definitely a mess," says Ray Alderman, executive director of the VME International Trade Association in Scottsdale, Ariz. "Many companies are experiencing a lot of FUD — fear, uncertainty, and doubt."
There are many to choose from and no one is sure which fabrics ultimately will come out on top, Alderman adds.
"The reasons to move to switched fabric technology are because processor speeds are outperforming I/O speeds, because you need to work over greater distances and buses are not designed to do that, and because of scalability," Alderman continues. "You can't get scalability on a bus."
Unlike the shared medium of a bus architecture, switch fabrics are point-to-point, says Tim Miller, vice president of marketing at StarGen. Each end-point is connected to every other end-point through one or a series of switches. End-points can be considered 'bridges' to existing standard buses or components, he explains.
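Miller's description can be sketched in a few lines of Python. This is an illustrative model only — the class and method names are invented for the example, not any vendor's API — showing that a transfer touches only the source endpoint, the destination endpoint, and the switch between them:

```python
class Switch:
    """Central element of a switched fabric: forwards each transfer
    from one endpoint port to another, leaving all other ports idle."""
    def __init__(self):
        self.ports = {}                      # endpoint name -> endpoint

    def attach(self, endpoint):
        self.ports[endpoint.name] = endpoint

    def forward(self, src, dst, payload):
        # Only src, dst, and this switch participate in the transfer.
        return self.ports[dst].receive(src, payload)


class Endpoint:
    """An endpoint; in practice this might be a 'bridge' onto an
    existing standard bus such as PCI or VME."""
    def __init__(self, name, switch):
        self.name = name
        self.switch = switch
        switch.attach(self)

    def send(self, dst, payload):
        return self.switch.forward(self.name, dst, payload)

    def receive(self, src, payload):
        return f"{self.name} received {payload!r} from {src}"


fabric = Switch()
cpu = Endpoint("cpu", fabric)
dsp = Endpoint("dsp", fabric)
io_bridge = Endpoint("io", fabric)

# A cpu-to-dsp transfer never loads the io endpoint's link,
# unlike a shared bus where every device sees every signal.
print(cpu.send("dsp", "radar frame"))
```

On a shared bus the equivalent transfer would be broadcast on the common medium, which is why every added device loads the bus; here each link is private, which is the scalability advantage Alderman cites.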
The new interconnect technology uses switched architectures as opposed to bus architectures to decrease capacitance and increase I/O speed, says Joe Pavlat, president of the PCI Industrial Computers Manufacturing Group (PICMG) in Wakefield, Mass.
The difference between parallel and serial revolves around the size of the computing domain, Alderman says. If the domain is small, designers go with a parallel bus, but if the domain is large, designers go with a serial interconnect.
Switched fabrics represent "an architecture that interconnects sources of data and destinations via a dynamic switched element," Pavlat explains. However, switched fabrics are not tied to serial computing or parallel processing; they are applicable to both, he adds.
"If you want to go faster you eventually run into the gods of physics with bus architectures," Pavlat says. "Basically you're trying to drive a signal from 0 to 1 the fastest way possible. Normally you do that by reducing capacitance, but we've already got that as low as it can go. You can then try to drive more current through, but the more current you drive the more impractical it becomes. A third way is to reduce voltage. The technology has already dropped from 5 volts to 3.3 and now to 1; however, it cannot go much lower without creating noise problems and other issues."
Switched fabrics involve only one source, one destination, and the switch at any given time. This is the reason that they can reduce capacitance more effectively than buses, Pavlat explains. Bus architectures bog down because several different sources and destinations are trying to get a piece of the signal as it goes down the bus, he adds.
Switched architectures also are inherently more reliable than bus architectures, Pavlat says. If one bus circuit fails, it takes the whole bus with it, he continues. Yet a switched architecture isolates the fault and the switch works around it, Pavlat adds.
Matching I/O and processor speeds
For decades engineers worked to match processor speeds with I/O speeds, which culminated in the processor performance revolution of the 1990s, Alderman explains. At that time, engineers did not pay as much attention to improving I/O performance as they did to boosting processor speed. The result now is processors that are too fast for today's typical I/O technology, he says.
Switched fabric technologies are changing that, Alderman says. After the improved bandwidths that switched fabrics promise will come the inevitable onset of optical interconnects and optical computing. This emerging trend will, in 10 to 20 years, put the emphasis back on improving processor speeds, he explains.
Optical technology will take computing beyond serial fabrics such as InfiniBand and RapidIO, Alderman says. Right now the technology is on the box-to-box level; optical fibers instead of copper wires connect boxes, while some work is in progress on optical backplanes as well, he says.
Once optical computing becomes practical, processor speeds will need to catch up to I/O speeds, Pavlat says.
The military will most likely turn to switched Ethernet technology first, such as Gigabit Ethernet and 10 Gigabit Ethernet, Alderman says. "It's mostly a no-brainer," because the infrastructure is already there for it, he says.
Other than that the military will stick with VME and the VME extensions because of the broad installed base that VME technologies enjoy in military electronics; it is easier for the military not to change, Alderman says.
"Radar systems for military aircraft could benefit by connecting the various components of radar systems [such as sensors, processors, displays, and recorders] using switched fabrics which offer signal paths with less weight, higher speed, and better fault tolerance," says Rodger Hosking, vice president of marketing at Pentek in Upper Saddle River, N.J.
"Many military applications have been using 'proprietary standards' such as RaceWay, SkyChannel, and Myrinet for processor connectivity," says Ron Marcus, director of marketing at Synergy Microsystems in San Diego. "These technologies provide significant performance, but also make it difficult to develop heterogeneous systems, locking customers and integrators into single-vendor solutions (hardware, operating system, and connectivity software).
"Military projects needing the bandwidth improvements of a switched fabric architecture which would ordinarily have necessitated a single-vendor, proprietary solution can now turn to a relatively open-standard, supported by multiple vendors," Marcus continues. "This will provide significant cost reductions and better support in a more competitive market, both important concerns for [commercial-off-the-shelf] requirements."
What to expect
"Major processor vendors like Intel, IBM, Motorola, and AMD will have a strong influence by supporting, promoting, and implementing switched fabrics on new processor offerings and new peripheral chip sets," Hosking says.
Switched interconnects will coexist in hybrid chassis with existing technology for a long time before they dominate the market, StarGen's Miller says. Inside would be a bus or backplane, with a switched fabric connecting the chassis to other chassis, he explains.
"However, the main problem with switched fabrics is that they are not real-time," Alderman says. "They are a lot like the Unix operating system, which you can't make real-time either. Making them real-time is like making two parallel lines cross at infinity.
"Real-time is a spectrum and there are three levels — hard, soft, and mushy," Alderman continues. VME and VxWorks are hard real-time; in other words, they have response times in the 6-to-20-millisecond range, he says. A soft real-time example is CompactPCI running something other than Microsoft code, with response times in the 50-to-200-millisecond range, Alderman explains. "Mushy real-time, which includes anything with response times greater than 300 milliseconds, describes switched fabrics and Unix."
The real-time issue is why military systems designers may adopt switched fabrics slowly and are sticking with VME and the 2eSST proposals coming out of Motorola Computer Group in Tempe, Ariz., Alderman says. They can experiment with fabrics at the mezzanine level without making a major investment in one technology or another before the market clears up, he explains.
The biggest challenge with switched fabrics will involve software, Alderman says; hardware is only 10 percent of the problem, he adds. The majority of cost associated with any system today is software development, not to mention the expense of porting to existing software, Alderman says.
Software issues may give the StarGen StarFabric an advantage, Pavlat says. "They are taking software that is currently running on PCI and adapting it to their fabric."
StarFabric is a scalable and universal switch fabric for communications systems engineers who design advanced data, voice, and video networks, StarGen officials say. It enables designers to build any combination of scalable platforms on a common architecture, from a few endpoints to thousands of endpoints, while providing hundreds of gigabits per second of switching capacity, company officials say.
"StarFabric is using PCI as a starting point, but is not tied long-term to it," StarGen's Miller says. The key lies in StarFabric's ability to work with existing PCI technology; because StarFabric has adapted to existing PCI software, designers can reuse their old software when they move to a switched interconnect, Miller says.
The engineers who develop many other switched fabrics have yet to reach the silicon level — which is the real starting point — let alone reaching the stage where they can develop software, Miller says.
One of the keys to StarGen's ability to field products so quickly is the 100-percent backward compatibility of StarFabric with PCI, PICMG's Pavlat says. One of the few concerns with StarGen is the company's small size, which could make it an attractive acquisition target for larger companies.
StarGen's SG 1010 StarFabric Switch and SG 2010 StarFabric Bridge together provide communications equipment designers with high-speed, scalable, and reliable systems, while providing a migration path from current PCI and CompactPCI based architectures, StarGen officials say. The SG 1010 StarFabric Switch is a high-speed, cascadable serial switch that provides 30 gigabits per second switching capacity with six ports and is a component of the StarFabric open switch fabric architecture, StarGen officials say.
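The 30-gigabit figure is consistent with six full-duplex ports. The per-port rate below is an assumption for illustration — StarFabric links are commonly described as 2.5 gigabits per second in each direction — not a number stated in this article:

```python
# Back-of-envelope check of the SG 1010's quoted switching capacity.
# ASSUMPTION: each of the six ports carries 2.5 Gbit/s per direction.
ports = 6
gbps_per_direction = 2.5        # assumed per-port link rate
switching_capacity = ports * gbps_per_direction * 2   # full duplex
print(switching_capacity)       # 30.0 Gbit/s, matching the quoted figure
```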
The SG 2010 StarFabric PCI Bridge provides access to the switch fabric from existing PCI standard buses. These devices can combine in a variety of configurations to create anything from cost-effective, small-scale systems to large, room-scale equipment with hundreds of end points, company officials claim.
"Out of all the high-speed interconnect solutions being talked about today, the StarFabric technology appears to have the most complete roadmap," says Eric Gulliksen, an analyst at Venture Development Corp. in Natick, Mass. "By being able to deliver useable solutions to the market now, StarFabric should be well positioned to be implemented broadly in the CompactPCI environment and beyond."
StarFabric is being deployed in traditional communications applications, as well as in a wide variety of embedded applications; design-in plans are under way for communication access systems, digital convergence platforms, distributed computing applications, and PCI bus expansion, StarGen officials say. StarFabric includes support for loosely coupled, distributed computing with independent address domains, sophisticated message-passing capabilities, memory protection schemes, and advanced interrupt handling, company officials say.
Industry experts are developing the PICMG 2.17 StarFabric CompactPCI specification to standardize StarFabric technology at the system level within the current CompactPCI environment, StarGen officials say.
For more information on StarFabric contact StarGen on the World Wide Web at http://www.stargen.com.
While StarFabric and other switched fabrics work at a smaller scale, InfiniBand originally targeted servers and workstations; supporters say it may one day connect the electronics on U.S. Navy warships.
InfiniBand is a serial, point-to-point interconnect that uses a 2.5-gigabit-per-second wire speed connection with one, four, or twelve wire link widths, says Bob Hoenig, chief technical officer at Sky Computers in Chelmsford, Mass. The technology also supports copper and optical fiber specifications, he adds.
The architecture offers advantages in scalability, flexibility, reliability, lower latency, built-in security, and cost savings, Hoenig says. "The technology provides tremendously high data rates compared to bus technology," he continues. It potentially can move data as fast as 10 gigabits per second, Hoenig adds.
"Between chassis or between facilities, serial fabrics are much more appropriate, and InfiniBand or Switched Ethernet make the most sense," Pentek's Hosking says. "In these cases, the need to move from copper to optical can be justified to cover the distance, even though this incurs the penalty of higher power dissipation."
InfiniBand will most likely be the winner in the server market, VITA's Alderman declares. It is good technology and silicon has already been out for some time, he adds.
"InfiniBand was originally designed as a replacement for PCI," writes Steve Paavola of Sky Computers in a white paper entitled "Serial Interconnects for High Performance Computing." This switched fabric "provides performance, scalability, and reliability not possible for PCI. InfiniBand allows developers to easily build very large, computer-room-sized systems composed of hundreds or even thousands of smaller computer systems and supporting peripherals.
"As originally architected, InfiniBand was primarily cables connecting boxes," Paavola wrote. "However, the standards have evolved so that a backplane and a packaging definition is being developed as well. The base technology for InfiniBand is 2.5 GHz [low-voltage differential signaling (LVDS)] pairs — one transmit and one receive — using 8-bit/10-bit encoding to move packets over the connections. Higher bandwidth connections are possible by grouping four or 12 of these basic LVDS pairs together, for up to 3 gigabytes per second full-duplex connections. Crossbar switches are used to tie InfiniBand connections together."
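Paavola's lane arithmetic can be checked directly: at a 2.5-gigabit-per-second signaling rate, 8-bit/10-bit encoding leaves 8 payload bits per 10 line bits, and lanes gang in widths of 1, 4, or 12:

```python
wire_rate_gbps = 2.5            # signaling rate per LVDS pair
efficiency = 8 / 10             # 8-bit/10-bit encoding overhead
for lanes in (1, 4, 12):
    payload_gbps = wire_rate_gbps * efficiency * lanes
    print(f"{lanes:2d}x link: {payload_gbps:5.1f} Gbit/s "
          f"= {payload_gbps / 8} GB/s per direction")
# The 12x width yields 24 Gbit/s, or 3 GB/s, matching Paavola's
# "up to 3 gigabytes per second full-duplex" figure.
```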
InfiniBand would be an ideal way to connect electronics aboard a ship, Paavola says. It could connect radar and sonar subsystems, analog-to-digital converters, and other electronics on one InfiniBand fabric, he adds.
The InfiniBand fabric also could work with other switched interconnect technologies such as RapidIO, Paavola says. Because parallel RapidIO is implemented at the chip-to-chip level, it could fit right into an InfiniBand network, he explains.
Nevertheless, a Navy ship equipped with an InfiniBand fabric is many years away, Paavola says. It would start out in much smaller applications in the military such as a radar subsystem, he adds.
For example in a radar application "InfiniBand would be used to connect the system to the outside and to high-performance peripherals," Paavola writes in his white paper. "Often there is a requirement to log the input data, for example. InfiniBand could also be used to transmit the resulting image to a workstation for viewing, or another system for further processing."
Hoenig says he believes InfiniBand will be the fabric to emerge as the industry standard, based on its technological advances as well as its market share. More than 200 companies are members of the InfiniBand Trade Association and have a vested interest in seeing InfiniBand succeed, he says.
However, PICMG's Pavlat cautions InfiniBand has been around for five years and has yet to drive the market. Many of the companies in the trade association are involved with other switched fabrics such as RapidIO, Gigabit Ethernet, etc., he points out. They are doing the smart thing and hedging their bets, Pavlat says.
For more information on InfiniBand contact the InfiniBand Trade Association in Portland, Ore., on the World Wide Web at http://www.infinibandta.org.
Although RapidIO does not currently compete with InfiniBand, that may change soon, says Richard Jaenicke, director of product marketing for Mercury Computer Systems. RapidIO today is only a parallel, chip-to-chip switched interconnect, yet a serial RapidIO specification just released would place it in competition with InfiniBand, he adds.
The first military implementations of RapidIO will involve radar systems, which have a greater need for bandwidth than sonar systems do, Jaenicke says. RapidIO will find its greatest success in embedded applications, whereas InfiniBand will have a niche in the server market, he adds.
"For communication between processors on a multi-processor board, a strategy like RapidIO using short, parallel copper data connections may be the most efficient for speed, space, and power consumption," Pentek's Hosking says. "Since board power dissipation is already being pushed to new levels, this could be a significant factor. As you move off the board, fewer data lines are a plus, so serial interfaces like InfiniBand or serial RapidIO become more attractive. Copper is still the preferred medium."
The RapidIO interconnect architecture is an electronic data communication standard for interconnecting chips on a circuit board and for interconnecting circuit boards using a backplane, say officials at the RapidIO Trade Association in San Francisco. The RapidIO standard increases performance and provides a more robust interface for future networking products and high-performance embedded systems, they claim.
The RapidIO interconnect targets primarily the networking market. Unlike other contemporary computer-centric interconnects, the RapidIO technology addresses the networking industry's needs for software transparency, greater reliability, and higher bandwidth in an "in-the-box interconnect," trade association officials say. The RapidIO interconnect provides high bus speeds that allow chip-to-chip and board-to-board communications at performance levels scaling greater than ten gigabits per second, organization officials say.
In its simplest form, a RapidIO end-point can fit inside a field programmable gate array (FPGA), trade association officials say. RapidIO end-points are small enough to leave most of the FPGA available for other functionality, they explain. RapidIO technology is also flexible enough to enable several different system topologies, address maps, and transactions to suit a variety of applications, trade association officials say.
RapidIO Trade Association experts also designed an additional low-power transmission mode not found in the other standards, which also supports the reliability features of the parallel specification, such as hardware error detection and recovery, says Tom Cox, chair of the RapidIO Marketing Working Group.
The main advantages of switched fabrics are high performance, high reliability, scalability, and wide industry adoption, Jaenicke says. A switched fabric like RapidIO offers other advantages for embedded applications, such as low latency, fast error correction, guaranteed delivery, many devices and clock frequencies, and software transparency, Jaenicke explains.
In short, systems designers can embed RapidIO easily because it is compatible with PCI drivers and processor protocols, has a low pin count, is inexpensive, and is compatible with FPGAs, he adds.
RapidIO's parallel implementation runs at 0.5 to 4 gigabits per second in 8-bit or 16-bit configurations, Jaenicke says. The architecture has multi-level error management and is typically implemented entirely in hardware, he adds.
RapidIO represents the next generation of switched fabric for Mercury RACE systems, Mercury officials say. RapidIO has the promise of broad adoption beyond Mercury's traditional markets into the general embedded computing markets, company officials say. Mercury designers will use RapidIO to coordinate data exchange and synchronization within a digital signal processor-oriented, parallel computing subsystem, Mercury officials say.
Race++ currently runs at about 267 megabytes per second, while RapidIO eventually will go about 20 times faster than that, Jaenicke says.
RapidIO's embedded advantage over Intel's Third Generation I/O (3GIO) lies in how RapidIO relates to the components in a system, Jaenicke explains. While 3GIO's architecture is not peer-oriented (peripheral devices are not on an equal footing with the host CPU), the centralized RapidIO switch treats every component equally, he says.
Serial RapidIO operates at 1.25, 2.5, and 3.125 gigabits per second, which provides bandwidth for signal processors and backplane applications, trade association officials say. The specification defines one differential link in each direction between devices with the capability to "gang" four links together to increase throughput, they say. System developers have the ability to connect parallel and serial RapidIO devices through switches without special bridging functions, trade association officials explain.
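These serial rates line up with the ten-gigabit claim made earlier: like InfiniBand, serial RapidIO uses an 8-bit/10-bit line code, so a four-lane gang at the top rate delivers 10 gigabits per second of payload:

```python
encoding_efficiency = 8 / 10             # 8-bit/10-bit line code
for lane_rate_gbps in (1.25, 2.5, 3.125):
    ganged_payload = lane_rate_gbps * 4 * encoding_efficiency
    print(f"4 x {lane_rate_gbps} Gbit/s lanes -> "
          f"{ganged_payload} Gbit/s payload")
# The 3.125-Gbit/s rate ganged four wide gives 10.0 Gbit/s of payload.
```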
If Motorola gets behind RapidIO on PowerPC systems, then RapidIO will prevail and generate a lot of military interest, PICMG's Pavlat says.
RapidIO will probably end up being the switched fabric of choice for embedded applications, VITA's Alderman says.
For more information on parallel and serial RapidIO contact the RapidIO Trade Association on the World Wide Web at http://www.rapidio.com.
Gigabit Ethernet and 10 Gigabit Ethernet
Gigabit Ethernet and 10 Gigabit Ethernet are likely to succeed in the near term because Ethernet technology is already in about 70 percent of the world's computers, PICMG's Pavlat says. There is already a lot of software out there for Ethernet and for the next 10 years it will probably be the most common switched fabric, he adds.
"Customers are very excited about 10 Gigabit Ethernet technology and products," says Bruce Tolley, vice president for the 10 Gigabit Ethernet Alliance and manager of emerging technologies for Cisco Systems. "Many Ethernet companies have shipped or are about to ship 10 Gigabit Ethernet modules for switches, routers, and other devices to enterprise, service provider, and carrier customers. These customers are deploying 10 Gigabit solutions in local, metro, and wide area networks."
Since its inception at Xerox Corp. in the early 1970s, Ethernet has been the dominant networking protocol, Cisco Systems officials say. Of all current networking protocols, Ethernet has, by far, the highest number of installed ports and provides the greatest cost performance relative to Token Ring, Fiber Distributed Data Interface (FDDI), and ATM for desktop connectivity, company officials say. Fast Ethernet, which increased Ethernet speed from 10 to 100 megabits per second, provided a simple, cost-effective option for backbone and server connectivity, Cisco Systems officials say.
Gigabit Ethernet builds on top of the Ethernet protocol, but increases speed tenfold over Fast Ethernet to 1,000 megabits per second, or 1 gigabit per second, Cisco Systems officials say. This protocol, which was standardized in June 1998, is a dominant player in high-speed local area network backbones and server connectivity. Since Gigabit Ethernet significantly leverages on Ethernet, customers are able to leverage their existing knowledge base to manage and maintain gigabit networks, company officials say.
Ethernet, Fast Ethernet, and Gigabit Ethernet are clearly the technologies of choice for high-performance local-area networks (LANs), say officials at the 10 Gigabit Ethernet Alliance (10GEA) in Newport Beach, Calif. 10 Gigabit Ethernet is simply the next logical development in this Ethernet bandwidth hierarchy, they say. An evolutionary step forward, 10 Gigabit Ethernet will preserve many of the same characteristics of previous versions of Ethernet, organization officials say.
The 10 Gigabit Ethernet standard should be ratified later this year, 10GEA officials say. "Essentially, all the technical work related to the formation of the standard is complete," says Brad Booth, editor-in-chief of the IEEE P802.3ae task force and strategic marketing manager for Intel's LAN Access Division. "We remain on schedule for completion of the standard in the first half of next year."
Positioned as a high-speed, unifying technology for networking applications in LANs, metropolitan area networks (MANs), and wide area networks (WANs), 10 Gigabit Ethernet will provide simple, high bandwidth at relatively low cost, Cisco Systems officials say. In LAN applications, 10 Gigabit Ethernet will enable organizations to scale their packet-based networks from 10 megabits per second to 10,000 megabits per second. Ten Gigabit Ethernet MAN and WAN applications will enable designers to create extremely high-speed longer distance Ethernet links at competitive cost, company officials say.
Two differences separate 10 Gigabit Ethernet and other speeds of Ethernet, 10GEA officials say. First is a long-haul (40 or more kilometers) optical transceiver or physical-medium-dependent (PMD) interface for single-mode fiber that designers can use either with the LAN physical layer (PHY) or WAN PHY for building MANs. The second is the WAN PHY option, which enables 10 Gigabit Ethernet transport transparently across existing SONET (synchronous optical network) OC-192c or SDH (synchronous digital hierarchy) VC-4-64c infrastructures, 10GEA officials say.
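The reason a distinct WAN PHY is needed is that SONET OC-192c does not run at a round 10 gigabits per second; SONET rates are multiples of the 51.84-megabit-per-second OC-1 base rate:

```python
oc1_mbps = 51.84                # SONET OC-1 / STS-1 base rate
oc192_gbps = 192 * oc1_mbps / 1000
print(oc192_gbps)               # about 9.95328 Gbit/s line rate
# The WAN PHY paces 10 Gigabit Ethernet to fit this OC-192c line
# rate so that it can ride existing SONET infrastructure.
```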
For more information about IEEE P802.3ae visit the IEEE web site at http://grouper.ieee.org/groups/802/3/ae/index.html. For more information on the 10GEA contact the organization on the World Wide Web at http://www.10gea.org.
Officials at Intel are planning to drive the I/O revolution in the desktop market with 3GIO.
At the Intel Developers Forum earlier this year Intel officials announced their plan to replace PCI altogether with their Third Generation IO architecture, with no migration path from current PCI standards, VITA's Alderman says.
All those companies that are taking an evolutionary approach to PCI will be in trouble when their competitors start making products with 3GIO, Alderman says. "Like Bob Dylan said, 'There will be blood on the tracks,'" he adds.
Intel is targeting 3GIO at the volatile desktop level; because these applications change every 16 to 18 months, they are perfect for a revolutionary change such as 3GIO, Alderman says. The server market moves more slowly, however, and is less likely to adopt 3GIO, he continues. Also, there is volume in the desktop market, but not in servers, Alderman says.
3GIO is a serial I/O interconnect that decreases interface pin count, resulting in cost effectiveness, maximum bandwidth per pin, and high scalability, say officials at the PCI Special Interest Group (PCI-SIG) in Portland, Ore. It leverages the PCI programming model to preserve customer investments and to facilitate industry migration. Experts at the Arapahoe Work Group, an independent industry work group composed of Compaq, Dell, IBM, Intel, and Microsoft, are developing the draft specification. Arapahoe officials will make the specification available for industry consideration and potential adoption through the PCI-SIG, which will maintain the 3GIO specification.
PCI-SIG officials say 3GIO test equipment and tools will appear in late 2003. Initial applications for 3GIO will be desktop PCs in 2004, with portable devices and low-end servers and workstations in late 2004, PCI-SIG officials say. 3GIO will not emerge in high-end servers and workstations until late 2005, organization officials say. It will be backward compatible with PCI software, but not with PCI form factors, PCI-SIG officials say. Therefore 3GIO is an unreasonable near-term solution for high-end applications — such as servers and workstations — that have longer design cycles, organization officials say.
For more information on 3GIO contact the PCI-SIG on the World Wide Web at http://www.pcisig.org.
Intel's main competitor, Advanced Micro Devices (AMD) of Sunnyvale, Calif., currently offers a switched interconnect for the desktop called HyperTransport.
Products are already shipping with HyperTransport, VITA's Alderman says. For example, the Xbox game system from Microsoft in Redmond, Wash., uses HyperTransport technology, he adds.
HyperTransport, formerly code-named Lightning Data Transport (LDT), moves information as fast as 6.4 gigabytes per second and enables chips inside PCs to communicate with each other as much as 24 times faster than with existing technologies.
AMD officials say they also plan to use HyperTransport in servers, workstations, and PCs powered by AMD's next-generation family of processors. HyperTransport provides a universal connection to reduce the number of buses within the system, provides a high-performance link for embedded applications, and enables scalable multiprocessing systems, AMD officials claim.
Compared with existing system interconnects that are as fast as 266 megabytes per second, HyperTransport's bandwidth of 6.4 gigabytes per second represents better than a 20-fold increase in data throughput, AMD officials claim. The interconnect complements externally visible bus standards such as PCI, as well as emerging technologies such as InfiniBand, AMD officials say.
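AMD's 20-fold claim checks out when both figures are read as byte rates; the 266-megabyte-per-second baseline corresponds, for example, to a 64-bit, 33-MHz PCI bus (that example bus is an assumption here, not named by AMD):

```python
hypertransport_mbps = 6.4 * 1000    # HyperTransport aggregate, MB/s
legacy_mbps = 266                   # e.g. 64-bit/33-MHz PCI (assumed)
speedup = hypertransport_mbps / legacy_mbps
print(round(speedup, 1))            # roughly 24x: "better than 20-fold"
```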
HyperTransport is intended to provide the bandwidth for the new InfiniBand standard to communicate with memory and system components inside new servers, AMD officials say. The technology targets primarily information and telecommunications systems.
HyperTransport also has a daisy-chainable feature that connects several different HyperTransport input/output bridges to one channel, AMD officials say. The interconnect supports as many as 32 devices per channel and can mix and match components with different bus widths and speeds, company officials say.
HyperTransport can interface with today's AGP, PCI, 1394, USB 2.0, and Gigabit Ethernet buses, as well as future buses including AGP 8x, InfiniBand, PCI-X, PCI 3.0, and 10 Gigabit Ethernet.
For more information contact the HyperTransport Technology Consortium in Sunnyvale, Calif., on the World Wide Web at http://www.hypertransport.org.
Motorola Computer Group proposes standard for a switched interconnect on VME
Officials at the Motorola Computer Group in Tempe, Ariz., are proposing a standard for switched serial interconnects on the VMEbus as part of Motorola's VME Renaissance strategy to extend the life and improve the performance of VME.
The proposed standard, called VXS for VME Switched Serial, would use the P-0 connector as an interface to the switched fabrics, says Jeff Harris, director of research and software architecture at Motorola Computer Group.
The emergence of switched fabrics does not spell the end of VME, or even PCI, Harris says. VXS will enable military designers to retain the reliability of VME while still gaining access to the fast I/O speeds of switched interconnects, Harris explains.
Harris says the major elements in the proposal will:
- add a switched serial interconnect to VMEbus coincident with the VMEbus parallel bus;
- employ standard open technology for the switched serial links;
- accommodate multiple standard open technologies for the links, but not necessarily at the same time;
- maintain backward compatibility with the VMEbus ecosystem; and
- bring more DC power onto each VMEbus card.
Previously, the P-0 connector carried PMC pinouts to the backplane and a few extensions of the PCI local bus to a few cards in the first slots of the chassis, says Ray Alderman, executive director of the VME International Trade Association (VITA) in Scottsdale, Ariz. The P-2 connector has already been used for databuses such as RaceWay from Mercury Computer Systems in Chelmsford, Mass., SkyChannel from Sky Computers, also in Chelmsford, and Myrinet from Myricom in Arcadia, Calif., Alderman says.
"The leading serial interfaces for VXS include InfiniBand initially and possibly switched Ethernet connections in the future," Alderman says. Motorola should have a VXS product out in 2004.
The VME Renaissance will begin with Motorola's launch of a PCI-X to 2eSST VMEbus bridge called "Tempe," which will implement the 2eSST protocol, established as an industry standard by VITA. The protocol enables the VMEbus to run at 320 megabytes per second.
"Our complete commitment to moving the VMEbus technology forward is based on helping our current and future customers compete in their respective markets," Harris says. The VME Renaissance will most likely find a home in military applications such as radar, sonar, and signals intelligence, he says.
The Tempe chip, which supports existing VMEbus protocols, is backward compatible with existing VMEbus cards, enabling existing cards and new Tempe-enabled cards to work together in the same system. It also enables the cards to talk at regular VMEbus speeds and at improved 2eSST speeds.
The Tempe chip has a PCI-X host-side bus interface running at up to 133 MHz, which provides transfer rates as fast as 1 gigabyte per second. This is a 2X improvement over a 64-bit/66 MHz PCI interface, company officials say.
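The 2X figure follows from multiplying bus width by clock rate; a quick sketch, assuming the standard peak-rate formula (width in bytes times frequency) and the bus parameters named above:

```python
def peak_bandwidth_mb(width_bits, freq_mhz):
    """Peak bus bandwidth in megabytes per second: (width / 8) * frequency."""
    return (width_bits / 8) * freq_mhz

pci_64_66 = peak_bandwidth_mb(64, 66)   # conventional 64-bit/66 MHz PCI
pcix_133 = peak_bandwidth_mb(64, 133)   # 64-bit PCI-X at 133 MHz, as on Tempe

print(pci_64_66)  # 528.0 MB/s
print(pcix_133)   # 1064.0 MB/s -- roughly 1 gigabyte per second, about 2X PCI
```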
Officials at the PCI Industrial Computers Manufacturers Group (PICMG) in Wakefield, Mass., also recently announced a new series of specifications, PICMG 3.0, to deal with switched fabric architectures for CompactPCI and telecommunications applications, says Joe Pavlat, president of PICMG. The 3.0 Committee determined that the next specifications in the family would likely be PICMG 3.1, for Ethernet fabrics, and PICMG 3.2, for InfiniBand fabrics, PICMG officials say.
For more information on VXS contact Motorola Computer Group and VITA on the World Wide Web at http://www.motorola.com/mcg and http://www.vita.com. For more information on PICMG 3.0 contact PICMG on the World Wide Web at http://www.picmg.com.
Military board vendors use StarFabric with new products
Engineers at Dy4 Systems in Kanata, Ontario, and Radstone Technology in Towcester, England, are joining hands with experts at StarGen in Marlborough, Mass., to produce single-board computer products based on StarGen's switched fabric technology, StarFabric.
The first Dy4 product — a switch-fabric interconnect PCI mezzanine card (PMC) known as StarLink — targets multi-computer architectures for radar, sonar, and image processing, says Duncan Young, director of marketing at Dy4. This kind of signal processing can take advantage of StarFabric's increased I/O speeds, he adds.
The Dy4 PMC, which eliminates the need for an additional switching card by using a six-port fabric switch, provides fault-detection, recovery, and redundancy, Dy4 officials say.
Radstone is launching a range of rugged products based on StarGen's StarFabric technology for its quad PowerPC digital signal processor (DSP) family, the G4DSP. The company's first StarFabric product, the PMC-StarLite, is a PMC module aimed at extending the G4DSP's architecture across multiple board sets. The module will have four externally available StarFabric ports.
"StarFabric switching technology provides a wealth of excellent features such as multicasting, quality-of-service functionality, and high-availability features, making it an ideal choice for inter-board interconnectivity," says Stuart Heptonstall, Radstone's DSP and analog I/O product manager. "StarFabric benefits from being incorporated in the PICMG 2.17 backplane standard and is a zero-protocol technology. It maps straight into PCI address space, and inter-board transfers are implemented in the same way as inter-node transfers on the G4DSP, making the software task across multiple G4DSPs very simple and portable."
Other board vendors are also looking closely at StarGen's technology.
"StarGen appears to meet most all of the needs of customers at this time," says Ron Marcus, director of marketing at Synergy Microsystems in San Diego. "Unlike nearly all of the other proposed fabric architectures, PCI-StarFabric bridges and crossbars are available now. New standards like Rapid I/O and InfiniBand have yet to be finalized, much less translated into silicon and software support."
StarGen leaders first targeted telecommunications applications, rather than military applications, says Tim Miller, vice president of marketing at StarGen. Yet the military is turning out to be a viable market because StarGen works with Dy4 and Radstone — two noted military suppliers. In addition, StarFabric is backward compatible with PCI software, so military designers do not have to rewrite their code to take advantage of the switched architecture, he explains.