Next-generation high-speed databuses at the crossroads
By J.R. Wilson
The FibreChannel data bus interconnect is the high-speed network of choice on military platforms such as the upgraded U.S. Navy F/A-18 Hornet jet fighter-bomber.
Most experts acknowledge that 1553's days as a primary databus are coming to an end, with consensus for near-term improvements settling on FibreChannel. Waiting in the wings, however, are InfiniBand and RapidIO, each vying to become the network of choice for future military and aerospace integrated systems.
The crocodile, one of the last survivors of the age of dinosaurs, has avoided extinction simply because it is tough and good at what it does. Many in the defense electronics business would say the same is true of the MIL-STD-1553 databus.
Yet the crocodile and the 1553 share another characteristic. Like the crocodile, 1553 is rare in the world except in select niches where it fits the environment just right. Where the crocodile and 1553 do not fit exactly, the world passes them by.
After serving for more than two decades as the backbone bus in avionics systems, 1553 is facing a growing challenge from the high-bandwidth demands of video, audio, and data distribution. Most of the new technologies toward which designers are being pushed will live in concert with legacy 1553 applications. Among the candidate technologies competing with 1553 are InfiniBand (previously called System I/O, NGIO, and Future I/O), FibreChannel, RapidIO, the Fiber Distributed Data Interface (FDDI), and the Front Panel Data Port (FPDP).
The legacy of 1553
As competing new technologies evolve, 1553 continues to cling to its own niche. Its longevity as the military databus of choice is attributable to a number of still positive characteristics, says Wilf Sullivan, product marketing manager at DY 4 Systems, a manufacturer of single-board computers and bus interfaces in Kanata, Ontario. These characteristics include:
- linear local area network architecture;
- capacity for redundancy;
- support for dumb and smart nodes;
- high level of electrical confidence;
- excellent component availability; and
- guaranteed real-time determinism.
"Despite these positive features, future adoption of 1553 in more demanding military systems is limited because the serial transmission rate of the bus is only 1 megabit per second," Sullivan says. "While this data transmission rate remains suitable for more rudimentary functions such as control of landing gear and munitions, it is too slow to serve the increased peer-to-peer communications needed by avionics and vetronics (integrated vehicle electronics) applications in support of data, audio, and video information exchange."
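The bandwidth gap Sullivan describes is easy to quantify with back-of-the-envelope arithmetic. The sketch below assumes an illustrative uncompressed video frame size (not a figure from the article); the 20-bit word format (16 data bits plus sync and parity) is standard MIL-STD-1553.

```python
# Back-of-the-envelope: why 1 Mbit/s is too slow for video.
# A MIL-STD-1553 word carries 16 data bits in a 20-bit word
# (sync pattern plus parity), so usable payload is at most
# 16/20 of the 1 Mbit/s raw rate, before command/status overhead.
RAW_RATE_BPS = 1_000_000
PAYLOAD_FRACTION = 16 / 20

payload_bps = RAW_RATE_BPS * PAYLOAD_FRACTION  # 800,000 bits/s ceiling

# Hypothetical uncompressed video frame: 640 x 480 pixels, 16 bits each.
frame_bits = 640 * 480 * 16

seconds_per_frame = frame_bits / payload_bps
print(f"One frame needs {frame_bits:,} bits -> {seconds_per_frame:.1f} s per frame")
```

Even before protocol overhead, a single modest video frame ties up the bus for several seconds, which is why designers look past 1553 for data, audio, and video distribution.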
Nevertheless 1553 still has a lot to offer for military and aerospace systems designers. "1553 has some advantages that aren't fully accounted for in more modern designs like FibreChannel," notes Richard Jaenicke, director of product marketing for Mercury Computer Systems (Chelmsford, MA). "It has much less latency, for example, even though it has lower throughput. Communications protocols designed to go between systems usually are higher latency, which is why 1553 is unique."
Even among its naysayers, 1553 can draw at least lukewarm praise. "There's nothing inherently wrong with 1553; from a technology point of view, it may be limited in bandwidth," says Jack Staub, president of Delphi Engineering, a FibreChannel provider in Costa Mesa, Calif.
Staub points to the 1553's relatively low bandwidth and its relative lack of commercial support as only two of its disadvantages. "It has absolutely no leverage off the commercial market. It is an interface really only used by the military. But as time goes on, it will become, in a relative sense, lower and lower in performance. And as we go into the video age, even military systems are going to integrate a lot of high-bandwidth signals for which it simply is not suited."
Even where 1553 is well suited, Staub says it eventually will cease to play a central role. "I doubt 1553 will be used in more than limited capacities in future systems, perhaps to support legacy subsystems. Nor do I think a higher-performance variant of 1553 will be viable," Staub says. "Instead, you'll wind up with some new technology, such as FibreChannel or InfiniBand. But 1553 won't go away tomorrow — it's too deeply ingrained."
Yet will any of the new technologies show the staying power of the venerable 1553? Given the built-in upgradability of fiber optic networks such as FibreChannel and FDDI, it seems unlikely. But perhaps equally unlikely is that any of those now or soon to be available will be the one to toll the death knell for 1553. Even in new aircraft — as well as spacecraft, ships, and land systems — military designers continue to leverage the proven reliability and hardiness of 1553 where appropriate.
Some technologies are becoming available that have the potential to speed 1553 throughput and make it even more appealing to systems designers, particularly those who are looking to upgrade electronic architectures. One such technology "that may be applicable to 1553 legacy networks is adaptive line equalization," Staub says.
"Basically, when you want to move high data rate over copper wire, you have a problem because the wire doesn't have properties that lend itself to that kind of transmission. It distorts the signal," Staub says. "But if you put in digital front ends, you can equalize that line within the system and correct for distortion automatically. If you had that technology running on legacy wiring on a plane or ship, you might be able to equalize it to adapt newer technologies, such as gigabit FibreChannel. But there isn't a huge commercial market there and it needs a market to really happen because it is very expensive."
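The adaptive line equalization Staub describes can be sketched in a few lines: a finite-impulse-response filter whose taps are adjusted by the least-mean-squares (LMS) rule until the received waveform matches the transmitted symbols. Everything in this toy model (the channel coefficients, signal, noise level, and step size) is invented for illustration; it shows only the principle of correcting distortion automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted symbols: random +/-1 bits (a stand-in for line coding).
tx = rng.choice([-1.0, 1.0], size=5000)

# Toy copper channel: inter-symbol interference plus a little noise.
channel = np.array([1.0, 0.8, 0.4])
rx = np.convolve(tx, channel)[: len(tx)] + 0.01 * rng.standard_normal(len(tx))

# Without equalization, the ISI is strong enough to flip bits.
raw_ber = np.mean(np.sign(rx[1000:]) != tx[1000:])

# LMS adaptive equalizer: adjust FIR taps so output tracks tx.
n_taps, mu = 9, 0.01
w = np.zeros(n_taps)
for i in range(n_taps, len(tx)):
    window = rx[i - n_taps + 1 : i + 1][::-1]  # most recent sample first
    err = tx[i] - w @ window                   # error vs. known training symbol
    w += mu * err * window                     # LMS tap update

# After training, hard decisions on equalized samples recover the bits.
out = np.convolve(rx, w)[: len(tx)]
eq_ber = np.mean(np.sign(out[1000:]) != tx[1000:])
print(f"bit errors: {raw_ber:.1%} raw -> {eq_ber:.1%} equalized")
```

Real adaptive front ends for legacy aircraft or shipboard wiring operate at far higher rates in dedicated silicon, which is the expense Staub alludes to, but the underlying math is this simple.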
The new frontier of FibreChannel
Meanwhile, FibreChannel is in place and expanding its applications base.
"FibreChannel is very well entrenched in the storage area and clustering in commercial markets, which has a potential for billions of dollars of sales," Sullivan notes. Sullivan's company, DY 4 Systems, is among those pushing that technology forward with Channel 1, a high-performance system-area network architecture for harsh-environment avionics. The goal of that program is to use one type of network to connect all computing elements rather than use several incompatible interconnects.
"This allows systems integrators to replace a mixture of incompatible connections," Sullivan says. "There are benefits to be derived in cabling and in development; it's cheaper if all your software engineers are using one interconnect."
This diagram represents the RapidIO switched fabric interconnect, which its backers say will provide multiple, simultaneous connections between chips on a board and between boards inside an embedded system.
Channel 1 is to bring together general-purpose processors and allow them to communicate with mission computers, displays (graphical interfaces), storage devices, sensors, and I/O. Sullivan says DY 4 engineers chose FibreChannel because it is a standard and is being adopted for use in avionics platforms, is high speed (gigabit per second with a roadmap to higher speeds), and has low data latency, which DY 4 hopes to leverage for real-time distributed computing applications.
In conjunction with Channel 1, DY 4 experts also have been promoting the use of a lightweight, high-performance programming interface known as virtual interface (VI) to communicate between different computing elements. While it is being adopted to run over FibreChannel, VI also is being embraced by the InfiniBand community.
"The intent is to remove yourself from any particular underlying hardware by building around standard programming interfaces, such as VI, giving you the ability to migrate from FibreChannel to InfiniBand, for example," Sullivan explains. "At the lower end, InfiniBand as a physical interface will look very much like FibreChannel. It is being built on the same physical cabling interface and 8-bit encoding."
As several different boxes try to communicate simultaneously, the network itself can become a bottleneck. The purpose of high-speed interconnects and high-performance programming interfaces is to overcome that congestion and enable data to move quickly and efficiently.
"Channel 1 is intended to provide the mil/aero community with a high-speed network interface built on standards, with FibreChannel as the media and VI as the programming interface, integrated with commercial real-time operating systems," Sullivan says.
Channel 1 will also bring cost benefits in the long term, Sullivan says. "Channel 1, because it is based on commercially available standards, will drive down the cost of FibreChannel interconnects, making it considerably less expensive than the 1553 interconnect is today," he says. "You want to adopt a technology that is standard, but that doesn't mean, by itself, it will be successful unless it has a commercial following. FibreChannel has that."
In addition, FibreChannel developers and proponents are trying to adapt the technology for avionics upgrades where 1553 is involved. "There are different strategies for tying legacy systems into FibreChannel," Sullivan says. "The Fibre Channel Avionics Environments (FCAE) working group is considering a proposal to have a 1553 protocol that runs over FibreChannel, so you can take existing 1553 applications and apply them to FibreChannel. Where new applications and mission requirements drive demand for higher speed, you'll start to see new system upgrades inserting FibreChannel into platforms alongside 1553. But 1553 is ubiquitous and I expect the two technologies will live together for some time."
FibreChannel is being designed into a lot of existing platforms, such as the U.S. Navy Boeing F/A-18 jet fighter-bomber, that already have 1553. It would be cost-prohibitive to tear out all the old wiring in legacy platforms to replace it with FibreChannel, but the latter is considered a strong alternative in designing new platforms that are not tied to 1553, such as the Joint Strike Fighter (JSF).
InfiniBand and RapidIO
"InfiniBand is the future interconnect for multiple systems. And the next technology for connecting chip-to-chip and board-to-board inside a system is RapidIO," says Mercury's Jaenicke. "It is very high performance, starting at a minimum of a gigabyte per second communications. The RapidIO interface fits into a corner of an FPGA [field programmable gate array], so it could be easily integrated into any new designs for embedded systems communications."
Members of the consortium of workstation and PC manufacturers behind InfiniBand intend for it to replace FibreChannel in the next level upgrade.
"They plan to use it mostly to connect servers into clusters and to connect storage in enterprise-size systems," Jaenicke says. "Typically, you don't find enterprise systems in a military aircraft and it's not really designed for embedded applications because it takes a lot of power and chips to implement an interface. But down the road, if you have fully different systems on a plane, eventually the technology for connecting multiple servers together could make its way to connecting multiple systems on a vehicle. That might be more likely on a Navy ship than on an airplane, where you could conceivably have several large systems, each with different functionality, that you want to talk together over high-speed links."
RapidIO and InfiniBand are in the developmental stage, with no products currently available. The final specifications for the two consortium-based standards are expected before the end of this year, which could lead to some early chip-level applications next year.
The 186-member InfiniBand consortium includes DY 4 Systems, Compaq, IBM, Intel, Microsoft, 3Com, Fujitsu Siemens, and NEC. RapidIO, meanwhile, is the product of a consortium effort that includes Mercury Computer Systems, Nortel Networks, Alcatel, Cisco Systems, and Lucent Technologies.
Once the standards are released and product development begins, there may be more competition than initially expected between InfiniBand and RapidIO, with the former challenging the latter inside the box, says Greg Buzard, marketing engineer at FuturePlus Systems in Colorado Springs, Colo.
"One camp says InfiniBand is actually a solution for both internal and external worlds, which probably will be true, but it will take some time for that to happen," Buzard says. "Dealing with very high-speed buses is a huge challenge to digital designers, who must consider such aspects as microwave effects and low voltage differential signaling, which both RapidIO and InfiniBand use. There is a tremendous interest in low-voltage differential signaling and a lot of the big companies are putting too much energy into it for it to quietly fade away."
One likely scenario is for InfiniBand to catch on slowly. "For the next few years, then, InfiniBand will be mostly box-to-box, which will be good for high end servers and similar applications," Buzard says. "Down the road, however, InfiniBand inside the box could be a very positive thing because of the switched fabric. With standard bus technology, you had to wait for the bus and if somebody took it over, everything else stalled. But with the switched fabric, you can just follow another net rather than waiting for the bus."
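The bus-versus-fabric contrast Buzard draws can be made concrete with a toy scheduler: on a shared bus every transfer is serialized, while a crossbar switch lets transfers with disjoint endpoints proceed in parallel. The device names and transfer sizes below are invented for illustration.

```python
# Toy comparison of a shared bus vs. a switched fabric.
# Each transfer is (source, destination, duration in time units).
transfers = [("cpu0", "mem0", 4), ("cpu1", "mem1", 4),
             ("cpu2", "mem2", 4), ("cpu3", "mem0", 4)]

# Shared bus: one transfer at a time, so durations simply add up.
bus_time = sum(t for _, _, t in transfers)

# Crossbar: greedily schedule rounds of transfers whose endpoints
# don't collide; each round takes as long as its slowest member.
pending, fabric_time = list(transfers), 0
while pending:
    busy, this_round, rest = set(), [], []
    for src, dst, t in pending:
        if src in busy or dst in busy:
            rest.append((src, dst, t))   # port conflict: wait for next round
        else:
            busy.update((src, dst))
            this_round.append(t)
    fabric_time += max(this_round)
    pending = rest

print(f"shared bus: {bus_time} units, switched fabric: {fabric_time} units")
```

Only the transfer contending for the same memory port has to wait; the rest "follow another path," which is exactly the stall avoidance Buzard credits to switched fabrics.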
On the other hand, Jaenicke says RapidIO is broadly applicable to any embedded system and, with its low latency design, is especially useful when multiple processors are trying to talk to each other.
"If you are connecting lots of items, RapidIO is a switch connector, not a bus; you provide multiple paths for scalable bandwidth in the system," Jaenicke says. "In very simple circumstances, you might not use the switch, you might have two RapidIO interfaces on a chip, using one for incoming and the other for outgoing data. You could then form a daisy chain of such chips without a switch. But if you need to connect more than a small handful of devices, you'll quickly need to go to a switch. There will probably be very small, inexpensive switches for those small embedded applications and larger, high-performance switches for high-end applications, similar to the range of switches you find in the Ethernet world."
Many military systems have data-throughput demands that match RapidIO's characteristics, Jaenicke says. "Eventually, we expect RapidIO to become so ubiquitous, so easy to get interface chips and already built into the processor, you would still use it even for low bandwidth requirements because it's already there."
RapidIO and InfiniBand are about a year away from shipping as actual products. "Next year we expect low-level silicon products, but it might be another year before you see higher level products based on RapidIO," Jaenicke says. "We expect it to be easy for FPGA vendors to have IP cores tailored toward a RapidIO interface during the first half of next year. If someone wants to build an off-the-shelf chip with RapidIO on one side and some other protocol on the other, it may take a little longer. There's always a little time lag at each level for any new technology."
The future of FDDI
Like 1553, another databus competitor has only lackluster commercial support, which ultimately will work to its disadvantage. That competitor is FDDI, one of the oldest of the "new" technologies and the most technologically mature.
"One of the key advantages of FDDI as a replacement for MIL-STD 1553 is its built-in dual redundancy feature, which automatically [transparent to the application layer] bypasses a downed station," says DY 4's Sullivan. "Unfortunately, it rates poorly with respect to deterministic data communication, as FDDI is a token ring in which communicating nodes gain access to the network when an active node releases a shared token."
Sullivan asserts that systems designers have a hard time simulating the time division multiplex command response protocol of 1553 with a token ring without placing artificial limitations on the media and reducing overall bandwidth. The topologies of FDDI and 1553 are fundamentally different, as FDDI is a token ring, and 1553 is a linear bus.
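Sullivan's determinism point comes down to worst-case media access, which some rough arithmetic can illustrate. The timing numbers below are illustrative assumptions, not FDDI or 1553 specifications.

```python
# Rough worst-case media-access arithmetic for a token ring vs. a
# command/response bus. All timing values are illustrative assumptions.
n_nodes = 16
token_pass_us = 5    # assumed time to forward the token one hop
max_hold_us = 100    # assumed longest transmission per token visit

# Token ring: in the worst case, every other node transmits for its
# full holding time before the token comes back around.
ring_worst_wait_us = (n_nodes - 1) * (max_hold_us + token_pass_us)

# 1553-style command/response: the bus controller polls terminals on a
# fixed schedule, so a terminal's worst-case wait is bounded by its
# slot period in the (assumed) minor frame.
minor_frame_us = 1000
print(f"ring worst case: {ring_worst_wait_us} us; "
      f"1553 schedule bound: {minor_frame_us} us")
```

The ring's worst case grows with node count and holding time, while the bus controller's fixed schedule caps the wait by construction; shrinking the ring's bound means capping holding times, which is the "artificial limitation on the media" Sullivan describes.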
Commercial momentum — or more precisely the lack of it — also is working to FDDI's detriment. "FDDI has failed to capture a significant share of the commercial networking market since silicon was first introduced over nine years ago," Sullivan continues. "With the rapid growth in competing communication standards, it is extremely unlikely that it ever will."
Large computer networks were about the only market niche for FDDI, and these applications today are turning to faster solutions, Sullivan says. FDDI's "primary success, though limited, has been in the large computer backbone networking market previously served by proprietary solutions," he says. "Its maximum data rate of 100 megabits per second, while at the upper end when FDDI was first introduced, is now a limitation, with 1-gigabit-per-second technologies becoming commonplace. Additionally, the high cost of each node adapter has been a roadblock to mass deployment. The growth in this market will likely migrate to fast Ethernet and FibreChannel."
Trends in FPDP
The changing nature of the tasks that systems integrators need to perform, the systems they use, and their design philosophies also are pushing changes in how they handle communications and data flow aboard new-generation platforms.
One traditional approach to moving data inside rack-mount systems has been the Front Panel Data Port (FPDP), a platform-independent 32-bit synchronous data flow path pioneered by Interactive Circuits and Systems Ltd., in Gloucester, Ontario. FPDP moves data at 160 megabytes per second over moderate distances between boards and processing blocks.
The genesis of FPDP came in the late 1980s when Sky Computers Inc. of Chelmsford, Mass., came out with a front panel 32-bit parallel I/O port called SKYburst that used a ribbon cable to connect to other boards. FPDP became a VMEbus International Trade Association (VITA) standard in 1995, and became an American National Standard Institute (ANSI) standard in 1999.
For example, Mercury's Jaenicke says FPDP, which takes data over a 40-conductor ribbon cable from an analog-to-digital converter, worked well when designers wanted to have analog-to-digital (A-D) converters close to the computation.
"Now there is a trend to more up-front processing close to the sensor," Jaenicke says. "That means the A-D converter and some small special-purpose processor would be located up by the sensor. That needs a different type of connection. What's been developed is a fiber optic version of FPDP, called Serial FPDP, that takes that protocol and runs it over a different physical layer — an optical FibreChannel that runs at gigabit speeds with very low overhead because it doesn't use the higher level FibreChannel protocols. That works very well for streaming broad data into a system.
"As long as there is space for a thin optical cable, it would be very easy to retrofit this into a legacy system," Jaenicke continues. "A lot of newer aircraft designs already include fiber optic runs because of the weight savings. Also, the cost of the technology, which once was prohibitive, has come down dramatically."
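The "gigabit speeds with very low overhead" claim can be checked with simple payload arithmetic. The signaling rate below is standard 1-gigabit FibreChannel; applying it to Serial FPDP here is an assumption for illustration, not a figure from the article.

```python
# Illustrative payload ceiling for a gigabit serial link built on
# FibreChannel's physical layer with 8b/10b encoding (rate assumed).
line_rate_baud = 1_062_500_000    # standard 1-gigabit FibreChannel signaling
payload_bits_per_symbol = 8 / 10  # 8b/10b: 10 line bits carry 8 data bits

payload_bytes_per_s = line_rate_baud * payload_bits_per_symbol / 8
print(f"payload ceiling: {payload_bytes_per_s / 1e6:.2f} MB/s")
```

Under these assumptions a single fiber approaches the 160-megabyte-per-second figure of the parallel ribbon-cable port, with the remaining margin depending on how little framing overhead the protocol adds.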
The need for high bandwidth
Delphi's Staub says he believes which of the competing technologies ultimately comes out on top is of less importance than having a true high-bandwidth solution that is affordable and meets all requirements. But accomplishing that will mean high commercial demand, which could create an interesting outcome, he says.
"I think the next generation of FibreChannel will be so close to what InfiniBand and RapidIO are talking about, you may not need either of them," Staub predicts. "There is a big market for FibreChannel being applied as a high-performance network technology and even as a serialized SCSI, which is how it is being used commercially now."
For the military, he adds, about 90 percent of their systems require low latency and high throughput to support multiprocessor environments — just the application for which many companies are now fielding FibreChannel with the Virtual Interface (VI).
"If VI becomes a standard in the much larger commercial market for server clusters, it will drive the technology that is available, along with price, interoperability, and performance," Staub says. "And any new system would have to really think twice before adopting a more proprietary technology. And if it reaches that point, why RapidIO or InfiniBand? FibreChannel would be the natural winner."
Staub says he predicts the next generation of FibreChannel databuses "will see gigabyte — not gigabit — throughput and less than 10 microsecond latency. That's a lot of performance. And if you have an application that needs more than that, it will be a very small niche where some proprietary solution would apply."
Commercial market power
The key to success for any technology — 1553 being a notable and nearly unique exception — is commercial viability. Proponents of InfiniBand, RapidIO, and FibreChannel all cite the commercial potential of those technologies in predicting their ultimate success.
"Commercial market acceptance will pick the winner — regardless of the relative merits of the technology," Staub says. "And I don't think RapidIO has made the progress InfiniBand has made. But that could change. RapidIO looks pretty good from a military point of view, but I don't see it gaining as much ground commercially as InfiniBand. At this point, I'd put my money on InfiniBand. But whichever becomes successful on the commercial telecommunications market, it will win everywhere and the other will fade away."