Demanding high-speed I/O

Sept. 1, 2009
Today’s complicated military electronics systems require I/O that moves data from sensors to commanders in real time.

In embedded computing circles, I/O solutions primarily revolve around two switched fabrics: Serial RapidIO and Gigabit Ethernet.

By John McHale

Many program managers in the U.S. Department of Defense (DOD) say they are constantly looking for ways to shorten the time between sensor and shooter, which improves warfighter survivability by enabling friendly forces to engage the enemy first.

There is a tremendous amount of data being produced by sensors on the ground, in the air, and on the sea that provides commanders, pilots, and ground soldiers with situational awareness. The sheer volume of information being transmitted requires high-bandwidth systems to move the data at high speeds throughout the network.

As systems in military platforms “become more electronic there will be a greater need for I/O capability,” says Fred Haber, director of sales and marketing at North Atlantic Industries in Bohemia, N.Y. That need ranges from electronic doors in large aircraft to the complicated systems of unmanned aerial vehicles (UAVs), he adds. (I/O is short for input/output.) The increased amount of data from these sensors, especially video, requires greater and greater bandwidth from I/O systems, he continues.

In the past, most I/O in aircraft was centralized; today, the trend is toward distributed I/O, says Paul Feldman, chief engineer of the I/O division at North Atlantic Industries. The electronic heart of most platforms was centralized and all data was routed through that central node, Feldman continues. Now, embedded I/O boards are designed into the wings of aircraft–in other words, distributed around the platform–providing more and more connectivity.

For distributed I/O solutions, systems integrators want high-speed, high-density solutions that are low power and low cost, Feldman says. Data is sent via several standards, such as MIL-STD-1553 and Fibre Channel, and on the embedded side through Gigabit Ethernet, 10 Gigabit Ethernet, Serial RapidIO, and PCI Express, he adds.

One of North Atlantic’s high-density VME I/O boards is the 64C2, Haber says. It is a one-slot, 6U VME multifunction I/O and serial communications card with a motherboard that contains six independent module slots, each of which can be populated with a function-specific module, according to the North Atlantic data sheet. “The 64C2 VME I/O board can be controlled via Ethernet, as well as by the VME databus. This design eliminates the need for several specialized, single-function cards by providing one board solution for a broad assortment of signal interface and serial communication modules.”

The device also has a built-in test capability that runs automatically in the background so that the user does not even know it is there, Haber notes.

In embedded computing applications, high-performance I/O has been enabled by switched fabrics such as Ethernet and Serial RapidIO (sRIO), Feldman says.

VPX and switched fabrics

The need to take advantage of these switched fabrics drove development of the VITA 46 standard, better known as VPX. VPX provides military VME-based systems with support for switched fabrics over a high-speed connector, and it promises near-supercomputer performance in small embedded form factors, such as the 3U size, that never took off with traditional VME.

North Atlantic Industries 64C2 VME I/O Board can be controlled via Ethernet, as well as by the VME databus.

The value of VPX is that it enables the density that is needed and provides the pins necessary on the backplane to work with the different high-speed fabrics, Feldman says.

The main standards being used by VPX designers are Ethernet–Gigabit and 10 Gigabit versions–and sRIO.

The big trend is that many opportunities are opening up for rugged VME devices, as well as for VPX products that take advantage of switched fabrics, “especially in 3U form factors,” says Michael Stern, product manager for AXIS multiprocessing at GE Fanuc Intelligent Platforms in Charlottesville, Va. There is a lot more flexibility in terms of I/O with VPX, he adds. System integrators also like 3U CompactPCI designs, which offer similar I/O performance but can be less expensive, he says. However, Stern notes that VXS and VME-based I/O devices are still being designed into programs such as the Medium Extended Air Defense System (MEADS), which uses GE Fanuc’s DSP220 VXS multicomputer, an enhanced version of the company’s PPCM2 6U VME 2eSST dual PowerPC single-board computer. Engineers from Lockheed Martin’s Radar Systems in Syracuse, N.Y., selected the GE Fanuc board for the program.

Stern says the system can easily be upgraded to a VPX system if necessary.

Not much has changed in the last year regarding these standards, says Mark Littlefield, product marketing manager at Curtiss-Wright Controls Embedded Computing in Leesburg, Va. The difference is that many more systems are getting deployed, and more companies are producing VPX products.

Currently, for communication between systems and between boxes, Gigabit Ethernet and 10 Gigabit Ethernet are the main choices; within the box, between boards, Serial RapidIO has taken hold; and at the chip level, PCI Express is popular.

“I see a lot of activity in the defense market” for RapidIO technology, says Tom Cox, executive director of the RapidIO Trade Association in Ottawa, Ontario. Companies such as Mercury Computer Systems, Curtiss-Wright Controls Embedded Computing, and GE Fanuc Intelligent Platforms are seeing much success in high-end applications, he adds.

Serial RapidIO (sRIO) has solidified a niche in intensive signal-processing applications like radar and sonar, Cox adds.

Mercury Computer Systems in Chelmsford, Mass., has created a hybrid system that uses both, though in truth the two standards solve different problems. “sRIO will not go box to box but be more confined to one chassis,” says Marc Couture, manager of systems application engineering at Mercury Computer Systems. For inter-chassis communication, it will be 10 Gigabit Ethernet, he adds.

High-performance military aircraft demand more I/O capability than ever before.

The hybrid Mercury solution essentially transfers data via 10 Gigabit Ethernet from outside the box to the Serial RapidIO fabric inside, says Tom Roberts, product marketing manager at Mercury Computer Systems. Once the compute nodes in the chassis finish their computations, the data is sent out of the system on the same 10 Gigabit Ethernet connection, he adds.

Front-panel data port (FPDP) is also seeing a resurgence, Couture says. It offers strong I/O performance in a smaller form factor, he adds.

Ethernet will be useful for processing video from networked cameras because of its bandwidth, Stern says. sRIO works quite well in command-and-control multiprocessing and in heterogeneous processing through the use of FPGAs, he adds.

Ethernet & sensor data

The main advantage Ethernet brings to network communications is bandwidth, which is essential to moving sensor data, says Jack Staub, president of Critical I/O in Irvine, Calif. “We’ll see Ethernet take over” the majority of I/O applications because of its bandwidth potential, he adds. Currently 10 Gigabit Ethernet is starting to get design-ins and “pretty soon we will see 100 Gigabit Ethernet,” he says.

While the amount of data Gigabit Ethernet and 10 Gigabit Ethernet can handle is substantial, it is still sometimes too much for the processor to take on, says Rob Kraft, vice president of marketing at AdvancedIO Systems in Vancouver, British Columbia. This could put a lot of pressure and workload on processors that need to be free to handle more important functions.

As a result, many designers, Kraft’s company among them, use offload engines to process the data before it reaches the main processor, Kraft continues. Otherwise, systems would need more processors, which would generate more heat and add unnecessary weight, he adds.
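The arithmetic behind that pressure is straightforward: 10 Gigabit Ethernet at wire rate delivers roughly 1.25 gigabytes per second, which with 1,500-byte frames works out to more than 800,000 packets per second, each one costing the host a system call and a buffer copy if nothing is offloaded. As a rough illustration only (plain C sockets, an arbitrary port, and no vendor's offload software assumed), the minimal receive loop below shows the per-packet work that offload engines move into hardware:

```c
/* Minimal UDP receive loop: every datagram costs the host a system
 * call and a buffer copy. At 10 GbE wire rate (~830,000 packets/s
 * with 1,500-byte frames) this loop alone can saturate a CPU core,
 * which is the work an offload engine moves into hardware. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(15000);          /* arbitrary port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return EXIT_FAILURE;
    }

    unsigned char buf[2048];
    unsigned long long packets = 0, bytes = 0;
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0); /* one syscall per packet */
        if (n < 0) { perror("recv"); break; }
        packets++;
        bytes += (unsigned long long)n;
        if ((packets & 0xFFFFF) == 0)       /* report every ~1M packets */
            printf("%llu packets, %llu bytes\n", packets, bytes);
    }
    close(fd);
    return EXIT_SUCCESS;
}
```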

Critical I/O uses offload engines but has also created a product that bridges wideband sensors to 10 Gigabit Ethernet data networks, Staub says.

The device–Sensor Link–“acts as a bridge connecting wideband sensors to standard 10 Gigabit Ethernet data networks–and it does this without the need of any processors and/or special software,” Staub continues. “This approach offers data throughput and latency characteristics but, more critically for avionics applications, Sensor Link also allows 10 Gigabit Ethernet interfaced sensors to be implemented with reduced power, weight, and complexity as compared to processor-based solutions.”

Unmanned aerial vehicles in particular “can’t afford excess processors because of the power requirements for the systems,” Staub says. Also, by eliminating the need for processor functionality the system becomes less expensive, he adds.

According to the Critical I/O data sheet, the Sensor Link connects “simple wideband I/O devices (A/Ds, digital receivers, and imaging devices) to standard 10 Gigabit Ethernet networks where data can be streamed at wire speed with low latency to processors, storage devices, and/or other Sensor Link modules–creating a Gigabit Ethernet sensor fabric. The bidirectional Sensor Link converts sensor data streams to/from standard UDP Ethernet data, at rates of up to 1,200 megabytes per second.”
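The job Sensor Link does in hardware can be pictured in software terms: take fixed-size blocks of A/D samples, prepend a sequence number so the receiver can detect drops, and emit each block as a UDP datagram. The sketch below illustrates that framing pattern only; the header layout, port, destination address, and 512-sample block size are hypothetical, not Critical I/O's actual wire format.

```c
/* Illustrative sensor-to-UDP framing: blocks of samples are tagged
 * with a sequence number and sent as datagrams. Header layout and
 * block size are hypothetical; Sensor Link does the equivalent job
 * in hardware at up to 1,200 megabytes per second. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SAMPLES_PER_BLOCK 512            /* keeps the datagram under a
                                            standard 1,500-byte MTU   */

struct sensor_packet {                   /* hypothetical wire format  */
    uint32_t seq;                        /* detects dropped datagrams */
    int16_t  samples[SAMPLES_PER_BLOCK]; /* raw A/D samples (sample
                                            byte order left host-order
                                            for brevity)              */
};

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(15001);                   /* arbitrary port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* example address */

    struct sensor_packet pkt;
    for (uint32_t seq = 0; seq < 1000; seq++) {
        pkt.seq = htonl(seq);
        /* A real system would read samples from the A/D converter
         * here; a ramp stands in as placeholder data. */
        for (int i = 0; i < SAMPLES_PER_BLOCK; i++)
            pkt.samples[i] = (int16_t)i;
        if (sendto(fd, &pkt, sizeof pkt, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0) {
            perror("sendto"); break;
        }
    }
    close(fd);
    return 0;
}
```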

AdvancedIO’s latest 10 Gigabit Ethernet product, the V1121, uses a field-programmable gate array (FPGA) framework called expressXG that offloads pre-processing tasks directly in the “10 Gigabit Ethernet fat pipe,” Kraft says. It is a conduction-cooled XMC module with dual front-panel optical interfaces.

Kraft says the optical interfaces eliminate the long-cable-run and electromagnetic interference (EMI) issues that come with copper interconnects.

“Customers have determined–sometimes as a result of painful previous experiences like running cables through exterior bulkheads in harsh environments–that copper interconnects present too many challenges,” Kraft says. “In these situations, optical links provide a far simpler alternative.”

Gigabit Ethernet is more widely deployed than 10 Gigabit Ethernet, but eventually many of these users will upgrade to take advantage of the next-generation technology’s greater bandwidth, Staub says.

Without offload engines, processor efficiency can drop as low as 30 percent, Cox says.

Kraft says he sees system integrators soon taking advantage of 40 Gigabit Ethernet and then 100 Gigabit Ethernet. The military applications that will take advantage of these greater bandwidths include sensor fusion, radar, sonar, and network security and encryption, he adds.

End of AltiVec

The future looks bright for Ethernet applications, and sRIO designers are seeing much success, but their future holds a bit of uncertainty.

The next-generation PowerPC family from Freescale, the QorIQ, is built on a CPU core, the e500, that does not support the AltiVec engine that commercial off-the-shelf (COTS) single-board computer suppliers rely on for many of their military digital signal processing (DSP) systems. AltiVec is not being end-of-lifed; it is simply not offered in Freescale’s next-generation chip.

The problem for sRIO designers is that the main alternative to AltiVec, Intel’s family of multicore devices with SSE, does not have sRIO end points because Intel does not see the demand from the majority of its customer base, GE Fanuc’s Stern says.

“Our military and aerospace customers require high-speed, low-latency fabric interconnects between processing elements to support data I/O and interprocessor communication that is required to meet the real-time processing budgets for their applications,” Stern notes. “These include the requirement to field multiprocessor systems capable of providing the compute power to meet an expanding operational mission profile in radar, sonar, sensor processing, and communications, such as software-defined radio applications.

“Whereas Freescale, Texas Instruments, and other chip vendors have seen demand for sRIO-enabled end points from a significant segment of their market–defense and telecommunications–Intel has not so far seen the need to support sRIO on chip,” Stern continues. “Other candidate fabrics for these applications include Gigabit Ethernet, 10 Gigabit Ethernet, PCI Express, etc. Intel does support 10 Gigabit Ethernet and PCI Express in a big way. This means that COTS board vendors will need to support several fabrics over time. While sRIO is suited to distributed multiprocessing applications, it does not have as wide a market acceptance as PCI Express or 10 Gigabit Ethernet since it addresses a narrower application space.”

This is not surprising as sRIO is most popular among military signal processing designers. During a panel discussion at the Military & Aerospace Electronics Forum conference last year, Ron Parker of Intel said that the defense industry represents less than one percent of Intel’s business.

“The e600-based designs continue to be available from Freescale’s 90 nanometer process node and are widely used in military and aerospace circles for general purpose and DSP applications,” Stern says. “The Altivec extensions were developed by Motorola in the late 1990s to target application spaces such as high-performance graphics for desktop systems, such as the Apple Mac; however, this architecture soon found market acceptance in the defense arena because it provided a high-performance platform suitable for embedded DSP applications, such as radar, sonar, communications signals intelligence, electromagnetic intelligence, and image processing.”

The CHAMP-FX2 single-board computer from Curtiss-Wright Controls Embedded Computing uses a 6U VPX-REDI form factor, two Xilinx Virtex-5 FPGAs, the Freescale 8641D dual-core PowerPC processor, and supports Serial RapidIO.

At GE Fanuc, “we achieved several significant design wins that we continue to service now. Freescale does understand the embedded market and will support these devices for the long periods–10-plus years–expected by defense original equipment manufacturers and system integrators,” Stern continues.

Mercury has a “significant algorithm library that relies on the AltiVec vector engine,” Couture says. The next-generation Freescale device can handle some command-and-control applications but not the full-blown DSP tasks, such as Fast Fourier Transforms (FFTs), that are critical for military signals intelligence programs, Couture says.
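The porting problem is concrete: vector libraries are written against one instruction set's intrinsics, and every kernel must be rewritten for the other. A minimal sketch of the same multiply-accumulate loop, the sort of building block FFT and filter code rests on (not Mercury's actual library code), shows the two dialects side by side, assuming lengths that are multiples of four and 16-byte-aligned buffers:

```c
/* The same y[i] += a * x[i] kernel written twice: once in AltiVec
 * intrinsics (PowerPC e600) and once in SSE intrinsics (Intel x86).
 * DSP libraries built on one dialect must be rewritten for the other.
 * Assumes n is a multiple of 4 and 16-byte-aligned buffers. */
#include <stddef.h>
#include <stdio.h>

#if defined(__ALTIVEC__)
#include <altivec.h>

static void saxpy(float a, const float *x, float *y, size_t n)
{
    vector float va = (vector float){a, a, a, a};
    for (size_t i = 0; i < n; i += 4) {
        vector float vx = vec_ld(0, &x[i]);      /* aligned load       */
        vector float vy = vec_ld(0, &y[i]);
        vy = vec_madd(va, vx, vy);               /* fused multiply-add */
        vec_st(vy, 0, &y[i]);                    /* aligned store      */
    }
}

#elif defined(__SSE__)
#include <xmmintrin.h>

static void saxpy(float a, const float *x, float *y, size_t n)
{
    __m128 va = _mm_set1_ps(a);
    for (size_t i = 0; i < n; i += 4) {
        __m128 vx = _mm_load_ps(&x[i]);          /* aligned load       */
        __m128 vy = _mm_load_ps(&y[i]);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy); /* no fused op in SSE */
        _mm_store_ps(&y[i], vy);                 /* aligned store      */
    }
}

#else
/* Scalar fallback so the sketch compiles anywhere. */
static void saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}
#endif

int main(void)
{
    /* 16-byte alignment satisfies both vec_ld and _mm_load_ps. */
    static float x[8] __attribute__((aligned(16))) = {1, 2, 3, 4, 5, 6, 7, 8};
    static float y[8] __attribute__((aligned(16))) = {0};
    saxpy(2.0f, x, y, 8);
    for (int i = 0; i < 8; i++)
        printf("%g ", y[i]);                     /* prints 2 4 6 ... 16 */
    printf("\n");
    return 0;
}
```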

GE Fanuc’s latest board with AltiVec technology is the DSP230, which adds support for the MPC8640D, front-panel I/O, and a PCI Express-enabled PMC site. The Freescale MPC8640D system-on-chip (SoC), an e600 AltiVec multicore platform, is a low-power, pin- and code-compatible version of the MPC8641D, according to the GE Fanuc data sheet. The board is available in air-, spray-, and conduction-cooled versions. The DSP230 quad 864xx 6U VPX multiprocessor supports concurrent, any-node-to-any-node data movement over PCI Express and sRIO.

The QorIQ e500 platform will also be relevant to the part of the defense market where the AltiVec unit is not used; image processing is one example. “However, customers are looking for alternative CPU platforms to support their requirements in future,” Stern says.

The DSP230 quad 864xx 6U VPX multiprocessor board from GE Fanuc Intelligent Platforms supports concurrent, any-node-to-any-node data movement over PCI Express and Serial RapidIO.

Some companies are working around this issue with a PCI Express-to-sRIO bridge/switch device, but these devices are not yet available, and those designing them are under non-disclosure agreements, Stern says.

Another solution, Stern says, is to offer FPGA-based products that deploy custom IP to bridge from PCI Express to sRIO, yet FPGA-based solutions present a challenge because “they are costly in terms of power budget and card real estate. The FPGA solutions are also perceived in the market as proprietary because they are not available from a wide supplier base and they tend to be customized to cater to particular customer use cases.” Stern says the company is “working with industry partners including Intel to bring such solutions to the defense COTS board market. We will do this by providing AXIS software support to enable customers to migrate to the platforms.” GE Fanuc offers boards that include Intel devices, such as the SBC620, SBC341, and SBC320.

Mercury is working on a bridge chip and developing FPGA solutions through its Echotek group in Huntsville, Ala., which produces an FPGA processing board, the SCFE-V5-VXS, that carries FPGA Mezzanine Card (FMC) sites.

FPGAs

“SCFE stands for Stream Computing FPGA Engine,” Mercury’s Roberts says. It combines three Xilinx Virtex-5 FPGAs, two quad small form-factor pluggable (QSFP) fiber interfaces, two FMC sites, and several links for moving data among the FPGAs and I/O interfaces, he says. The board is for initial-stage processing of high-bandwidth data streams, with a great deal of flexibility in the types of formats, standards, and protocols it can accept. “There is also flexibility in the way data can be moved between different components, supporting many types of application-specific processing models,” he says. There is quite a bit of interest in the FMC, or VITA 57, standard because of its I/O capability and flexibility in terms of anti-tamper applications, Couture says.

Curtiss-Wright Controls Embedded Computing is getting a lot of traction on its FPGA offering, the CHAMP-FX2, Littlefield says. The device uses a 6U VPX-REDI form factor and two Xilinx Virtex-5 FPGAs, along with the Freescale 8641D dual-core PowerPC processor and an sRIO switching fabric. “With several large DDR2 SDRAM and fast QDR-II+ SRAM blocks, greater than 13 gigabytes per second total FPGA memory bandwidth, and several on-board and off-board RocketIO serial ports, the two FPGA nodes provide a mix of processing capabilities with memory, inter-FPGA, and off-board bandwidths.” The CHAMP-FX2 may be used in a one-board configuration, or with CHAMP-AV6 or VPX6-185 single-board computers over sRIO to form large, heterogeneous multicomputing platforms.


Gigabit Ethernet communications job for network-centric warship operations goes to Boeing

Engineers at the Boeing Co. Integrated Defense Systems segment in Huntington Beach, Calif., are providing Gigabit Ethernet communications networking for Australian Arleigh Burke-class destroyers under terms of a $14.6 million contract awarded this week.

This network-centric capability for Australian navy warships is called the Gigabit Ethernet Data Multiplex System (GEDMS), which is a shipboard network upgrade for the Burke-class destroyer warship.

The GEDMS system is a technology refresh of the Fiber Optic Data Multiplex System (FODMS) shipboard network; it will increase overall shipboard communications networking bandwidth by replacing the 100-megabit-per-second Fiber Distributed Data Interface (FDDI) backbone of the FODMS system with a Gigabit Ethernet backbone, a roughly tenfold increase.

The GEDMS system transfers data, command, or status messages between various types of user source and user sink devices. For survivability, the GEDMS system uses a mesh topology over two independent network backbones; each network uses backbone switch enclosures (BSEs) to connect to network and user links via fiber-optic cable.
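The survivability idea behind dual backbones is that every message rides both networks at once, so losing one switch enclosure or cable run leaves the other path intact; the receiver delivers the first copy of each sequence number and discards the duplicate. A minimal sketch of that duplicate-and-discard pattern follows; the ports and 4-byte sequence header are hypothetical illustrations of the general technique, not Boeing's GEDMS implementation.

```c
/* Duplicate-and-discard receiver for a dual-backbone network: the
 * same message arrives once per backbone; the first copy of each
 * sequence number is delivered and the duplicate is dropped. Ports
 * and header layout are hypothetical, not the GEDMS wire format. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>

static int open_rx(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family      = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port        = htons(port);
    bind(fd, (struct sockaddr *)&a, sizeof a);
    return fd;
}

int main(void)
{
    int fds[2] = { open_rx(16001),     /* backbone A (hypothetical) */
                   open_rx(16002) };   /* backbone B (hypothetical) */
    uint32_t last_seq = 0;             /* newest sequence delivered */
    int seen_any = 0;

    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(fds[0], &rd);
        FD_SET(fds[1], &rd);
        int maxfd = fds[0] > fds[1] ? fds[0] : fds[1];
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
            break;

        for (int i = 0; i < 2; i++) {
            if (!FD_ISSET(fds[i], &rd))
                continue;
            unsigned char buf[2048];
            ssize_t n = recv(fds[i], buf, sizeof buf, 0);
            if (n < 4)
                continue;
            uint32_t seq_net;
            memcpy(&seq_net, buf, 4);  /* 4-byte sequence header */
            uint32_t seq = ntohl(seq_net);
            /* First copy wins; the duplicate arriving on the other
             * backbone is silently discarded. */
            if (!seen_any || seq > last_seq) {
                seen_any = 1;
                last_seq = seq;
                printf("delivered seq %u from backbone %c (%zd bytes)\n",
                       seq, 'A' + i, n);
            }
        }
    }
    return 0;
}
```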

Boeing also is providing a land-based GEDMS trainer, GEDMS hardware, and installation, checkout, and repair services to the Australian navy.

GEDMS is a ship-wide data transfer network for a ship’s machinery, steering, navigation, combat, alarm and indicating, and damage control systems. It was designed to replace the miles of point-to-point cabling, signal converters, junction boxes, and switchboards associated with conventional ship’s cabling.

Work will be in Huntington Beach, Calif., and should be finished by early 2011. Awarding the contract was the Naval Surface Warfare Center Dahlgren Division in Dahlgren, Va.

