Dilemma: databus or switched fabric?

Feb. 14, 2005
Single-board computers rely on fabrics for speed, not stability, which presents designers of data-intensive, interrupt-driven, hard-real-time systems with a raft of difficult decisions.

By Ben Ames

Today's soldier carries more computing power on his belt than his father could load in the back of a Jeep. Sensors, meanwhile, gather data around the clock, and unmanned vehicles steer themselves through entire missions. It falls to the engineer to build single-board computers and mezzanine board computers that can handle this challenge.

Board designers today face a new performance bottleneck: modern processors are so fast that traditional parallel databuses cannot supply them with data quickly enough to exploit their speed.

The answer may be switched serial interconnects, an emerging family of high-speed networks, or "fabrics," capable of moving vast amounts of data among components on the board. Still, designers complain that these developing technologies are not yet reliable or predictable enough for battlefield use.

Many chipmakers are not waiting on the sidelines; under intense pressure to pick sides, each microprocessor brand is aligning with a different fabric.

Power efficiency also will drive the choice, since designers must choose between low-wattage systems for wireless and mobile tasks, or high-powered systems for demanding processing of data-intensive signals from radar and sonar sensors.

"You're going to see a lot more talk than shipments of fabrics," says Ray Alderman, executive director of the VMEbus International Trade Association (VITA) in Fountain Hills, Ariz. "2005 will be the year of reality; stuff that's over hyped will bite people that got too far down the pipeline with it."

That is because fabrics rely on software to run smoothly, and much of that code has not been written yet. Fabrics are 10 percent hardware and 90 percent software, but the industry has not yet come to grips with that, he says.

In the meantime, military designers will stick with what works.

"Fabrics cannot do what buses do. They can't do deterministic, real-time applications, and their latencies will always be higher. For hard real time apps, VME will be around for 20 years or better," Alderman says.

Designers have to crawl and walk with fabrics before they can run with them, Alderman says. Users will not upgrade with every incremental jump in fabric speed; they will wait until fabrics can operate reliably at 10 gigabytes per second.

While designers wait, those growing fabric standards will fragment into market shares throughout the electronics industry, experts predict.

Computer-makers want their products to be unique, not commodities, so each will adopt a different fabric to stay incompatible with the competition, Alderman says. In the telecommunications market, switch and router manufacturers will adopt different fabrics to stay incompatible with each other. In the industrial and commercial market, server manufacturers will do the same.

Three fabric categories

As they move ahead, they will choose fabrics from three categories, Alderman says:

-- a "tightly coupled, shared-everything" system, which is deterministic and hard-real time like VME and RapidIO, that enables all its processors to tap all its resources;

-- a "snugly coupled, shared-something" system, which runs in soft real time, that requires its components to share disk or memory space, such as Infiniband, StarFabric, and PCI Express Advanced Switching; and

-- a "loosely coupled, shared nothing" system like Ethernet, in which each processor has its own operating system and resources, and that supports very little relationship between the board and box.

That extreme independence spells trouble for Ethernet, Alderman warns. "One-gigabit Ethernet is an electronic version of cancer," he quips. "It takes one gigahertz of processing power to move one gigabit of data, and the protocol overhead is 70 percent of the processing requirement."

In contrast, InfiniBand uses remote direct memory access (RDMA) for memory-to-memory transfers between boards without taxing the microprocessor, requiring only 10 percent overhead, Alderman says.
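
A back-of-the-envelope calculation, using only the overhead figures Alderman cites above, shows what that difference means for a host processor. This is a sketch, not a benchmark; real overhead varies with the protocol stack, the network interface, and message sizes.

/* Back-of-the-envelope CPU-overhead comparison using the figures
 * quoted above: roughly 1 GHz of CPU per 1 Gbit/s of Ethernet
 * traffic, with about 70 percent of that burned on protocol work,
 * versus about 10 percent overhead for an RDMA transfer.
 * Illustrative only; real numbers depend on stack, NIC, and
 * message size. */
#include <stdio.h>

int main(void)
{
    double link_gbps     = 1.0;   /* data rate to sustain           */
    double cpu_per_gbps  = 1.0;   /* GHz consumed per Gbit/s moved  */
    double eth_overhead  = 0.70;  /* Alderman's Ethernet figure     */
    double rdma_overhead = 0.10;  /* Alderman's InfiniBand figure   */

    double eth_ghz  = link_gbps * cpu_per_gbps * eth_overhead;
    double rdma_ghz = link_gbps * cpu_per_gbps * rdma_overhead;

    printf("Ethernet protocol work: %.2f GHz of CPU\n", eth_ghz);
    printf("RDMA protocol work:     %.2f GHz of CPU\n", rdma_ghz);
    printf("CPU freed by RDMA:      %.2f GHz\n", eth_ghz - rdma_ghz);
    return 0;
}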

In September 2003, scientists at Virginia Tech in Blacksburg, Va., collected 1,100 dual-processor Power Mac G5 computers running Mac OS X, and tied them together with 24 high-speed InfiniBand switches from Mellanox Technologies in Santa Clara, Calif.

The resulting "Big Mac" supercomputer ranks in the top five in the world, despite a cost of only $5 million. Researchers use the cluster to examine nanoscale electronics, chemistry, aerodynamics, molecular statics, computational acoustics, and molecular modeling.

This supercomputer has its own meaning for computer scientists, Alderman says. It made people notice that InfiniBand has a niche in supercomputing, and that its low latency could also serve embedded applications such as radar and sonar arrays, he says. That is why planners chose InfiniBand for the new VXS board standard.

Still, Alderman has not given up on Ethernet yet.

Researchers on IEEE's 802.3 committee are putting RDMA into 10 Gigabit Ethernet and finding a way to run it over a pair of copper backplane wires. Those improvements would move the fabric squarely into Alderman's second category and qualify it for soft-real-time embedded applications, he says.

Simple jobs keep Ethernet popular

Three years ago, planners at the PCI Industrial Computing Manufacturers Group (PICMG) in Wakefield, Mass., saw the newest processors running faster than parallel backplane databuses could feed them data. As a result, PICMG created the 2.16 standard for running Ethernet over the backplane.

Other fabric options have surfaced since then, and military designers are scrambling to predict which will prevail. In fact, that choice may be decided more by marketing than by technology, says Joe Pavlat, president of PICMG.

Processor manufacturers are trying to preserve market share by choosing different fabric standards. That is why PowerPC chips from Freescale Semiconductor of Austin, Texas (spun off from Motorola), will use Serial RapidIO, and Intel's Pentium family will use PCI Express, he says.

"The switched serial interconnects are falling along the CPU-manufacturer boundaries. That's another reason for Ethernet. Because it's the only remaining ubiquitous switched serial interface," Pavlat says.

Before PCI, everyone used proprietary bus designs, he says. PCI was the only ubiquitous databus the industry had ever seen, and now that is going away; Serial RapidIO will never interface with PCI Express.

Still, those options make up just 20 percent of the market. "The switched serial fabric that powers 80 percent of all transactions on the planet is Ethernet. That's because Ethernet is good enough, it's cheap enough, and people understand it," Pavlat says.

Military designers are loath to use Ethernet because its high software protocol overhead leads to latency and poor determinism, he admits. In comparison, choices such as PCI Express, StarFabric, and Serial RapidIO are more deterministic, since they run with known latency and known jitter.

Still, Ethernet is not a lost cause. Engineers are trying to fix its overhead problem with TCP/IP offload engines (TOEs), and as the standard grows toward 10 Gigabit Ethernet, its sheer speed will overwhelm those shortcomings. Pavlat cites Internet telephony -- voice over Internet Protocol (VoIP) -- as proof that Ethernet can produce dependable timing.

If Ethernet can handle Internet telephony, it can handle battlefield communications, short of hard-real-time tasks such as fire control and avionics, he says.

"New military technologies like WIN-T and Net-Centric Warfare are just moving voice, video, and radar data. Those are high-density communications interfaces, so they will need Ethernet too. You will be trading battlefield pictures instead of American Idol, but it's still video," Pavlat says.

Computer makers are also picking sides; Dell began shipping its PCs in 2004 with PCI Express instead of the usual PCI, he says. Most users will never know they are using a switched serial interconnect instead of a parallel bus, but Dell will reap the technical advantages.

Parallel buses like PCI and VME will fail if one board in the system fails, but switched serial systems can work around those bad components. That flexibility will also help to insulate military designers from parts obsolescence, Pavlat predicts.

Fabrics also suffer less than parallel buses do from electromagnetic interference (EMI). Switched serial interconnects use differential signaling, which makes them highly resistant to noise, and they run at low voltages, so they also generate less noise.

Whoever wins the fabric wars, PICMG planners will stay flexible. Now the group is pushing fabrics into another corner of the industry by evolving beyond the PCI Mezzanine Card (PMC). The new module is the AMC -- short for Advanced Mezzanine Card -- which is "a PMC module on steroids," Pavlat says. The AMC standard describes a hot-swappable mezzanine card with versions to accommodate Ethernet, PCI Express, or RapidIO.

Market share shifts

Among bus architectures, the VME standard has a large advantage in market share compared to Compact PCI, but its lead is shrinking, says Eric Gulliksen, embedded hardware group manager at Venture Development Corp. (VDC) in Natick, Mass.

VDC researchers performed a market survey in April 2004, measuring manufacturers' sales numbers for all types of electronics -- single-board computers, I/O cards, digital signal processing boards, graphics, networking, backplanes, mass storage, and others -- to the North American military and aerospace COTS market.

In 2003, the split was $291.4 million for VME compared to $41.8 million for Compact PCI. Both bus architectures are predicted to grow in coming years, but Compact PCI will grow much faster. Predicted sales for 2008 will reach $315.7 million for VME and $82.8 million for Compact PCI, VDC researchers say.

Sales in Western Europe show the same trend, although the market is about one-fourth the size. In both regions, single-board computers alone represent roughly half the total market of electronic devices.

"So there is a trend toward Compact PCI away from VME," Gulliksen says. That migration will happen faster in the rest of the world than the U.S.

"The reason is we're at war. Field commanders don't want to make technology changes and go to war with it," Gulliksen says. "Compact PCI offers some size advantage and some cost advantage, so will go into some naval vessels and aircraft, but not in a wholesale way until the war is over."

Another reason the rest of the world will adopt Compact PCI more quickly is that those buyers are purchasing new electronics, not replacing current gear. That means they lack the huge installed base of VME that acts as market-share inertia, Gulliksen says.

Military designers are still slow to adopt fabrics. Of the $41.8 million of Compact PCI devices shipped to the North American military and aerospace market in 2003, 92.6 percent were not fabric-enabled, VDC researchers say. Just 6.9 percent used PICMG 2.16, the Ethernet standard, and 0.3 percent used other fabrics, including PICMG 2.17, the StarFabric standard.

Fabric use will rise quickly, however. Of the $58 million market predicted for Compact PCI in 2005, the share of non-fabric devices will fall to 89 percent, with PICMG 2.16 rising to 9.7 percent, according to VDC figures.
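
Converting VDC's percentages into dollar figures makes the scale of the shift concrete. The short calculation below uses only the market sizes and shares quoted above.

/* Convert VDC's Compact PCI fabric shares into dollars, using the
 * 2003 ($41.8 million) and predicted 2005 ($58 million) market
 * sizes cited above. */
#include <stdio.h>

int main(void)
{
    double m2003 = 41.8;   /* 2003 Compact PCI market, $M           */
    double m2005 = 58.0;   /* predicted 2005 Compact PCI market, $M */

    printf("2003: non-fabric $%.1fM, PICMG 2.16 $%.1fM, other $%.1fM\n",
           m2003 * 0.926, m2003 * 0.069, m2003 * 0.003);
    printf("2005: non-fabric $%.1fM, PICMG 2.16 $%.1fM\n",
           m2005 * 0.890, m2005 * 0.097);
    return 0;
}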

Board makers stay flexible

Military planners are seeking ways to build network connectivity throughout the battlespace, tied into the overarching Global Information Grid (GIG).

To meet that goal, electronics engineers will have to design each single-board computer and node with its own Internet Protocol (IP) address, says John Wemekamp, chief technology officer for Curtiss-Wright Embedded Controls in Kanata, Ontario.

Fortunately, the trend toward miniaturization means that single-board computers are becoming single-board systems on a card, combining several processors, onboard memory, high-speed I/O, and switched-fabric connections.

Curtiss-Wright, for example, makes a quad-PowerPC card for the signal-processing market. Boeing uses this product for its Operational Flight Program (OFP), running four operating systems and four applications on one board, he says.

Military designers are using single-board computers for digital signal processing because they need the extra horsepower for jobs such as sensor processing, data fusion, relaying information to the right operator, and autonomous operations of unmanned vehicles.

"They need more MIPS," Wemekamp says, referring to computer speed measured by million instructions per second. "No matter how much we give them, they want more."

These computing speeds demand faster connections than traditional VME and Compact PCI can offer. So single-board computer makers are eagerly awaiting VITA 46, the emerging standard for high-speed serial interconnects as board I/O, he says. Likewise, VITA 42 (XMC) will provide a new standard for switched mezzanine cards, adding connectors rated for multigigabit serial links.

Already, notebook and desktop makers are transitioning from parallel buses to PCI Express, as are graphics-accelerator chipmakers like 3Dlabs, ATI, and Nvidia, he says. Other options for badly needed high-speed interconnects include Serial RapidIO and StarFabric.

As hardware makers build these choices into their products, designers will have to pick fabrics to match their mezzanines and processors. And given their small market share, designers of military products will probably not drive that choice, but will ride the coattails of the commercial world, he says.

Regardless of those choices, options like PCI Express Advanced Switching and RapidIO will be available for years, so Curtiss-Wright board designers will produce electronics flexible enough for any option.

"We'll try to stay fabric agnostic, using a middleware software layer to protect our customers' investments and allow them to migrate," Wemekamp says.

Customers demand flexibility.

"We often have to upgrade to use fewer boxes (LRUs), and also support legacy interfaces. So even our latest single-board computers need 1553 ports on base cards," Wemekamp says.

At the same time, designers of new systems like the Army's Future Combat Systems are looking at emerging standards such as Gigabit Ethernet, USB, and Serial ATA, as well as new switch fabrics. Fitting all those options on a card forces Curtiss-Wright designers to confront thermal challenges, with rising watts per card.

Future single-board computers must be tailored for specific applications -- running a processor slowly if the board gets too hot, for example. "People are concerned about power; they can't cool it. We can provide enough horsepower to shrink from six to two LRUs, but the boards get too hot. So people usually go with the thermal limit, because they have more horsepower than they need anyway," Wemekamp says.
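
The throttling Wemekamp describes is, at heart, a feedback loop with hysteresis. The minimal sketch below illustrates the pattern; read_board_temp_c() and set_cpu_clock_mhz() are hypothetical stand-ins for board-specific drivers, fed here by canned temperature readings.

/* Minimal sketch of thermal throttling with hysteresis: drop the
 * CPU clock above a temperature limit, and restore it only after
 * the board cools well below that limit.  Drivers are simulated. */
#include <stdio.h>

#define TEMP_LIMIT_C   85.0   /* throttle above this temperature */
#define TEMP_RESUME_C  75.0   /* restore full speed below this   */
#define CLOCK_FULL_MHZ 1000
#define CLOCK_SLOW_MHZ 600

/* Hypothetical drivers, simulated with canned readings. */
static double readings[] = { 70, 80, 88, 90, 84, 78, 72 };
static int tick;
static double read_board_temp_c(void) { return readings[tick++]; }
static void set_cpu_clock_mhz(int mhz) { printf("clock -> %d MHz\n", mhz); }

int main(void)
{
    int mhz = CLOCK_FULL_MHZ;

    for (int i = 0; i < 7; i++) {
        double t = read_board_temp_c();

        /* The hysteresis band keeps the clock from oscillating
         * when the temperature hovers near the limit. */
        if (t > TEMP_LIMIT_C && mhz != CLOCK_SLOW_MHZ) {
            mhz = CLOCK_SLOW_MHZ;
            set_cpu_clock_mhz(mhz);
        } else if (t < TEMP_RESUME_C && mhz != CLOCK_FULL_MHZ) {
            mhz = CLOCK_FULL_MHZ;
            set_cpu_clock_mhz(mhz);
        }
        printf("temp %.0f C, clock %d MHz\n", t, mhz);
    }
    return 0;
}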

Here comes the heat

Thermal management is the next big frontier for single-board computers, agrees PICMG's Pavlat.

"Things are moving from parallel databuses to switched serial interfaces, and that's a good thing," he says. "But cooling is the next major engineering challenge we'll face. It's already started: in April 2004, Apple started shipping the Power Mac G5, the first commercial liquid cooled product."

One problem facing all industries -- military, industrial, and telecommunications -- is the upward trend in power density and the increasing heat that results from multi-core processors. Designers will be forced to move beyond air-cooled electronics and begin to use liquid cooling. Designs like the Advanced Telecom Computing Architecture (ATCA) have already pushed air-cooled electronics to the limit.

"We spent the 90s getting faster, and now we have to figure out how to manage the heat," Pavlat says. We're at the stage of saying "Oh great, a 150-watt processor. Now what do I do with it?" he said. "You could use a box full of air-cooled 30-watt processors, or use two liquid cooled 150-watt processors, which may actually be cheaper."

Fabrics run today

High-speed fabric interconnects are not just plans on paper; they are being deployed today. "Fabric architecture has been critical for recent design wins, particularly for signal-processing applications," says David Compston, director of marketing for Radstone Technology in Woodcliff Lake, N.J. "So connectivity and high-bandwidth interconnects are where we've been focusing."

In the past, military designers with high-bandwidth requirements had to use proprietary backplane interconnects. Today they are looking at StarFabric, Compston says. Another popular option is PCI Express Advanced Switching, still under development by engineers at StarGen in Marlborough, Mass.

Radstone leaders plan to launch a StarFabric switch in early 2005, intended for programs such as Apache Block 3, and various applications in naval radar, ground-mobile radar, and mine detection.

At the same time that military applications are getting faster, they are getting smaller. "6U is usually where we see state of the art processing, but we're now seeing requirements for smaller, more integrated systems with full capabilities, driven by the market for unmanned vehicles and the need to reduce power and reduce space," he says.

Taken together, these trends present an engineering challenge: a fast, small computer creates heat. Radstone engineers have found an advantage in dissipating heat from the 3U size, however, since the processors sit closer to the sidewalls than they would in a 6U box.

Single-board computers shrink onto one chip

Even as new technologies provide better computers, military requirements are growing even faster, says Craig Lund, chief technology officer for Mercury Computer Systems in Chelmsford, Mass.

High-performance applications include sensors that collect high-volume data streams, multi-mission computing, and the need to cram compute power into constrained environments as processing moves closer to the sensors.

The solution to all those engineering challenges is switch fabric-based architectures, running with multiple processor configurations, he says. Multi-processor chips are a near-term reality, but they demand complex software to manage the raw speed.

Another approach is "system-on-a-chip" (SOC) technology, which is quickly beginning to include much of the functionality now found on a traditional single-board computer. Military systems that used commercial off-the-shelf (COTS) single-board computers do not always need those boards anymore; they can simply stick an SOC chip into the corner of some other board in the system, Lund says.

Because of their small size, SOCs soon could proliferate across the system as super-intelligent I/O controllers, also handling other functions that previously required more application-specific devices.

Fabrics are crucial here, too. Instead of the buses that connect a processor to peripheral chips on today's single-board computers, such a sea of SOCs requires the high-speed, peer-to-peer connections of a switch fabric, Lund says.

This is not to say that single-board computers will disappear. A market will remain for modules that use the highest-performance processors, which generate too many watts to fit on another board the way an SOC can.

Fabrics boost efficiency

Designers at Analog Devices Inc. (ADI) in Norwood, Mass., will include a fabric port on their TS301 TigerSHARC digital signal processor next year, says Michael Long, the company's strategic marketing manager for digital signal processing. Still, he insists that chip manufacturers cannot blaze the trail alone; COTS board manufacturers will have to support fabric standards, too.

That is one reason that ADI designers will build a variant of the TigerSHARC to support three different fabrics: Serial Rapid IO, PCI Express, and Gigabit Ethernet.

Users choose RapidIO for its efficiency, he says; the standard delivers strong performance per square inch without burning excessive watts. Applications such as radar, sonar, and missile tracking have much larger power budgets per board than a wireless application does.

The challenge is how to perform fast Fourier transforms (FFTs) and digital signal processing with both horsepower and efficiency. That task is complicated because existing solutions cannot keep the processing core fed with data from off-chip storage.

ADI designers cope by integrating enough memory on the chip to perform FFTs without fetching data from off-chip, Long says.

They also have moved from SRAM to denser embedded DRAM, integrating four times the memory in the same die area -- jumping from 6 to 24 megabits. Embedded DRAM also draws less power and has a lower error rate.
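
A quick footprint check shows why that on-chip capacity matters for FFT work. The sketch below assumes 8-byte single-precision complex samples, an in-place transform, and the 24-megabit figure above; a real budget would also reserve room for twiddle factors and program code, so the practical limit is somewhat lower.

/* How large an in-place FFT fits in 24 Mbits (3 MB) of on-chip
 * memory, assuming 8-byte single-precision complex samples.
 * Twiddle factors and code shrink the practical figure. */
#include <stdio.h>

int main(void)
{
    long on_chip_bytes = 24L * 1024 * 1024 / 8;  /* 24 Mbit = 3 MB */
    int  bytes_per_pt  = 8;                      /* complex float  */
    long max_points    = on_chip_bytes / bytes_per_pt;

    /* Largest power-of-two transform that fits entirely on chip. */
    long n = 1;
    while (n * 2 <= max_points)
        n *= 2;

    printf("raw capacity: %ld complex points\n", max_points);
    printf("largest power-of-two FFT: %ld points (%ld KB)\n",
           n, n * bytes_per_pt / 1024);
    return 0;
}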

The problem is even worse for wireless applications, which must buffer massive amounts of antenna data, straining throughput and latency.

Fabrics can be a solution, moving data from device to device or node to node. But the high overhead of some fabrics means they work better as a backplane between cards in a rack than between devices on a single board.

Today, many users choose PCI Express for commercial applications and Serial RapidIO for military, aerospace, and communications, Long says.

That division is as much from force of habit as for technical reasons, he says. Intel's backing of PCI Express has pushed it into many consumer applications. And military designers envy the pure multi-gigahertz clock speed of Pentium chips, but cannot support their power and heat requirements.

Hardware pushes fabrics to market

The RapidIO interconnect is well suited for military and aerospace applications, agrees Andrew Bunsick, product marketing manager for Altera Corp. in Kanata, Ontario.

That is because RapidIO was developed specifically as a high-performance, packet-switched interconnect technology, designed to pass data and control information between microprocessors, digital signal processors (DSPs), communications and network processors, system memories, and peripheral devices, he says.

It is also a good match because it offers a common interconnect protocol for host and control processors and DSPs. And it provides scalability through its point-to-point I/O technology and switch-based architecture.

All industries are slow to adopt new interconnects, largely because hardware manufacturers are slow to provide complementary products, Bunsick says.

Manufacturers of Application Specific Standard Products (ASSPs) have already released RapidIO switches and plan to release Serial RapidIO switches in early 2005.

However, these switches provide a fixed number of RapidIO ports, not tailored to users' system requirements, so many users may need multiple devices to handle their switching and bridging needs.

One solution is the field programmable gate array, which offers the ability to bridge from RapidIO to anything, to support any number of switch ports, and to deliver any DSP function, he says.

Company information

Acromag Inc.
Wixom, Mich.
www.acromag.com

Ampro Computers
San Jose, Calif.
www.ampro.com

Carlo Gavazzi Mupac Inc. Electronic Packaging
Brockton, Mass.
www.carlogavazzi.com

Crystal Group Inc.
Hiawatha, Iowa
www.crystalpc.com

Curtiss-Wright Embedded Controls
Kanata, Ontario
www.dy4.com

Diversified Technology
Ridgeland, Miss.
www.dtims.com

DNA Computing Solutions
Richardson, Texas
www.dnacomputingsolutions.com

GE Fanuc Embedded Systems
Ventura, Calif.
www.geindustrial.com/cwc/gefanuc/embedded/

General Micro Systems
Rancho Cucamonga, Calif.
www.gms4vme.com

Lockheed Martin Systems Integration
Owego, N.Y.
www.lockheedmartin.com/si

Macrolink
Anaheim, Calif.
www.macrolink.com

Maxwell Technologies
San Diego
www.maxwell.com

MEN Micro USA
Carrollton, Texas
www.men.de

Mercury Computer Systems
Chelmsford, Mass.
www.mc.com

Motorola Embedded Communications Computing Group
Tempe, Ariz.
www.motorola.com/computers

Nallatech Inc.
Eldersburg, Md.
www.nallatech.com

North Atlantic Industries Inc.
Bohemia, N.Y.
www.naii.com

Parvus
Salt Lake City, Utah
www.parvus.com

Pentek
Upper Saddle River, N.J.
www.pentek.com

Radstone Technology
Towcester, England
www.radstone.com

Sarsen Technology
Marlborough, England
www.sarsen.net/sarsen-manufacturer-bittware.html

SBS Technologies
Raleigh, N.C.
www.sbs.com

Sky Computers
Chelmsford, Mass.
www.skycomputers.com

TEWS Technologies
Reno, Nev.
www.tews.com

Thales Computers
Raleigh, N.C.
www.thalescomputers.com

Themis Computer
Fremont, Calif.
www.themis.com

VMETRO
Houston, Texas
www.vmetro.com
