The evolution of embedded computing chassis, backplanes, and enclosures

Feb. 25, 2021
High data throughput and innovative thermal management may lead to a revolution in systems design that places the burden of electronics cooling on the enclosure more than on the card.

The traditional embedded computing backplane and chassis design is subject to many pressures these days: increasing demands for thermal management, open-systems standards such as the Sensor Open Systems Architecture (SOSA), growing requirements for data throughput and data input/output (I/O), and steady demand for customization within accepted industry standards.

These pressures are leading to innovations that, taken together, may be altering embedded chassis and enclosure design in a fundamental way, as systems integrators seek to accommodate rapid upgrades at the chip and board level on the one hand, while maintaining fundamental chassis and backplane design approaches on the other.

Suffice it to say that today’s embedded computing chassis and backplanes are far from your father’s VME architectures. Pressure to deal with ever-growing
amounts of heat, ever-shrinking electronic components and board architectures, and the need to produce ruggedized computing subsystems in ever-smaller form factors will ensure a rapid pace of change for embedded computing backplanes and enclosures.

Thermal management

It’s clear that computer components that get too hot do not perform to their designed specifications. The objective, then, is to cool these hot components sufficiently so they can operate at top performance. The problem, however, involves size, weight, and power consumption — better known as SWaP. Systems designers want high performance in small packages, but the smaller the package, the more concentrated the heat and the harder it is to remove.

“Thermals and cooling are more and more key,” says Ram Rajan, senior vice president of engineering and research at chassis and electronics enclosure specialist Elma Electronic Inc. in Fremont, Calif. A decade or more ago the electronics cooling and thermal management challenge was relatively uncomplicated. Ruggedized systems for avionics or land vehicles featured convection cooling, where fans blew heat away from hot components, or conduction cooling, where processor heat flowed to card wedge locks and out through the walls of the chassis. For all but the most demanding applications, these approaches were sufficient for many years.
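As a rough illustration of that conduction path, the junction temperature of a conduction-cooled card can be sketched as a stack of thermal resistances between the device and the chassis wall. The values below are illustrative assumptions, not Elma figures:

```latex
% Illustrative conduction-cooling budget (assumed values, not vendor data)
T_{\text{junction}} = T_{\text{wall}} + P \left( R_{\text{junction-case}} + R_{\text{card}} + R_{\text{wedge lock}} \right)
% Example: P = 100~\text{W} with a combined R_{\text{total}} = 0.4~^{\circ}\text{C/W}
% gives \Delta T = P \, R_{\text{total}} = 40~^{\circ}\text{C} above the chassis wall.
```

Because that resistance stack is essentially fixed by the card and chassis mechanics, the temperature rise scales directly with card power, which is why conduction cooling alone runs out of headroom as board wattage climbs.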

It’s different today, however, as embedded computing systems deliver higher performance, and run far hotter, than ever before. “The trend is going away from air-cooled, and to conduction cooling and liquid cooling,” says Elma’s Rajan. Driving demands on thermal management are advanced applications such as signals intelligence (SIGINT) and electronic warfare (EW), and ever-tighter packaging for SWaP.

This is leading to innovations in thermal management within chassis and enclosures. Elma designers, for example, rely on a hybrid cooling approach that combines convection and conduction cooling called Air Flow Through (AFT), which is outlined in the ANSI/VITA 48.8 open-systems standard. Elma’s Rajan says these approaches historically have been for niche aerospace and defense applications, or for test-and-development chassis products.

“Compared even to five years ago, the cooling requirements have jumped up significantly because of the higher-wattage boards,” explains Justin Moll, vice president of sales and marketing at Pixus Technologies in Waterloo, Ontario. “Now we need to cool in an air-cooled chassis maybe 2,000 to 2,500 Watts — and in some cases our customers want these to be in rugged deployable chassis.”
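Those figures translate into substantial airflow. The back-of-the-envelope sketch below, based on the basic heat-capacity relation, is an illustration rather than Pixus’s sizing method; the air density and the allowable inlet-to-outlet temperature rise are assumptions:

```python
# Back-of-the-envelope airflow estimate for a forced-air chassis.
# Assumed conditions (not Pixus data): sea-level air, 20 C inlet-to-outlet rise.

AIR_DENSITY = 1.2           # kg/m^3 at roughly sea level and room temperature
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)
CFM_PER_M3_PER_S = 2118.88  # cubic feet per minute in one cubic metre per second

def required_airflow_cfm(power_w: float, delta_t_c: float) -> float:
    """Volumetric airflow needed to carry away `power_w` of heat with a
    steady-state temperature rise of `delta_t_c` across the chassis."""
    mass_flow_kg_s = power_w / (AIR_SPECIFIC_HEAT * delta_t_c)
    volume_flow_m3_s = mass_flow_kg_s / AIR_DENSITY
    return volume_flow_m3_s * CFM_PER_M3_PER_S

if __name__ == "__main__":
    for watts in (2000, 2500):
        print(f"{watts} W at a 20 C rise -> {required_airflow_cfm(watts, 20):.0f} CFM")
```

Even with a generous 20-degree rise, a 2,500-watt chassis needs on the order of 200 cubic feet per minute of air moving through it — and at altitude, where the air is thinner, the requirement climbs further.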

There was a time not long ago when this kind of cooling capability came only in chassis intended for benign environments, but not so today. “One thing that is changing is people want that kind of performance in the rugged chassis, as well,” Moll says.

While conduction cooling dominated rugged embedded computing systems just a few years ago, the trend today is shifting to approaches that blend in air cooling, like VITA 48.8 designs, he says. “We are seeing more air cooling today over conduction cooling,” Moll says.

A hybrid electronics cooling approach similar to VITA 48.8, called Air Flow By, is outlined in VITA 48.7. Both approaches blend conduction and forced-air cooling to wring the most performance possible out of embedded computing systems.

“We have to have card guides for wedge locks, but there is air going through or passing over the boards,” Moll says. “There is definitely much more of a push to flow air over the conduction-cooled boards to supplement the cooling. We are seeing more of that over typical conduction cooling for VPX type of systems.”

Because modern embedded computing architectures can generate so much heat, designers are seeing a spike in demand for liquid-cooled chassis and modules. “In the last year we have done more liquid-cooling designs than we had in the previous 20 years,” points out Elma’s Rajan. Similarly, Pixus experts plan to announce an electronics enclosure later this year that cools hot components by running liquid through the chassis walls.

Design transformation

The challenges of hybrid and liquid cooling for embedded computing systems are encouraging systems designers to question some fundamental chassis and enclosure design assumptions as they seek the most efficient ways to cool super-heated systems, as well as to accommodate rapid systems upgrades and technology insertion.

Modern embedded computing design typically calls for thermal management at the card level. Circuit boards are designed specifically for conduction cooling, convection cooling, hybrid cooling, or liquid cooling. Today some experts are starting to question whether thermal management should move primarily to the enclosure level, rather than remain at the card level.

“We have watched power density increase year-over-year for the past handful of years, and what’s happened is the approach of conduction cooled cards in enclosed chassis has solved the vast majority of the needs for deployed military electronics,” explains Jacob Sealander, chief architect for C5ISR Systems at Curtiss-Wright Defense Solutions in Ashburn, Va.

Traditional conduction cooling simply cannot keep up with today’s rapid evolution in processor cards. As state-of-the-art processing moves from boards to systems-on-chip, heat generation increases rapidly with the shrinking size of general-purpose processors, field-programmable gate arrays (FPGAs), and general-purpose graphics processing units (GPGPUs).

In this situation “you are moving from components that are 50 or 60 Watts apiece to components that are 120 Watts apiece,” Curtiss-Wright’s Sealander says. “Cooling technology is bumping into the same thing as the high-end processors out there, which have employed non-standard cooling technologies. They are not de facto solutions, but are exotic solutions that are becoming more of a mainstay.”
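To put that jump in perspective, spread over the standard 3U form-factor footprint of roughly 100 by 160 millimeters — an illustrative basis, not a Curtiss-Wright figure — a doubling in module power doubles the average heat flux the chassis must extract:

```latex
% Illustrative heat-flux comparison on an assumed 3U footprint (100 mm x 160 mm = 160 cm^2)
q_{60\,\text{W}} = \frac{60\ \text{W}}{160\ \text{cm}^2} \approx 0.38\ \text{W/cm}^2,
\qquad
q_{120\,\text{W}} = \frac{120\ \text{W}}{160\ \text{cm}^2} \approx 0.75\ \text{W/cm}^2
```

In practice the heat is concentrated under a handful of devices rather than spread evenly across the card, so local fluxes run far higher than these averages.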

While the old VME design approach several years ago resulted in stable and predictable architectures, the rise of OpenVPX has resulted in a fragmenting of industry standards, which can confound systems integrators.

“Cards are all very different designs for liquid and Air-Flow-Through,” Sealander says. “One of the things industry is pushing for is not to have such wildly different designs for cooling, because the costs are too great. So we need to ask how can I accommodate these different methods of dealing with heat without dealing with wildly different card form factors.”

Hands-down, Sealander says, industry needs a manageable set of standards that can deal with high power density. “There will be a continued push for a standardized card that can deal with all the different cooling methodologies,” he says.

Despite this, the SOSA and C5ISR/EW Modular Open Suite of Standards (CMOSS) efforts still are attempting to standardize card-level interfaces, rather than the box level. That may be in the process of changing.

“People are talking more about solutions these days rather than about hardware like chips and boards,” Sealander says. “CMOSS and SOSA are embracing that by standardizing on the hardware to get that pool-of-resources approach that you see in the cloud and IT world ... what is the hardware layer we need to get to the desired functionality? CMOSS and SOSA are helping the military electronics market embrace the cloud computing and virtualized space methodology. That’s where we need to go because rather than having computer hardware dedicated to a function, you can have a pool of resources that can achieve that functionality.”

Sealander says Curtiss-Wright is in the initial stages of devising a single-card design that accommodates a wide variety of cooling methodologies, and places the difficult burden of thermal management into the chassis and enclosure. “Rather than designing how the card is cooled, we could put the complexity into the enclosure itself,” Sealander says. “The cooling fluid itself could be air, or liquid — both are viable for pulling heat off cards.”

This approach could enable embedded computing designers to change card designs quickly to meet customer demands, while relying on enclosure and chassis designs that change slowly yet accommodate new card designs rapidly. “The enclosure part of the infrastructure could change slowly: the metal box and the cabling. That is where the need is. The electronics is changing quickly, and if we don’t want to keep fighting against it, we want to adapt at the speed of technology. With natural conduction or forced-air convection, the cards all have to be different. We are working toward a single card design to take advantage of advanced cooling.”

SOSA and standards

SOSA, CMOSS, and related emerging standards seek to distill the proliferating OpenVPX standards into a manageable set of guidelines for aerospace and defense electronics designs, which is a driving trend in electronic chassis and enclosures. “The desire for systems to support what were previously multiple, physically separated functions on one converged system is driving the need for more cores and support for virtualization,” says Peter Thompson, vice president of product management at Abaco Systems Inc. in Huntsville, Ala.

The U.S. military services also are increasing their support for SOSA, CMOSS, and a variety of other related open-systems standards. The U.S. secretaries of the Navy, Army, and Air Force have issued the so-called “Tri-Service Memo” directing the Pentagon’s service acquisition executives and program executive officers to use open-systems standards that fall under the umbrella of the Modular Open Systems Approach (MOSA) project, of which SOSA is a part.

SOSA, which revolves around the VITA OpenVPX embedded computing standard, focuses on single-board computers and how they can be integrated into sensor platforms. It involves a standardized approach to how embedded systems interrogate sensor data to distill actionable information.

CMOSS is intended to move the embedded industry away from costly, complex, and proprietary solutions and toward readily available, cost-effective, and open-architecture commercial off-the-shelf (COTS) technologies. It was started at the Army Communications-Electronics Research, Development and Engineering Center (CERDEC) at Aberdeen Proving Ground, Md.

“The SOSA effort is an area where there is significant activity,” says Pixus’s Moll. “It is using backplanes that employ some variation of VITA 66 for optical and VITA 67 for RF interfaces over the backplane.”

Backplane speeds

Systems designers also are demanding increasing data throughput from backplane and enclosure manufacturers. “The speeds of the backplane are increasing from the PCI Express Gen 3 type of speed of eight gigabaud. Now 40 gigabit Ethernet is becoming common, but in the future we are seeing PCI Express Gen 4, and eventually 100 Gigabit Ethernet,” Moll says. “Together with these faster boards, the drive is more speed in less space.”

Abaco’s Thompson also identifies 100 Gigabit Ethernet interconnects and switches as a major trend in what Abaco’s customers are looking for. Echoes Elma’s Rajan, “We are getting more and more demands for 25-gigabit backplanes. For that you would need PCI Gen 4 for any of the high signaling.”
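The raw numbers behind those fabrics can be sketched quickly. The snippet below works only from published line rates and encoding overheads; the x8 link width is an illustrative assumption, and real throughput also depends on protocol overhead:

```python
# Nominal per-lane and per-link rates for the backplane fabrics mentioned above.
# Link widths are illustrative; payload throughput is lower once protocol overhead is counted.

def pcie_lane_gbps(transfer_rate_gtps: float, payload_bits: int, total_bits: int) -> float:
    """Usable bits per second on one PCIe lane after line encoding."""
    return transfer_rate_gtps * payload_bits / total_bits

pcie_gen3 = pcie_lane_gbps(8.0, 128, 130)    # ~7.9 Gb/s per lane (8 GT/s, 128b/130b)
pcie_gen4 = pcie_lane_gbps(16.0, 128, 130)   # ~15.8 Gb/s per lane (16 GT/s, 128b/130b)

if __name__ == "__main__":
    for name, lane_rate in (("PCIe Gen 3", pcie_gen3), ("PCIe Gen 4", pcie_gen4)):
        print(f"{name}: {lane_rate:.2f} Gb/s per lane, x8 link ~{lane_rate * 8:.0f} Gb/s")
    for name, rate in (("40 Gigabit Ethernet", 40), ("100 Gigabit Ethernet", 100)):
        print(f"{name}: {rate} Gb/s per port (nominal MAC rate)")
```

For reference, 100 Gigabit Ethernet across a copper backplane is typically carried as four lanes of roughly 25 Gb/s each, which is the per-lane signaling rate behind the 25-gigabit backplane requests Rajan describes.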
