Posted by John Keller
ALBUQUERQUE, N.M., 18 April 2012.
High-performance computing for rugged mobile military embedded systems has become a hot topic, with several major players in this market voicing interest or announcing upcoming products to help satisfy the insatiable appetite for processing power of aerospace and defense systems like vetronics, radar, and electronic warfare.
High-performance computing -- which seems to be the contemporary term for what used to be called supercomputing
-- came up in conversation this week during several meetings I had in Arizona and New Mexico to find out some of the latest trends in rugged embedded computing.
Engineers in the Intel Corp. Intelligent Systems group in Chandler, Ariz., are combining their company's high-performance microprocessor technology with embedded computing expertise at Kontron in Poway, Calif., on a high-performance-computing proof-of-concept program to create supercomputer performance in the size of a shoe box, says Ajit Patel, a marketing manager at Intel.
The plan, which is in its early stages, would place six high-performance computing blades in a high-bandwidth backplane for vetronics, unmanned vehicles, and other aerospace and defense applications that require dense floating point performance in a small package.
For this project, Intel is bringing its latest generations of microprocessor technology to the fore, while Kontron will concentrate on packaging, thermal management, and other embedded computing design issues, Patel says.
This talk of emphasizing high-performance computing for embedded systems applications struck me as more than coincidence, since just last week I was writing about a big project at General Micro Systems in Rancho Cucamonga, Calif., called Zeus, to create high-performance server-class computing for military vetronics applications.
Still, the talk of high-performance computing didn't stop with Intel, Kontron, and General Micro Systems. Jay Swenson, director of marketing and business development for military and aerospace embedded business at GE Intelligent Platforms in Albuquerque, N.M., says GE is increasing its emphasis on high-performance computing.
Within a month's time, GE Intelligent Platforms will announce a new high-performance computing center of excellence to focus research and business development in this area. "There is going to be a need for a lot more high-performance computing," Swenson told me.
The primary reason is communications bandwidth -- or the fact that there's never enough of it for demanding aerospace and defense applications like radar processing, signals intelligence, and electronic warfare. "We have to move as much signal-processing capability closer to the sensor," Swenson says.
That means putting extremely sophisticated floating-point-intensive signal processing capability on small unmanned vehicles, in military combat vehicles that already are overburdened with onboard equipment, and even on individual infantry soldiers, who are themselves rapidly becoming walking sensor platforms and communications nodes.
Packaging high-performance computing so it can be cooled adequately and withstand the rigors of the battlefield is not without its challenges, but there may be a new design issue that could complicate things further, points out Greg Rose, vice president of marketing and software management at safety-critical software specialist DDC-I in Phoenix.
High-performance embedded computing these days, with few exceptions, relies heavily on the newest generations of multicore microprocessors from companies like Intel and Freescale Semiconductor in Austin, Texas.
One military electronics industry trend converging with the recent emphasis on high-performance computing involves safety-critical software that must be certified to industry standards like DO-178B and DO-178C. These standards primarily are for the commercial aviation business, but it's only a matter of time -- just a few years, perhaps -- before the military will be compelled to join the safety-critical software bandwagon.
When that happens, embedded systems designers had better find a reliable way for safety-critical software to run on multicore microprocessors. Today, Rose points out, some systems designers have to shut down all microprocessor cores except one to run safety-critical software reliably.
The problem involves sharing one memory among several microprocessor cores. Software designers have yet to find a bulletproof way to share memory while guaranteeing that no data corruption can happen under any circumstances. Sure, many claim they can, but few would bet their companies' futures on those claims, and that's essentially what they'll have to do.
So here we go, as high-performance computing and safety-critical computing step into the ring. I think engineers will be able to work out the most difficult issues facing them, but we're all in for a boatload of frustration first.