Sensor and signal processing embedded computing at the speed of battle

Aug. 29, 2022
High-performance embedded computing for sensor and signal processing aims at a 3D, 360-degree picture of the battlefield to give commanders a leg up in determining enemy intentions.

By John Keller

High-performance sensor and signal processing today is all about the ability to distill information and make crucial military decisions at the speed of battle. This is more complicated than it sounds, but the rewards are great, and often can tip the balance between victory and defeat in any armed conflict.

Sensor and signal processing at the speed of battle relies on a host of new and emerging enabling technologies: high-performance microprocessors; secure tactical networking; advanced sensors; software algorithms; components with small size, weight, and power consumption (SWaP); sensor fusion; artificial intelligence (AI); machine automation; and more.

"Today you have a platform with high-resolution sensors and high-performance computing to get a 3D 360-degree picture of the battlefield," explains Chris Ciufo, chief technology officer at embedded computing specialist General Micro Systems in Rancho Cucamonga, Calif.

"A human can't make predictions of what's important in those battlefield pictures," Ciufo points out. "Now we need computers that bring together multiple platforms, each with multiple sensors, and make some quick rapid-fire decisions on what's the highest threat, how do we deal with this quickly to take-out the most important things. We have more sensors, we are fusing these platforms together where they share data, and now you have 30 fire hoses of data all at once."

Sensor processing with different kinds of sensors

"You can use sensor fusion computing to create a 3D picture through optical sensing and optical imaging of a warfighting scene," says Rodger Hosking, director of
sales of Mercury Systems Inc. in Upper Saddle River, N.J. "You are taking those different sensors and combining them into an image that could be presented to a pilot or operator to give him a much richer view of the warfighter scene. Those can be highlighted graphically in different ways that can enhance certain things the operator wants to see, to help him make a decision on what to do."

Typically this kind of capability requires several different kinds of digital processors. "You have multiple types of processors in the processing chain," explains Denis Smetana, senior product manager at the Curtiss-Wright Corp. Defense Solutions division in Ashburn, Va.

"At the front end, where you have sensor data coming in, you typically have FPGAs [field-programmable gate arrays] for front-end filtering to find the signals of interest that might need additional processing, Smetana says.

"FPGAs are flexible and are good at massive amounts of signal processing," Smetana continues. "After FPGAs do that front-end processing, the processed data goes to a more general-purpose processor to do next-level analysis and decides what to do with the signals of interest."

A different kind of processor -- the general-purpose graphics processing unit (GPGPU) -- often is involved in advanced sensor and signal processing. "GPGPUs also can do parallel processing, acting as a co-processor," Smetana says. "GPGPUs can do some of the signal processing. The front-end processor must interface directly with the sensor, and the GPGPUs don't typically have the interfaces to the sensors, so signals usually go to FPGAs first."
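
To make the division of labor concrete, here is a minimal Python/NumPy sketch of the chain Smetana describes, with an FIR band-pass filter standing in for the FPGA front end and a block-energy detector standing in for the general-purpose stage. The sample rate, band, and threshold are illustrative assumptions, not parameters of any fielded system.

```python
# Illustrative sketch of the FPGA-then-CPU processing chain: an FIR
# band-pass filter stands in for the FPGA front end, and a simple
# energy detector stands in for the general-purpose analysis stage.
# All parameters (sample rate, band, threshold) are hypothetical.
import numpy as np
from scipy import signal

FS = 100e6  # hypothetical 100 MS/s sensor stream

def front_end_filter(samples: np.ndarray) -> np.ndarray:
    """FPGA-style front end: isolate a 10-20 MHz band of interest."""
    taps = signal.firwin(129, [10e6, 20e6], fs=FS, pass_zero=False)
    return signal.lfilter(taps, 1.0, samples)

def detect_signals(filtered: np.ndarray, threshold: float = 5.0):
    """General-purpose stage: flag blocks whose energy exceeds the
    noise floor; only these go on for deeper (e.g., GPGPU) analysis."""
    blocks = filtered[: len(filtered) // 4096 * 4096].reshape(-1, 4096)
    energy = np.mean(np.abs(blocks) ** 2, axis=1)
    noise_floor = np.median(energy)
    return np.flatnonzero(energy > threshold * noise_floor)

# Simulated input: Gaussian noise plus a 15 MHz tone burst in block 10.
rng = np.random.default_rng(0)
samples = rng.normal(size=1 << 20)
t = np.arange(4096) / FS
samples[40960:45056] += 8 * np.sin(2 * np.pi * 15e6 * t)

hits = detect_signals(front_end_filter(samples))
print("blocks flagged for further processing:", hits)
```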

Processor technology itself also is evolving quickly toward integrated technologies on the same chip -- often called systems on a chip. "At the silicon level we are seeing a merging of technologies ... where processor cores, GPU cores, and FPGA cells are all combined in the same device," says Curtiss-Wright's Smetana.

"We have low-to-mid-range processing that make use of traditional single-board computers as they are using six- and eight-core processors," says Aaron Frank, senior product manager at Curtiss-Wright. "We have our tiger Lake H 11th-generation single-board computers on our VPX6-1961 6U VPX Intel Xeon W-11000E processor card, which also can do some DSP processing at the low-to-mid-range level."

Demand for ever-more-powerful embedded computers remains a constant in sensor and signal processing. "On the platform itself we need more powerful computers with smaller size, lighter weight, and low power consumption," points out General Micro Systems' Ciufo.

Other important processor technologies for modern signal and sensor processing, experts say, include the Versal adaptive compute acceleration platform from Xilinx Inc. in San Jose, Calif.; the NVIDIA Jetson GPGPU from NVIDIA Corp. in Santa Clara, Calif.; the Thunderbolt 4 interface; and direct RF digitizing, which uses fast analog-to-digital and digital-to-analog converters.

Intel Ice Lake D

Still, perhaps the most important technology breakthrough in sensor and signal processing over the past few years is the so-called Ice Lake D general-purpose processor from Intel Corp. in Santa Clara, Calif. Ice Lake D is an enhancement of the company's Xeon D processor, which evolved from server-class chips.

"At high end Intel has its Scalable Processor (SP) product line for data centers, but the real challenge with using server-class processors is they tend to be very high power --  hundreds of Watts -- so they are very difficult to cool and handle all of that power," says Curtiss Wright's Smetana.

"The Xeon D really is targeted at the mil-aero market, because it has lower numbers of cores, and is easier to cool. With Ice Lake, the new features it brings are the AVX 512 floating-point engine to double the bandwidth of that processor to run floating point algorithms," Smetana says. "The other is targeting at artificial intelligence and machine learning, and vector neural network instructions (VNNI). Rather than running floating point, can run lower-level integer solutions. These are optimized to do that more efficiently within the processing architecture."

It is the Ice Lake D, with its additional capabilities, that has gained the attention of many designers of high-performance embedded computing systems. "A lot of our customers are looking to see how they can apply that technology to radar and signal processing," Smetana says. "The other key to the Ice Lake D is it has native 100 Gigabit Ethernet and PCI Express Gen 4 capability built in. It also adds memory banks, significantly improving memory bandwidth, which is key for a lot of signal processing algorithms that require parallel processing with multiple cores. Without the extra memory banks, memory can be a bottleneck."

Part of what separates the Ice Lake D from previous-generation or low-end processors is the number of processor cores the new chip has. "The primary difference is Ice Lake-class processors have more cores, so they are better for applications that need signal processing," says Curtiss-Wright's Frank. "Lower-end processors have fewer cores but run at higher speeds; it depends on the software architecture whether you can use all those threads."

As an example, the Curtiss-Wright Tiger Lake-based single-board computer is for mission processing, data-center-type virtualization, mission command, and command-and-control applications, while the newer Ice Lake D-based products are more for intelligence, surveillance, and reconnaissance signal processing, Frank says.

General Micro Systems has been providing Intel Xeon D embedded computing products for several years, for systems such as the U.S. Army Warfighter Information Network-Tactical (WIN-T) Increment 2. Improvements of the Ice Lake D over the Xeon D, however, are striking, says General Micro's Ciufo.

"Ice Lake D now goes up to 20 cores, and it has native Ethernet 100 Gigabit Ethernet pipes directly onto the processor itself, as well as PCI Express Gen 4, and the ability to connect disk drives directly to the processors," Ciufo explains. "NVMe drives can be connected directly to the processors to create RAID processor arrays directly to the processor."

Says Mercury's Hosking, "The Intel Ice Lake D will bring more horsepower because it is a multicore high-performance engine, for things that are being done today in scalar processors. You just need more and more of these resources, and these multicore processors, with really good memory access, offer built-in memory blocks -- parallel cores running at very high rates -- to solve some of those high channel-count compute-intensive processing applications."

Networking and sensor fusion

A significant technology trend in sensor and signal processing today is a transition from traditional sensor interfaces to network-centric interfaces, says Curtiss-Wright's Frank. "In the future, sensors will go to gigabit speeds and beyond. Network interfaces can now distribute signal processors in parallel; it's not a serial processing chain, but now designers can do parallel processing more easily. Still, you need a change in the architecture when you distribute the signal for processing."

"We also are seeing a resurgence of the network architecture," Frank says. "We are looking at the entire signal flow, and designing with a network architecture for data distribution."

The number of processors necessary and the networking approach also depend on the application, says Curtiss-Wright's Smetana. "It also may depend on the number of sensors you need to use. Search and rescue, for example, may need fewer processing streams, while tracking hundreds of targets on the battlefield will require more parallel processing. Sensor fusion needs multiple types of sensors working together to create a better and more composite picture."
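
A bare-bones sketch of that composite picture might fuse per-cell detection confidences from two sensor types into one map, as below; the grid, weights, and threshold are illustrative stand-ins for the far richer statistical models real fusion engines use.

```python
# Sketch of the composite-picture idea: detection-confidence maps
# from two sensor types (both synthetic here) fused into one grid.
# Grid size, weights, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(4)
visible = rng.random((64, 64))  # per-cell confidence, visible-light sensor
thermal = rng.random((64, 64))  # per-cell confidence, thermal sensor

# Simple weighted-evidence fusion rule; real systems use far richer
# models (Bayesian filters, track-level association, and so on).
fused = 0.6 * visible + 0.4 * thermal
candidates = np.argwhere(fused > 0.9)
print(f"{len(candidates)} cells flagged as track candidates")
```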

One of the most important transformational technologies in high-performance sensor and signal processing is 100 Gigabit Ethernet, says General Micro's Ciufo.

"100 Gigabit Ethernet requires fiber optics, or short distances over copper, and can connect processors together over Ethernet," Ciufo explains. "It's fast enough to
move data from one processor's memory set to another processor's memory set, and you can just use; you don't need RapidIO or InfiniBand to move data between processors and memory. You don't need those esoteric technologies; you can just use 100 Gigabit Ethernet.

One of 100 Gigabit Ethernet's design advantages is its ubiquity. "Everybody understands Ethernet and how to connect Ethernet," Ciufo says. "It has acceptable latency for hard real-time systems."
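
Ciufo's point -- that ordinary Ethernet sockets can move a buffer from one processor's memory to another without a specialized fabric -- can be sketched in a few lines of Python. The endpoint below is a loopback placeholder so the example is self-contained.

```python
# Minimal sketch: plain TCP sockets move a sample buffer from one
# "processor's" memory to another's -- no RapidIO or InfiniBand
# stack required. Host and port are placeholders; loopback keeps
# the example self-contained.
import socket
import threading
import numpy as np

HOST, PORT = "127.0.0.1", 50007  # placeholder endpoint
samples = np.arange(1 << 16, dtype=np.float32)
server = socket.create_server((HOST, PORT))  # listen before connecting

def receiver():
    conn, _ = server.accept()
    with conn:
        data = bytearray()
        while len(data) < samples.nbytes:
            data.extend(conn.recv(1 << 20))
    received = np.frombuffer(bytes(data), dtype=np.float32)
    print("all", received.size, "samples intact:",
          bool(np.array_equal(received, samples)))

t = threading.Thread(target=receiver)
t.start()
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(samples.tobytes())  # one memory set to the other
t.join()
server.close()
```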

Where 100 Gigabit Ethernet really shines is in sensor and signal processing in front-line military applications. "On the battlefield, cameras and sensors are looking everywhere, but you can't tell from what the foliage and dirt look like whether there is an enemy or a vehicle nearby," Ciufo says. "With high-end signal processing, the camera takes all that information in, applies filters, changes contrast, does edge detection, and says: I see that area of grass has been tamped down, or the dirt is slightly darker. It can do that in real time, using visible-light sensors, thermal sensors, and signal processing."
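
The per-frame steps named in that quote -- filtering, contrast adjustment, edge detection -- map onto a short OpenCV sketch like the one below. The input file name is a placeholder for a live visible-light or thermal frame.

```python
# Sketch of the per-frame steps in the quote -- filtering, contrast
# enhancement, edge detection -- using OpenCV. The file name is a
# placeholder; a fielded system would pull frames from visible-light
# and thermal sensors in real time.
import cv2

frame = cv2.imread("terrain_frame.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "placeholder input image not found"

denoised = cv2.GaussianBlur(frame, (5, 5), 0)  # suppress sensor noise
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)               # local contrast boost
edges = cv2.Canny(enhanced, 50, 150)           # edge detection

cv2.imwrite("terrain_edges.png", edges)
```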

Ciufo predicts that the next generation of military combat vehicles will have several different sensors to enable the vehicle crew to look at their surroundings, enable the vehicle to drive itself, and scan for threats. "To do this you need more processing per unit of time," Ciufo says.

"The next step will be adding more sensor processing on the platforms, with the need to connect them together. We need to fuse all this data into an integrated picture and make decisions in real time on what we should do. We can network all of these sensor platforms together to see what is happening, and what will happen next. It will offer much more rapid decision making and much better outcomes."

AI and machine learning

A primary goal of sensor and signal processing technology development is incorporating artificial intelligence and machine learning into high-end systems.

"One area where we see AI machine learning is in signal identification," says Curtiss-Wright's Smetana. "A lot of algorithms can be used to distinguish one signal from another, what the target is, and where the target is going. Doing this depends on a very clean reception of that signal, and in real life there is a lot of noise that can distort that signal. AI can help train signal processing in noisy environments to find signals that might not otherwise be identified because of the signal distortion. For AI to work, it has to be something you can train the algorithms to learn, and to do that you need a lot of data to train it with."

The kinds of sensor fusion necessary to form a 3D 360-degree picture of the battlespace, even today, require a great deal of human effort, from intercepting and determining the sources of signals to interpreting signal behavior that might indicate the presence of an enemy. "A lot of machine learning and AI processing can be automated to assist a human operator," says Mercury's Hosking.
