The role of image and video processing

March 22, 2023

NASHUA, N.H. - Today's military and its partners in the technology sector enable warfighters at the front to have the most up-to-date intelligence. Electro-optics work in concert with high-performance embedded computing image and video processors to distill the veritable "fire hose" of sensor information into actionable intelligence, thanks to the speed at which today's cutting-edge systems process data.

Tim Morin, a technical fellow at Microchip Technology in Chandler, Ariz., says that vision systems are tasked with heavy workloads in military and aerospace environments for applications like sensing and avionics.

In the military, Morin cites their use in:

-- autonomous target identification in smart munitions like drones and missiles;

-- satellite-based identification of hostile land installations, aircraft, surface warships, and submarines;

-- integrated head-up displays for targeting and situational awareness for combat pilots; and

-- situational awareness using head-mounted vision systems for infantry.

In aerospace, Morin lists:

-- vision-based navigation or star tracking;

-- spectral sensing used for terrestrial terrain mapping (Earth, lunar surface); and

-- artificial intelligence (AI)-based navigation for taking satellites out of orbit.

Regarding avionics, Morin cites vision-based pilot monitoring in cockpit simulators and closed-circuit cameras inside the aircraft as examples.

According to Ike Song, vice president and general manager of platform systems at Mercury Systems in Andover, Mass., much of that capability is enabled by developments in the commercial sector.

"Mil-aero image processing is moving steadily from 1K to 4K processing, and trends in the commercial world are driving this advancement at a much faster rate. Urban Air Mobility (UAM) and autonomous vehicle companies are making significant investments in machine learning and 8K processing for pattern recognition and visual navigation, which can then be leveraged and adopted by the military at a much lower cost. This technology will allow the military to use affordable sensors for accurate navigation and automatic target recognition."

Mark Littlefield, the senior manager of the Elma Electronic embedded computing products and systems solutions division in Fremont, Calif., explains that open systems standards also provide vendor-agnostic platforms.

"As data inputs continue to increase, we’re entering a critical era of data availability to turn this information into intelligence and provide actionability in defense operations," Littlefield says. "Leveraging proven technology on a common platform reduces integration issues and development time, and shortens deployment. This can be seen in The Open Group’s Sensor Open Systems Architecture (SOSA) initiative, which is making available common platforms that use the latest GPGPU processors. From this, we are seeing more image and data processing capabilities emerge."

COTS concerns

SOSA's open-systems and commercial-off-the-shelf (COTS) approach not only fulfills U.S. Department of Defense (DOD) requirements, but also provides system components with increased capabilities at lower costs, says Paul Garnett, chief architect for business development at Curtiss-Wright Defense Solutions in Ashburn, Va.

"The main drive is the availability of the technology – the need has always been there but now the means is available through the use of GPU devices from the commodity graphics market at an affordable price point," says Garnett. "The affordability of high-performance GPUs means that lower cost vehicles can also support a more advanced capability."

While intelligence for warfighters boosts the likelihood that they won't be taken out of the fight, AI- and machine-learning-capable unmanned vehicles continue to show their efficacy as force multipliers and to help keep those on the front lines safe.

In the end, that intelligence is often presented to humans-in-the-loop who decide what, if anything, to do with it. "Image processing requires powerful computing with software algorithms to process and identify objects of interest from electro-optical sensor input," Song says. "Displays need to support protocols like SDI, DVI, HDMI and ARINC 818 to display this information in real-time."
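The kind of object identification Song describes can be reduced, at its simplest, to separating bright returns from background and grouping connected pixels. The sketch below is purely illustrative and assumes a toy single-channel frame; the helper name, threshold, and data are hypothetical, not any vendor's actual pipeline.

```python
# Illustrative sketch: finding "objects of interest" in a single-channel
# sensor frame by intensity thresholding and connected-component grouping.
# All names and values here are hypothetical examples.

def detect_objects(frame, threshold):
    """Return a list of connected pixel groups brighter than threshold."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component (4-connectivity).
                component, stack = [], [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    component.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                objects.append(component)
    return objects

# A tiny synthetic 5x5 "electro-optical" frame with two bright regions.
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 0, 8],
    [0, 0, 0, 8, 8],
    [0, 0, 0, 0, 0],
]
print(len(detect_objects(frame, threshold=5)))  # prints 2
```

Deployed systems of course use far more capable detectors (often neural networks on GPGPUs), but the structure is the same: raw pixels in, a small list of candidate objects out.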

Electro-optical sensors add a lot of data to the situational awareness pipeline. "Advances in electro-optics increase the quality -- and amount -- of information available, and high-performance embedded computing is able to process that immense amount of real-time data and present it in a digestible form through modern rugged displays," says Curtiss-Wright's Garnett. "The computation and software have the potential both to process the data in innovative ways to alleviate operator task saturation and to introduce new ways of presentation and analysis, all in real time, for the best possible mission outcomes. If you can’t measure or see the threat, you can’t manage or respond to it.

"Real-time image processing is critical for automatic target recognition, blue force and red force tracking, minimizing friendly fire, speeding the OODA loop for faster operations, performing accurate battle damage assessments, and running visual navigation and sense-and-avoid systems that will reduce pilot workload," Garnett continues. "Embedded solutions that deliver this level of high-performance processing to the platforms in theater are critical to delivering real-time access to this information to our customers."

Increasingly, that actionable data is being used by the system that is collecting the information itself thanks to machine learning and artificial intelligence enabled by graphics processing units (GPUs) and general-purpose GPUs (GPGPUs).

Richard Jaenicke, the marketing director for safety and security-critical products at Green Hills Software (GHS) in Santa Barbara, Calif., says that the COTS approach is being expanded into the higher levels of integration. GHS offers the INTEGRITY-178 tuMP real-time operating system (RTOS), which provides a unified solution for multicore processors.

"One example is the Replacement Multi-Function Controls and Displays (RMCD) on the C-5M Super Galaxy transport aircraft," Jaenicke says of the higher-level integration. "On the C-5M, PU-3000 multicore avionics computers from CMC Electronics will be combined with VDT-1209 video display terminals from Intellisense Systems to form the full C-5M cockpit display system, including the primary flight displays. The PU-3000 multicore avionics computers run the INTEGRITY-178 tuMP RTOS to provide the foundation for real-time safety-critical operation to the highest RTCA/DO-178C Design Assurance Level (DAL A)."

Eye on AI

The U.S. military continues to leverage the growing list of capabilities machine learning and AI enable in more and more machines that operate in concert with -- and sometimes independently of -- soldiers, sailors, airmen, and Marines.

Microchip's Morin says that "cameras are used more than ever before in aerospace and defense applications. More cameras add weight and power requirements. Customers are moving toward higher resolution to capture more details in images. The need to drive more pixels and the increased use of AI drive the need for high-performance embedded computing."

That focus on AI was a universal trend when mil-aero industry experts were asked what they see as the technology driving development of image processing systems and components. Curtiss-Wright's Garnett explains that processing power is now meeting long-standing demand as component costs come down. In short, systems can do more at lower cost, which allows the technology to proliferate into more vehicles.

"We are seeing increasing use of GPUs to support machine learning/AI in imaging systems for threat detection/SA/sighting systems," Garnett says.

Elma's Littlefield says that areas like condition-based machine monitoring and predictive maintenance, semi-autonomous driving, and driver advisory systems rely on the parallel processing architecture of GPGPUs.

"Complex GPGPU inference computing at the edge is enabling visual intelligence, including high-resolution sensor systems, movement tracking security systems, automatic target recognition, threat location detection and prediction," Littlefield says. "Much of the high compute processing taking place within these critical embedded systems relies on NVIDIA compact supercomputers and their associated CUDA cores and deep learning SDKs used to develop data-driven applications, turning data inputs into actionable intelligence."
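The GPGPU pattern Littlefield describes is, at heart, data parallelism: the same inference kernel applied to many independent pieces of sensor data at once. The sketch below illustrates only that pattern; a thread pool stands in for the thousands of GPU cores, and the kernel, tile data, and 0.5 threshold are hypothetical.

```python
# Illustrative sketch of the data-parallel idea behind GPGPU inference:
# apply one "inference" kernel to many independent sensor tiles at once.
# ThreadPoolExecutor is only a stand-in for GPU cores; the kernel and
# data are hypothetical examples.

from concurrent.futures import ThreadPoolExecutor

def inference_kernel(tile):
    """Toy per-tile computation: flag tiles whose mean intensity exceeds 0.5."""
    return sum(tile) / len(tile) > 0.5

# Independent tiles from a larger sensor frame.
tiles = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.7], [0.95, 0.9]]

# Executor.map preserves input order, so flags line up with tiles.
with ThreadPoolExecutor() as pool:
    flags = list(pool.map(inference_kernel, tiles))
print(flags)  # [False, True, True, True]
```

On a real GPGPU the kernel would be a CUDA function (or a neural-network layer) launched across thousands of cores, but the programming model is the same: no tile depends on any other, so all can be processed simultaneously.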

Unmanned importance

Aitech Systems in Chatsworth, Calif., provides rugged mil-aero embedded systems among other technologies. Dan Mor, director of its video and GPGPU product line, says the continuing conflict in Eastern Europe highlights both the need for and the efficacy of crewless vehicles enabled by today's image and video processing technology.

"The latest war in Ukraine proves how important unmanned ground mobile and avionic platforms are," Mor says. "These platforms need high-performance, SWaP [size, weight, and power]-optimized systems for image and video processing at the edge; AI for object recognition, classification, and tracking; and networking for deterministic communication and real-time decision making. New geopolitical world situations, threats, and available technology drive the mil-aero market."

Mercury’s Song says that while many current unmanned aerial vehicles (UAVs) aren't autonomous because a human pilot must be in the loop, "by using more advanced image processing and AI/machine learning, the military will be able to build truly autonomous vehicles."

While many UAVs and other unmanned systems are not yet fully autonomous, each camera, cable, sensor, board, and processor aboard these systems contributes to the information available to commanders giving orders and to frontline warfighters.

Eyes in the sky

Image processing technology allows military aircraft to operate in conditions that would have grounded them in the past. With multi-sensor systems now in use, aircraft can operate in degraded visual environments (DVE) like smoke, fog, snow, and heavy rain.

"For example, takeoff and landing DVE created by rotorwash-induced brownout or whiteout conditions can cause helicopters to lose visual reference with the ground and with man-made obstacles at low altitudes," GHS's Jaenicke says. "Such DVE can cause a hard landing that often results in aircraft damage or loss, as well as personnel injury. All aircraft are susceptible to DVE created by low-visibility conditions during flight, which can result in Controlled Flight Into Terrain (CFIT) or collision with other obstacles.

"Modern DVE systems use real-time fusion of multiple passive and active sensors capable of deep penetration through environmental obscurants to provide a clear picture of the environment around the aircraft. Hazard detection and highlighting software provides terrain and obstacle proximity alerts and symbology on the fused imagery, aligned to terrain map data. Due to the level of criticality, DVE systems generally require safety-critical design up to RTCA DO-178 DAL A."

Jaenicke notes GHS’ INTEGRITY-178 tuMP RTOS has been used on multiple DVE systems to provide the basis for safety certification. "Because both real-time sensor fusion and obstacle detection are computationally demanding, it helps to have a safety-critical RTOS like INTEGRITY-178 tuMP that can utilize all the processor cores in a modern CPU while maintaining safety-critical operation," Jaenicke says.
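The core idea behind the sensor fusion Jaenicke describes can be illustrated with a confidence-weighted blend of co-registered frames from two sensors. The sketch below is a deliberately simplified, hypothetical example (the sensor names, frames, and weights are invented); certified DVE systems use far more sophisticated, safety-critical fusion algorithms.

```python
# Illustrative sketch: confidence-weighted fusion of two co-registered
# sensor frames, e.g. a visible-band camera washed out by obscurants and
# an active sensor (radar) that penetrates them. All data and weights
# here are hypothetical.

def fuse_frames(frame_a, frame_b, weight_a, weight_b):
    """Per-pixel weighted average of two equally sized frames."""
    total = weight_a + weight_b
    return [
        [(weight_a * a + weight_b * b) / total for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Visible-band frame degraded by brownout vs. a radar return that still
# resolves an obstacle at pixel (1, 1).
visible = [[0.2, 0.2, 0.2],
           [0.2, 0.3, 0.2],
           [0.2, 0.2, 0.2]]
radar   = [[0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0]]

# In low visibility, weight the penetrating sensor more heavily (3 vs. 1).
fused = fuse_frames(visible, radar, weight_a=1.0, weight_b=3.0)
print(round(fused[1][1], 3))  # obstacle pixel stands out: 0.825
```

In a real system the weights would vary per pixel with estimated sensor confidence, and the fused imagery would then feed the hazard detection and symbology layers described above.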
