Myriad missions on the digital battlefield rely on a robust, reliable, and secure intelligence infrastructure, including high-performance embedded computing (HPEC).
BY Courtney E. Howard
Myriad sensors pervade today’s digital battlefield—on land, air, and sea—gathering intelligence that could prove critical to soldier safety or mission success. Military decision-makers, warfighters, and missions increasingly rely on data, including videos and imagery, and the electronics infrastructure responsible for acquiring, processing, storing, securing, and transmitting data in the field.
“The need to quickly and decisively act on intelligence information secured through video surveillance is a major challenge,” asserts Michael Steele, senior manager of strategic alliances at Nvidia in Santa Clara, Calif. “Today, the growing volume of video and image data poses a huge problem for analysts to provide effective situational awareness in a timely, relevant, accurate way.
“The challenge for the military intelligence community is daunting, considering the need to review hours and hours of video and imagery, enhance the images in numerous ways, detect and discern among objects of interest, provide change detection, and geospatially tag the data for extended analytics,” Steele continues. “It’s nearly impossible for a typical analyst to view and comprehend this deluge of imagery and geospatial information—this means that actionable intelligence opportunities are being lost on a regular basis.”
The National Geospatial-Intelligence Agency (NGA), a U.S. Department of Defense (DOD) combat support agency and member of the U.S. Intelligence Community in Springfield, Va., is storing more than 30 million minutes of motion imagery, noted Barry Barlow, director of the agency's acquisition directorate. Barlow, in his talk at the 2011 National Association of Broadcasters Show, characterized full motion video (FMV) as a top DOD priority and a critical warfare capability.
The hottest trend in the military geospatial-intelligence community is full motion video, but it requires a robust infrastructure for capture, storage, analysis, and transmission with high bandwidth and low latency, says Ching Hu, manager of avionics system architecture, Aerospace & Defense division, Xilinx in San Jose, Calif. Technology firms in the mil-aero community are keeping a close eye on such trends, anticipating and delivering on technological demands.
Demand for video is increasing in three mil-aero vectors, Steele notes. “Combat air patrols of existing vehicles are up 300% since 2007, the number of new platforms is going up exponentially, and the number of cameras/sensors mounted on each platform is going up. The Predator has 1 camera, the new Gorgon Stare platform has 9 cameras, and the next generation will have 36.”
Deluge of data
“We’re going to find ourselves, in the not-too-distant future, swimming in sensors and drowning in data,” cautions Lt. Gen. David A. Deptula, the first Deputy Chief of Staff for Intelligence, Surveillance, and Reconnaissance, U.S. Air Force. Echoing Deptula’s sentiment is Larry Schaffer, vice president of business development in applied image processing at GE Intelligent Platforms, headquartered in Charlottesville, Va. The proliferation of multi-spectral, video-based sensors in the battle space is both fortunate and unfortunate, he notes.
It is important to separate useful video from a sea of useless video, which is where image processing comes in, Schaffer says. Image processing can: extract interesting features or activity from uninteresting backgrounds, enhance the image quality of the interesting things, capture and tag related metadata about the interesting things, and impart the resulting fused, rich data set about interesting things to human observers, he says. “In this way, the decision process—latency between occurrence of an interesting thing and effective reaction to it—is shortened.”
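Schaffer's four steps — extract, enhance, tag, fuse — can be sketched in outline. The following Python is a minimal illustration only (frame differencing stands in for "extract," a linear contrast stretch for "enhance"); all function names are hypothetical, not GE code.

```python
# Minimal sketch of the extract / enhance / tag / fuse pipeline described
# above. Frames are plain 2D lists of grayscale values; all names here are
# illustrative, not any vendor's API.

def extract_activity(prev, curr, threshold=30):
    """Frame differencing: return pixel coordinates that changed noticeably."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, val in enumerate(row)
            if abs(val - prev[r][c]) > threshold]

def enhance(frame):
    """Simple linear contrast stretch to the full 0-255 range."""
    flat = [v for row in frame for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[(v - lo) * 255 // span for v in row] for row in frame]

def tag_metadata(pixels, timestamp):
    """Bounding box and timestamp of the detected activity."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return {"time": timestamp,
            "bbox": (min(rows), min(cols), max(rows), max(cols))}

def fuse(frame, meta):
    """Package enhanced imagery with its metadata for the analyst."""
    return {"frame": enhance(frame), "meta": meta}

# Two 4x4 frames; a bright object appears in the lower-right corner.
prev = [[10] * 4 for _ in range(4)]
curr = [[10] * 4 for _ in range(4)]
curr[2][2] = curr[2][3] = curr[3][2] = 200

moving = extract_activity(prev, curr)
report = fuse(curr, tag_metadata(moving, timestamp=42.0))
print(report["meta"]["bbox"])   # bounding box of the detected activity
```

A real implementation would operate on sensor frame buffers and richer metadata (geotags, sensor ID), but the shape of the pipeline is the same: only the "interesting" pixels drive the downstream work.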
All this data requires a tremendous amount of processing for enhanced quality, real-time analytics, and geospatial intelligence, Steele says. “Considering the tasks at hand, combined with the push by the U.S. Government to cut the military budget by $420 billion by 2023, the solution is more powerful and cost-effective automated systems. This is where modern graphics processing units (GPUs) come into play, for professional visualization and massively parallel computational capabilities.”
Deployed unmanned aerial, undersea, and ground vehicles, carrying electro-optics payloads and growing in number, are fulfilling intelligence, surveillance, and reconnaissance (ISR) roles. They are delivering streams of detailed imagery that often requires considerable processing to be deemed actionable intelligence—valuable data upon which decisions can be made and missions can be planned. Imagery captured by unmanned aerial vehicles (UAVs) requires rendering, stabilization, and enhancement prior to efficient viewing and analysis, for example.
“Using just central processing units (CPUs), this process takes too much time to enable information to be viewed in real time, resulting in potentially outdated intelligence data and inaccurate guides for military action,” laments an Nvidia spokesperson. As a result, advanced image-processing software, such as Ikena ISR from MotionDSP in San Mateo, Calif., is being designed to harness the power of the GPU. Ikena ISR’s computationally intense, motion-tracking algorithms reconstruct video with cleaner detail, increased resolution, and reduced noise.
Executing the sophisticated video postprocessing algorithms required for effective reconnaissance using only CPUs would result in between four and six hours of processing for each hour of video—not a viable solution where real-time results are critical, says an Nvidia representative. Tesla GPUs in combination with Ikena ISR software reduce latency and enable real-time processing on standard workstations small enough to fit in military field vehicles.
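The four-to-six-hours-per-hour figure quoted above implies that a CPU-only pipeline runs at only a fraction of real time; a quick back-of-envelope check makes the gap explicit (the figures are from the article, the calculation is illustrative):

```python
# Back-of-envelope check on the CPU-only figure quoted above: 4-6 hours
# of processing per hour of video means the pipeline runs at 1/6 to 1/4
# of real time, so a 4-6x speedup is the bare minimum for live results.

def realtime_gap(cpu_hours_per_video_hour):
    """Fraction of real time achieved, and speedup needed to close the gap."""
    fraction = 1.0 / cpu_hours_per_video_hour
    speedup_needed = cpu_hours_per_video_hour
    return fraction, speedup_needed

for cpu_hours in (4.0, 6.0):
    fraction, speedup = realtime_gap(cpu_hours)
    print(f"{cpu_hours:.0f} h per video hour -> {fraction:.2f}x real time; "
          f"needs a {speedup:.0f}x speedup to run live")
```

Anything beyond that minimum speedup buys headroom for the enhancement and analytics steps on top of bare playback.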
“With the GPU, we’re bringing better video to all ISR platforms, including smaller UAVs. We’re increasing the effectiveness of these platforms by augmenting them with advanced image processing running on COTS PC technology,” explains Sean Varah, chief executive officer of MotionDSP.
Automation also plays a role in effective video and image analysis, helping ensure human operators, often inundated with data, don’t overlook details. Panoptes software from IntuVision in Woburn, Mass., takes advantage of Nvidia Tesla GPUs to evaluate large quantities of video quickly and accurately, a process that proves too time-consuming to perform manually.
“With the advent of affordable parallel computing, we’ve harnessed the computational power of GPUs to develop products which help humans analyze incredibly complex video environments,” says Dr. Sadiye Guler, founder/president of IntuVision. Panoptes software was used to analyze video shot from tower-mounted cameras on a ground-based surveillance system prototype developed for Marine Expeditionary Units. Video data collected over an unsecured area before main units moved in could be analyzed and acted upon within an hour of receipt, rather than half a day.
The modern warfare model has shifted toward urban warfare, with more focus on reducing non-combatant casualties by way of precision strikes. “Actionable intelligence is at the heart of all successful precision strikes,” Hu says. “The trend of military precision strike operations and modern aircraft relying on high-speed, reliable on-board networks to collect sensor data and control mechatronics is increasing the demand for video/image-related products and services in mil-aero.”
Field-programmable gate arrays (FPGAs) are being adopted to meet the need for systems offering low power consumption, highly parallel processing, and secure transmission—all integral to today’s military precision strike operations. “FPGAs are ideal for parallel processing, a much more efficient model for processing massive data,” Hu notes.
Xilinx FPGAs provide on-board processing and networking in the mission computer, electro-optical targeting system, and other processing-intensive integrated modular avionics (IMA) aboard the Joint Strike Fighter (JSF) aircraft. “JSF aircraft are literally flying sensors with an unprecedented degree of 360-degree situational awareness from all the sensors on board, and that requires an extensive network, both on-board and remote, to transport the data,” Hu says. “FPGA application and usage in ISR and video is rapidly growing, and will continue to grow rapidly.” In addition to JSF, Xilinx FPGAs have been used in Predator UAVs, guided missiles and munitions, satellites, and communication networks, including handheld, manpack, vehicle-mounted, and fixed-site tactical radios.
“FPGAs inherently have the capability to do parallel operation with comparably lower power consumption than equivalent processors performing an equivalent task,” Hu asserts. “Xilinx FPGAs have dynamic partial reconfiguration capability, which means the end user can store many different functions/features/algorithms in one location and load them to execute as needed—effectively reducing the size, weight, power, and cost (SWaP-C) of the end application. Also, by incorporating FPGAs, traditional lower-bandwidth copper wires used for networks can potentially be replaced with fiber-optic cable, which has significantly lower weight.”
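Hu's description of dynamic partial reconfiguration — many stored algorithms, one reconfigurable slot loaded on demand — has a simple software analogue. The sketch below illustrates only the concept; it is Python, not FPGA tooling, and every name in it is hypothetical.

```python
# Software analogy for dynamic partial reconfiguration: one processing
# "slot" whose function is swapped at runtime from a stored library of
# algorithms, rather than hosting all of them in hardware at once.

class ReconfigurableSlot:
    def __init__(self):
        self._library = {}   # stored algorithms ("bitstreams," by analogy)
        self._active = None

    def store(self, name, func):
        """Add an algorithm to the on-board library."""
        self._library[name] = func

    def load(self, name):
        """Reconfigure the slot with a stored algorithm."""
        self._active = self._library[name]

    def process(self, data):
        return self._active(data)

slot = ReconfigurableSlot()
slot.store("invert", lambda px: [255 - v for v in px])
slot.store("threshold", lambda px: [255 if v > 128 else 0 for v in px])

slot.load("invert")
print(slot.process([0, 128, 255]))   # [255, 127, 0]
slot.load("threshold")               # swap algorithms in place
print(slot.process([0, 128, 255]))   # [0, 0, 255]
```

The SWaP-C argument follows directly: only one algorithm's worth of fabric is powered and occupied at any moment, while the rest wait in storage.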
Electro-optic/infrared (EO/IR) imaging pods, such as those mounted on larger UAVs, employ Xilinx FPGAs—typically high-end Xilinx Virtex-5 and Virtex-6 devices—for such functions as noise-reduction digital signal processing (DSP) and real-time image transform processing. “Some [EO/IR] designs use Virtex-6 SXT devices for intense DSP, and others use Virtex-6 LXT devices for massive parallel logic operations,” Hu describes. Xilinx FPGAs, he says, also perform massive parallel processing and DSP functions in synthetic aperture radar (SAR) imaging systems, which are typically mounted on larger aircraft and UAVs or in satellites.
Assets over Ethernet
Video is now the most valuable asset the warfighter has, according to Schaffer. “The warfighter cannot afford educated guesses, based on sparse data; he must know.”
The military is relying more on visual information for decision making, admits Dan Veenstra, sensor processing product manager at GE Intelligent Platforms. “The amount of data created by visual sensors (i.e., cameras) creates a challenge since it can easily overwhelm the existing communications infrastructure and can use up large amounts of storage.”
Dependence on video drives the need for highly reliable capture, transport, and processing combined with secure networking and storage. “GE Intelligent Platforms has focused its efforts on high-speed, fiber-optic, Ethernet-based transport, switching, and processing of large video loads for applications like 360-degree distributed aperture sensor/situational awareness (DAS/SA) systems,” Schaffer says. GE-IP’s SBC324, GRA111, and GFG500 multicore and many-core processors, combined with GE Research Center’s algorithms for real-time video analysis, are intended to bring “unprecedented video exploitation power to the warfighter” and streamline the decision cycle.
For display and storage, GE-IP’s ICS-8580 conversion/compression board and GFG500 Gigabit Ethernet, 64-core video processor board address the need for high-speed, visually lossless compression and Video over Ethernet distribution.
NASA Dryden Flight Research Center and the National Oceanic and Atmospheric Administration deployed the Global Hawk UAV—designed to stay airborne longer, cover more area, and record a more complete data set than comparable aircraft. The NASA Global Hawk’s sensor payloads provide information crucial to research missions. NASA engineers outfitted the UAV with GE-IP’s ICS-8580 video compression platform to ingest high-bandwidth, high-resolution video streams from onboard sensors, compress the data, and enable video feed transmission over a communication link to the ground station for observation and analysis with negligible impact on image quality.
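The case for onboard compression is easy to quantify: a raw HD stream dwarfs a constrained datalink. The arithmetic below uses a standard raw-bitrate calculation; the 50:1 ratio and 30 Mbit/s link are illustrative assumptions, not GE-IP or NASA figures.

```python
# Why onboard compression matters: raw HD video vs. a constrained datalink.
# The 50:1 compression ratio and 30 Mbit/s link budget are illustrative
# assumptions; raw bitrate = width x height x bits-per-pixel x frame rate.
width, height = 1920, 1080     # HD frame
bits_per_pixel = 24            # 8-bit RGB
fps = 30

raw_mbps = width * height * bits_per_pixel * fps / 1e6
link_mbps = 30.0
compressed_mbps = raw_mbps / 50

print(f"raw stream:       {raw_mbps:.0f} Mbit/s")
print(f"after 50:1 codec: {compressed_mbps:.1f} Mbit/s "
      f"({'fits' if compressed_mbps <= link_mbps else 'exceeds'} "
      f"a {link_mbps:.0f} Mbit/s link)")
```

Uncompressed, the stream is roughly 1.5 Gbit/s — orders of magnitude beyond a typical air-to-ground link — which is why compression with negligible visual impact is the enabling step.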
Rugged is required
Today’s need for full motion video analysis requires computing power beyond the capabilities of rugged laptops; yet, the fastest path to failure is to deploy non-rugged or semi-rugged equipment in the field, admits Todd Prouty, business development manager at Crystal Group Inc. in Hiawatha, Iowa. “The military is looking for smaller, lighter, more portable systems that are powerful enough to be used ‘real-time’ as information is being collected.” At Crystal Group, Prouty sees increased demand for self-contained, forward-deployed systems able to integrate multiple types of sensor data into a single analysis point to improve situational awareness of the battle space.
The appetite for computing power and graphics processing in airborne, ground stationary, and ground mobile applications is constantly growing. “Systems today are utilizing combo data storage/image processing products configured for real-time video analysis with 4.8 teraflops combined processing and asking for more,” Prouty describes.
Crystal Group has completed a ground-station system combining a data receiver, RS112 and RS378L24 servers, a large RSS639 JBOD (just a bunch of disks), an RCS3750G-24TS network switch, and an RD2119 display. “The customer needed a portable solution that could ingest data from an airborne sensor, have the ability to process and display the data, and temporarily store multiple missions of data into a JBOD with 78 terabytes of storage,” Prouty says.
Crystal Group has also deployed its RS255 rugged server configured for image capture and data storage in a mil-aero program, for which a commercial server was initially considered. Weighing the program requirements against the performance, features, and environmental pedigree of Crystal Group’s server, however, “they decided to take the low-risk approach,” Prouty recalls. “Several programs in development also utilize the RS47FL24 rugged server with seven PCIe 2.0 expansion slots able to be configured with multiple data acquisition, data transfer, and image processing COTS cards.”
“As the digital landscape in the battlefield changes, the ability to synchronize communication protocols and software platforms, coupled with reliable and simplified user interfaces, is critical to military campaign success,” says Michael Woolstrum, CEO of Touch International (TI) Inc. in Austin, Texas.
Touch-enabled embedded computers and displays ensure fast, efficient interaction with data. “Touch International touch screens address the need to make the display interactive and add critical, life-saving features, such as EMI shielding, NVIS compatibility, and LCD heaters for cold-weather and harsh-environment operation.
“Demand for simplified and unified user interfaces has increased exponentially,” Woolstrum explains. “The military is going to continue to rapidly adopt interactive displays for applications ranging from handheld GPS devices to large displays used by central command units.” TI’s touch/filter/heater technology display solutions have been deployed in and throughout various military vehicles.
Similarly, the IP67/NEMA-rated MSM08V monitor from Digital Systems Engineering (DSE) in Scottsdale, Ariz., is deployed as a rugged video display within various UAV control stations. Able to operate in extreme temperatures, dust, sand, or rain, the display features three composite video inputs, accommodating multiple video feeds simultaneously. “DSE’s advanced LED backlight and night-vision technologies deliver clear and high-resolution imagery from UAV sensors under any lighting condition,” says Doug Hladek, DSE business development manager.
The challenges for display systems are significant in the battlefield environment, requiring high brightness and contrast levels with a good dynamic range to enable the display of multispectral, color, detailed sensor data in a variety of military environments, explains Anthony McQuiggan, chief technology officer at Curtiss-Wright Controls Embedded Computing in Laindon, England. Compatibility with night-vision technology, for example, is a growing demand for displays deployed in battlefield environments.
“Modern displays, such as those from Curtiss-Wright, feature hardened touch screens and multiple LED lighting sources that deliver optimized high-brightness, high-contrast data display in air and ground vehicles during daylight and starlight conditions,” says McQuiggan. “They offer full-color presentation of map and warning information, while maintaining compatibility with night-vision goggles, so goggles and head-down viewing of the displays are possible simultaneously without one affecting the other.”
The complexity of video on the battlefield is increasing month on month with the introduction of digital video sources, compared to the traditional analog techniques used in legacy systems, says McQuiggan. “The requirement to move video signals between airborne platforms, mobile vehicles, and individual soldiers requires compression,” he says. “The use of complex compression techniques, to allow the near-lossless compression and transmission of the video, places challenges on the equipment used in these applications.” Further, 360-degree awareness sensors and the fusion of images from those sensors into a seamless, 360-degree, high-resolution view also increase the storage, distribution, and image processing required. Curtiss-Wright products using high-speed GPGPUs enable the harmonization and presentation of 360-degree views to be achieved in small processing modules able to be fitted to ground and airborne platforms.
“The technology is moving to keep pace with current requirements, which are moving to higher resolution, faster refresh rate, and the ability to manipulate large quantities of data in real time and to extract features and content from the imagery automatically to reduce operator workload,” McQuiggan continues. “The challenge is always to pack previously unprecedented processing power into a small environment. Curtiss-Wright has introduced solutions to allow compact packaging and high-density processing to be realized in very high vibration and demanding vehicle platforms.”
Curtiss-Wright is currently deploying video distribution and management systems into aircraft for the National Guard and into the U.S. Army’s UH-72 light utility helicopter program. “These systems provide an optimized, friendly man-machine interface to operators, allowing them to select, manipulate, and forward video in a real-time environment to carry out missions,” McQuiggan says.
Fast fielding & future functionality
Today’s warfighters are gaining new technology faster than ever, but technology is moving even faster, says Schaffer. “What is available as a completed system today, ready for deployment, finds a long, painful route to installation on the platform. Low SWaP; open-source software; powerful, simple programming and user interfaces; and the ability to introduce new functionalities into existing infrastructures—these are all ways in which GE is shortening the time between technology emergence and warfighter capability.”
The airborne market, in particular, is looking at ruggedized COTS to gain the performance needed now and in the near future and to provide the best value to end users, says Prouty. “Programs that fund a point solution and wait 18 months to 36 months for full-rate production are taking on two major risks: processing performance limitations and technology obsolescence,” he cautions.
“The demand for real-time, high-resolution imaging systems in all parts of the battlefield is growing and the range of applications, the vehicle types, and the requirements are becoming more diverse,” McQuiggan explains. Curtiss-Wright Controls Embedded Computing has responded by specifically establishing a new project group to address the requirements of video and display systems on the battlefield, where the demand for high capacity, very fast storage, distribution, and image processing is high.
The market is in a very rapid “requirement growth” stage, Prouty agrees. “Until the software can mature, we expect it to continue to drive multiple iterations of technology improvements.”
For mil-aero applications, there is a push to make video compression products smaller, lighter, and faster, Veenstra says. “I expect that product development will proceed in two directions: smaller form factors that take advantage of the increased processing power and standard form factors that can support more and more simultaneous data streams.”
Steele predicts a bright future for GPUs in graphically demanding mil-aero applications. “The future points to exponential growth of video and imaging data, the fast analysis of which will continue to be a tremendous challenge to the intelligence community. Future solutions must be automated and massively parallel. We will see GPUs driving all forms of video, image, and geospatial intelligence processes and analytics.”
Bandwidth, processor speed, power consumption, size, rapid development—these are the challenges, Schaffer affirms. “Meeting them takes a combined approach and an understanding of how video and image processing is used in the field.”