Joining sensors through data fusion

Nov. 1, 2008
The Pentagon and U.S. defense industry are pursuing several major programs that depend on enabling radar, infrared systems, and other sensors to work together such that they combine their strengths and mitigate their weaknesses to create a sensor network that is greater than the sum of its parts.

By John Keller

Sensors have become ubiquitous on the battlefield to detect and track everything from aircraft to foot soldiers, and from enemy radar systems to covert communications. Yet the number of sensors continues to grow exponentially as the global battle space becomes digitized, and data experts are struggling to refine sensor outputs into useful information, rather than just the electronic noise of information overload.

To do this, they are relying on various approaches to sensor-fusion technology to enable individual sensors to work together to pool their information and understanding, fill in blanks or incomplete data, compensate for weaknesses, and essentially create a whole sensor picture that is greater than the sum of its parts.

A short-range, unitary, Scud-like ballistic-missile target lifts off from the Mobile Launch Platform, the decommissioned USS Tripoli, during a Missile Defense Agency flight test off Hawaii last June.

The notion of sensor fusion has been discussed for decades, and has been developed and deployed on several major military weapons systems, such as the U.S. Navy’s Cooperative Engagement Capability (CEC), a Raytheon Co.-built system that enables Navy ships and aircraft to combine radar data for improved defenses against attack aircraft and cruise missiles–particularly in coastal waters where land clutter can make it difficult to formulate a reliable radar picture.

Advanced commercial off-the-shelf (COTS) microprocessors and high-speed wireless data networking have made sensor fusion even more feasible today for a broad variety of military and aerospace applications than it was a decade ago when the CEC was first deployed. In the future, in fact, the military’s concept of network-centric warfare rests on the foundation of sensor fusion.

Network-centric warfare, by definition, relies heavily on sensor information, from night-vision rifle scopes on the forward edge of the battlefield to orbiting satellites–all shared, filtered, refined, analyzed, and available on tactical and strategic networks. This fused information, furthermore, eventually should be available to a wide variety of forces that need different levels of secrecy, all tailored for the needs of specific users.

The meaning of sensor fusion

Sensor fusion falls within the U.S. Department of Defense (DOD) overall definition of information fusion, as specified by the DOD’s Joint Directors of Laboratories (JDL). The JDL’s definition of information fusion has four levels, ranging from identifying and tracking targets of interest, to determining whether these targets are threats.

Level 1 of the JDL’s model concerns the ability of sensor-fusion technology not only to detect and identify targets of interest, but also to fix the targets in space and time. At this level, sensor systems work together to determine what a target is and where it is at a specific moment in time. Using this distilled information about a target’s location over time, Level 1 information fusion builds target tracks–the paths targets have been following and the paths they are likely to take in the future.
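
As a rough illustration only, the sketch below captures the spirit of that Level 1 step: timestamped position fixes on a single target are distilled into a track, and the track’s estimated velocity is used to project where the target is likely to be next. The data, the constant-velocity assumption, and the function names are simplifications invented for this example, not any fielded system’s algorithm.

```python
# Minimal sketch of JDL Level 1 track building (illustrative only):
# fuse timestamped position fixes into a track, then extrapolate it forward.

def build_track(detections):
    """detections: list of (time_s, x_m, y_m) fixes on one target, sorted by
    time. Returns the latest fix plus a crude constant-velocity estimate."""
    (t0, x0, y0), (t1, x1, y1) = detections[0], detections[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (t1, x1, y1), (vx, vy)

def predict(track, lead_time_s):
    """Project the track ahead to estimate where the target is likely to be."""
    (t, x, y), (vx, vy) = track
    return (t + lead_time_s, x + vx * lead_time_s, y + vy * lead_time_s)

# Position fixes on one target, pooled from the sensors tracking it:
fixes = [(0.0, 1000.0, 2000.0), (5.0, 1250.0, 2100.0), (10.0, 1500.0, 2200.0)]
track = build_track(fixes)
print(predict(track, 30.0))   # likely position 30 seconds from now
```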

Level 2 of the JDL’s information fusion model finds patterns among the locations and movements of many different targets in an area of interest. This can indicate mass, coordinated movement, simply chaotic and random wandering, or several different pockets of coordinated movement. From this kind of sensor fusion, human analysts can make judgments about whether an enemy attack might be in progress, whether enemy forces are massing for any reason, or whether a battlefront is quiet.

Level 2 in the JDL’s model, which is called “situation refinement,” helps analysts form an abstract interpretation of how targets of interest might be behaving, based on the targets’ relationships to each other. In other words, it helps form a situation assessment.

Building on the sensor fusion achieved in Levels 1 and 2, the next layer of the JDL information fusion model continues an iterative process of comparing the patterns in location and movement of targets of interest, and finding larger patterns of movement, concentration, dispersal, or simply stationary behavior.

This process, called “threat refinement,” helps analysts make judgments about the potential intentions of the targets of interest, and to assess the threats those targets might pose as part of an overall threat assessment.

The fourth level of the JDL model continues monitoring and assessing the locations, movements, and patterns of targets of interest to further determine threats, intentions, and potential scenarios.

Cooperative Engagement Capability

The Navy CEC and its related AN/USG-2(V) cooperative engagement transmission processing set help ships and aircraft share sensor and weapons data so that they can use tracking and targeting data from other platforms, and blend their collective data to fill in blanks and put ambiguous or incomplete information to best advantage.

CEC, primarily an air-defense system, is from the Raytheon Systems Co., Network Centric Systems segment in St. Petersburg, Fla. (formerly Raytheon E-Systems), together with Johns Hopkins University Applied Physics Laboratory in Baltimore. The system essentially blends ship and aircraft radar into one composite track picture. The CEC is composed of three elements–the Cooperative Engagement Processor (CEP), the Data Distribution System (DDS), and the interfaces between the CEP and the ship’s weapon systems.

This ability enables separate ships and aircraft to concentrate their radar surveillance in specific sectors. Operating individually, ships and aircraft would have to direct their radar systems across the entire horizon and risk missing important information.

An SM-3 launches in 2006 from the Navy cruiser USS Shiloh (CG 67), during a joint Missile Defense Agency, U.S. Navy ballistic missile flight test. Two minutes later, the SM-3 intercepted a separating ballistic missile threat target.

CEC ships and aircraft use the composite data for missile engagements by coordinating anti-air-warfare (AAW) sensors into one composite track picture. Each CEC unit combines own-ship radar measurement data with data from all other CEC units using the same CEC algorithms. CEC distributes radar measurement data (not tracks) from each CEC unit to all other CEC units.
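
The distinction matters: because CEC shares raw measurements rather than finished tracks, and every unit runs the same algorithms on the pooled data, each ship and aircraft independently arrives at the same composite picture. The sketch below is a hypothetical illustration of that idea only; the data layout is invented, and the simple averaging stands in for CEC’s far more sophisticated tracking.

```python
# Illustrative sketch: every cooperating unit pools the raw radar measurements
# shared by all units and runs the identical fusion routine, so each ship or
# aircraft computes the same composite track picture independently.

def fuse_measurements(measurements):
    """Average the position measurements of one target into a composite fix
    (a stand-in for the real tracking algorithms, which are far richer)."""
    xs = [m["x_km"] for m in measurements]
    ys = [m["y_km"] for m in measurements]
    return {"x_km": sum(xs) / len(xs), "y_km": sum(ys) / len(ys)}

own_ship = [{"unit": "cruiser", "x_km": 10.1, "y_km": 50.3}]
received = [{"unit": "destroyer", "x_km": 10.4, "y_km": 50.0},
            {"unit": "aircraft", "x_km": 9.9, "y_km": 50.2}]

# Each unit fuses the same pooled measurements with the same algorithm,
# so every unit's composite track agrees.
print(fuse_measurements(own_ship + received))
```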

Ballistic missile defense

One of the most pressing applications of sensor fusion is blending information from several different radar systems in ballistic missile defense, such as the U.S. Ground-Based Midcourse Defense (GMD) program. GMD blends information from sea-based mobile radar systems; the Upgraded Early Warning Radar (UEWR) at Beale Air Force Base, Calif.; ship-based radar systems aboard U.S. Navy Aegis-equipped cruisers and destroyers; the mobile Sea-Based X-Band (SBX) radar system; radars from the Terminal High-Altitude Area Defense (THAAD) missile batteries; and surveillance radar systems in Alaska and Europe.

“Our sensor and communications networks enable the GMD’s ability to hit a bullet with a bullet,” says Dr. Greg Hyslop, vice president and program director for the Ground-based Midcourse Defense (GMD) program at the Boeing Co. in St. Louis. GMD feeds fused radar information to interceptor missiles at Fort Greely, Alaska, Vandenberg Air Force Base, Calif., and at a future missile field in Poland.

One of two ballistic missiles launches from the Pacific Missile Range Test Facility at Barking Sands, Kauai, Hawaii, to be intercepted about 250 miles from Kauai and approximately 100 miles above the Pacific Ocean by missiles launched from a U.S. Navy cruiser in 2007. Sensor fusion helped make the engagement a success.

“Our most complicated sensor fusion test was this summer, [with a target] we launched from Kodiak, Alaska, flying to the southeast toward an intercept area off the West Coast,” Hyslop says. The test was designed to simulate potential ballistic missile launches from North Korea toward the U.S. mainland.

“We had TPY-2 radar in Juneau, Alaska, and had an Aegis ship track the target,” Hyslop explains. “All those tracks went to the BMDS [Ballistic Missile Defense System] battle manager in Colorado Springs, Colo. Once the GMD fire control received those tracks, it cued the radar at Beale. We also cued the SBX radar. Data from all four sensors were being processed and fused by the GMD fire-control system to provide the firing solution. Fusing data from four sensors in real time, with the data correlated and fused, was sufficient for the interceptor to do its job. It was the first time we had fused and correlated data from four sensors in a test like that.”

The GMD system uses COTS computer processors, a state-of-the-art tactical network, and software algorithms that Hyslop says are well understood. “Algorithmically, the track data includes position and velocity, but also includes a covariance for uncertainty–or a representation of how accurate the track is,” he says. “The correlation is to say two tracks from two sensors are tracking the same thing. We fuse those tracks into one track, and come up with combined track information with one answer that says they are tracking the same object, and using the best data from all the sensors. That is sensor fusion.”
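
Hyslop’s description maps onto a standard, well-understood recipe: first test whether two tracks are statistically close enough to be the same object, then combine them weighted by their stated uncertainty so the more accurate sensor counts for more. The one-dimensional sketch below illustrates that generic recipe under simplifying assumptions; it is not the GMD fire-control algorithm.

```python
import math

# Generic track-to-track correlation and fusion, sketched in one dimension.
# Each track is (position, variance); the variance plays the role of the
# covariance Hyslop describes -- a measure of how accurate the track is.

def correlated(track_a, track_b, gate=3.0):
    """Are two tracks statistically close enough to be the same object?"""
    (xa, va), (xb, vb) = track_a, track_b
    return abs(xa - xb) / math.sqrt(va + vb) < gate

def fuse(track_a, track_b):
    """Inverse-variance weighting: the more accurate track gets more weight,
    and the fused uncertainty is smaller than either input's."""
    (xa, va), (xb, vb) = track_a, track_b
    w = vb / (va + vb)
    return (w * xa + (1.0 - w) * xb, (va * vb) / (va + vb))

radar_track = (1520.0, 400.0)   # less accurate sensor
aegis_track = (1505.0, 100.0)   # more accurate sensor
if correlated(radar_track, aegis_track):
    print(fuse(radar_track, aegis_track))   # one fused track: (1508.0, 80.0)
```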

Eventually Boeing and DOD experts will use sensor-fusion technology developed and lessons learned from the GMD program to tie in other large sensor systems for ballistic missile defense, such as the Airborne Laser System (ABL), and other terminal missile-defense systems.

Future Combat Systems

One of the most ambitious projects to fuse sensor data on a large scale is the U.S. Army Future Combat Systems (FCS), a massive program managed by Boeing Integrated Defense Systems in St. Louis that is touted as the cornerstone of Army modernization. FCS will have eight new manned armored combat vehicles, several unmanned air and ground vehicles, new artillery called the Non-Line-of-Sight Launch System, as well as advanced tactical and urban sensors–all connected on a global tactical network.

The underlying objective of FCS is to improve soldiers’ awareness of the battlefield situation so they can survive and take the battle to the enemy. Although the first full FCS-enabled Army brigade is set for fielding in 2015, Army leaders are trying to move FCS-developed technology to the service’s modular brigades as the technology becomes available.

With reference to the JDL’s information fusion model, experts are implementing the full stack of sensor fusion in FCS systems and the overall FCS architecture, explains Ray Carnes, chief engineer for FCS software and distributed systems integration at Boeing Integrated Defense Systems.

“At the lowest level we have lots of sensors, on multiple platforms, that form reports, and the reports are fused locally in sensor-fusion algorithms to get better correlation between sensor technologies,” Carnes explains. “FCS involves thousands of platforms, all of which have sensors with local capability to fuse sensors from their local area. They fuse what they are seeing, and we correlate between the vehicles.”

FCS employs what Carnes calls distributed fusion management, which blends information from many different sensors–optical and electronic–and forms a common operating picture. “We take the fusion picture from each of the platforms and fuse it into a regional picture, so everyone has the same synchronized picture,” he explains. “If you are in a vehicle looking at the common operating picture and someone else is five kilometers away, you want them to see the same target at the same place at the same time.”
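
A hypothetical sketch of the merge step Carnes describes might look like the following: each vehicle contributes its locally fused target list, and reports that fall close together are collapsed into a single regional entry that every platform then displays. The distance threshold and data layout are invented for the example.

```python
# Illustrative sketch of merging locally fused pictures into one regional
# common operating picture: reports from different vehicles that fall close
# together are treated as the same target, so every platform sees one object.

def merge_pictures(local_pictures, same_target_m=50.0):
    regional = []
    for picture in local_pictures:              # one target list per vehicle
        for target in picture:
            for known in regional:
                dx = target["x"] - known["x"]
                dy = target["y"] - known["y"]
                if (dx * dx + dy * dy) ** 0.5 < same_target_m:
                    known["seen_by"] += 1       # same object, another observer
                    break
            else:
                regional.append(dict(target, seen_by=1))
    return regional

vehicle_a = [{"x": 100.0, "y": 200.0}, {"x": 900.0, "y": 40.0}]
vehicle_b = [{"x": 120.0, "y": 210.0}]          # same first target, seen from 5 km away
print(merge_pictures([vehicle_a, vehicle_b]))
```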

FCS architecture engineers use what Carnes calls multiple hypothesis software to refine and resolve information from different sensors to ensure they are giving the same information about the same targets of interest. “You want multiple sensors to tell you there is only one object there,” he says. “Sensors are not perfect and there is always the possibility for error.”

FCS sensor-fusion software algorithms use pre-processing front-end algorithms to ensure the validity of the sensor data before that information blends with that of other sensors. These pre-processing algorithms look for sensor clutter, false alarms, and other bad data to avoid corrupting the situation picture.
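
The sketch below suggests what such front-end screening can look like; the specific checks and threshold values are invented for illustration and are not FCS parameters.

```python
# Illustrative pre-processing gate: screen raw sensor reports for clutter and
# likely false alarms before they are allowed into the fused situation picture.
# The field names and thresholds are invented for this sketch.

def validate(report, min_snr_db=6.0, max_speed_mps=120.0):
    """Reject weak returns and physically implausible ground-target speeds."""
    if report["snr_db"] < min_snr_db:
        return False                    # likely clutter or noise
    if abs(report["speed_mps"]) > max_speed_mps:
        return False                    # implausible for a ground target
    return True

raw_reports = [
    {"id": 1, "snr_db": 14.0, "speed_mps": 12.0},    # solid detection
    {"id": 2, "snr_db": 3.0, "speed_mps": 5.0},      # too weak: clutter
    {"id": 3, "snr_db": 11.0, "speed_mps": 400.0},   # impossible speed: false alarm
]

clean = [r for r in raw_reports if validate(r)]
print([r["id"] for r in clean])   # only report 1 survives to be fused
```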

At higher levels of sensor fusion, the FCS identifies all targets that FCS sensors detect and track as friendly or enemy, and then employs additional correlation algorithms to make judgments about the enemy’s organization and what kinds of units might be in play. “We are trying to determine what kind of organization this is, and whether it is a squad, platoon, or something bigger,” Carnes explains.

Situational and threat assessments distilled through sensor fusion are distributed throughout the battlefield on links ranging from soldier combat radios to satellite networks.

“We make use of about every communications technique you can imagine,” Carnes says. “There are up to 18 different data links within each unit that are supportable. We use almost all of them to communicate sensing data or a common picture out of the platforms. Soldier radio networks let soldiers receive the common picture–including data from small unattended sensors and small unmanned ground vehicles, as well as from larger unmanned ground vehicles and unmanned air vehicles with larger network links and sensors.”

Sensor fusion is a cornerstone of U.S. ballistic missile defense capability. Shown above, a Standard Missile-3 (SM-3) is launched from the Pearl Harbor-based Aegis cruiser, USS Lake Erie (CG 70), on November 6, 2007 en route to an intercept as part of a Missile Defense Agency test of the Sea-based capability under development.

At the battle front, wireless tactical networks with bandwidth in the range of 10 megabits per second can download video from ground vehicles, while satellite links distribute the battle picture out to higher-level headquarters near the battlefield or back to top commanders in the Pentagon.

Capabilities and limitations

Although sensor fusion research has been going on for decades, its capability is still somewhat limited relative to human reasoning. “The human is awesome at combining information,” explains Mike Bryant, universal situational awareness vector lead at the U.S. Air Force Research Laboratory’s sensors directorate at Wright-Patterson Air Force Base in Dayton, Ohio. “With computers and algorithms we are still struggling to get to that level.”

Air Force and defense industry scientists are making evolutionary advances in blending sensor information–particularly for situational and threat assessment–yet expert human analysts are still necessary for making sense of complex sensor data. “We still use image analysts and other types of intelligence analysts to do the fusion work,” Bryant says. “A lot of the really hard problems will remain a human problem for a while.”

The Raytheon Sea-Based X-Band (SBX) radar system, shown above, provides crucial information to sensor-fusion computers for U.S. ballistic missile defenses.

Sensor fusion technology today works best when the sensors involved are detecting and measuring similar kinds of data in similar ways, and are detecting and measuring events that are relatively close to one another in space and time, Bryant says. “From the algorithms perspective, many of those work best when the multiple sources of information are close in spectral content and in the time between one sample and another. The key enabling technologies are very precise position, navigation, and timing information.”

Hence, today’s sensor fusion technology is adept and reliable at tracking spatial changes over time–such as vehicles and people arriving at or leaving a scene, the progress of buildings under construction, or bridges destroyed in the latest bombing raid.

“Change detection is one kind of fusion that is useful,” Bryant says. “We have a reference image of a location, and come back and take another shot of the same area, and compare and contrast the two to find things that were there that are gone, and things that weren’t there before that have arrived.”
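
The change detection Bryant describes boils down to comparing a newly collected, registered image against a reference image and flagging pixels that differ by more than some threshold. The toy sketch below illustrates the idea with tiny integer “images”; the data and threshold are made up for the example.

```python
# Toy change-detection sketch: compare a new image of a scene against a
# reference image of the same (already registered) area, and flag pixels that
# changed -- things that arrived or disappeared between the two looks.

def detect_changes(reference, latest, threshold=40):
    changes = []
    for row, (ref_row, new_row) in enumerate(zip(reference, latest)):
        for col, (ref_px, new_px) in enumerate(zip(ref_row, new_row)):
            if abs(new_px - ref_px) > threshold:
                changes.append((row, col, new_px - ref_px))
    return changes

reference = [[10, 12, 11],
             [13, 10, 12],
             [11, 11, 10]]
latest    = [[10, 12, 90],   # something new has appeared at row 0, column 2
             [13, 10, 12],
             [11, 11, 10]]

print(detect_changes(reference, latest))   # [(0, 2, 79)]
```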

Experts at the Lockheed Martin Corp. Radar Systems segment in Syracuse, N.Y., are involved in blending data from similar kinds of sensors that gather information at nearly the same time. “Where we are is in multisensor data fusion, which is multiple radars in either strategic fixed locations, or perhaps in tactical situations around a region of interest,” explains Doug Reep, vice president and chief engineer at Lockheed Martin Radar Systems.

Enhancing the practicality of blending information from different radar systems is the relatively new process of digitally encoding radar signals with time and location information from the Global Positioning System (GPS), which helps reduce errors caused by overlaying radar signals from different times or places, Reep says.
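
A hypothetical sketch of why that tagging helps: with a common GPS time base, reports from different radars can be grouped by when they were actually taken, so measurements from different moments are never overlaid as if they were simultaneous. The field names and alignment window below are assumptions for illustration.

```python
# Illustrative sketch: GPS-derived timestamps give every radar report a common
# time base, so reports from different radars are overlaid only when they were
# actually taken at (nearly) the same moment.

def align_by_time(reports, window_s=0.5):
    """Group reports whose GPS timestamps fall within the same short window."""
    groups = []
    for report in sorted(reports, key=lambda r: r["gps_time_s"]):
        if groups and report["gps_time_s"] - groups[-1][0]["gps_time_s"] <= window_s:
            groups[-1].append(report)   # effectively simultaneous: safe to overlay
        else:
            groups.append([report])     # a new time slice
    return groups

reports = [
    {"radar": "site_a", "gps_time_s": 100.02, "range_km": 5.0},
    {"radar": "site_b", "gps_time_s": 100.31, "range_km": 5.1},   # overlays with site_a
    {"radar": "site_a", "gps_time_s": 112.00, "range_km": 6.4},   # later, separate look
]
for group in align_by_time(reports):
    print([r["radar"] for r in group])
```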

The problem is straightforward when sensors–like radar systems or imaging cameras–are gathering information in similar ways. It is far more difficult today, however, to blend information from different kinds of sensors–such as radar and infrared images–especially if there is much of a time difference between when the data were gathered.

“There might be infrared data on a target, and another piece of information that might be a radar image,” Bryant says. “It is difficult for algorithms to fuse that information together.” Different kinds of sensors produce different kinds of data based on different statistical models, he says. One of the difficulties of sensor fusion technology is blending data from sensors where experts have different levels of understanding of each sensor’s statistical models.

“When you don’t have a good statistical understanding of the pieces, and you make assumptions that are incorrect, then your algorithms are not going to work,” Bryant says. “A lot of today’s fusion research is to find better characterization of those individual vague pieces of information.”

Vagueness in sensor information often involves a difference in statistical information, Bryant points out. Blended data from different sensors, for example, may be unable to tell human analysts whether a target is simply somewhere in the state of Ohio or on a specific city block in Cleveland. “The challenge is understanding the statistical properties in how you combine the information,” he says.

One of the next challenges in sensor fusion research will be to devise layered architectures that improve the way fighting forces share large amounts of fused sensor information on networks across large theaters of operation, says Mike Nowak, layered sensing team lead at the Air Force Research Lab sensors directorate.

“We want to provide information for anyone from a general officer or carrier battle group commander, all the way to an individual soldier, marine, or airman at the tip of the spear, such as where are the good guys and the bad guys, or what is the attitude of the local sheiks,” Nowak says. “We are looking toward that now. It is very grandiose, but we want to fuse multisource information and get it to every decision maker in the battle space.”

Experts also are trying to find new ways of ensuring the reliability of information from sensor-fusion networks so that incorrect information does not creep in to corrupt situational and threat assessments.

Configuring sensors as nodes on large networks will likely be worth the time and energy it takes to deal with development issues. “I see some things at the network-and-sensor level that have to do with Internet Protocol technologies,” says Lockheed Martin’s Reep. “One of the games we play is what standards do we adopt for sensor flow along a network, and how do we enable that network? The software framework for that network-centric message is very Internet-like, as is how we ship it around the network for specific purposes,” Reep says.


Navy’s newest electronic jamming aircraft is built on sensor fusion

The U.S. Navy’s next-generation electronic warfare and jamming aircraft, the Boeing EA-18G Growler, enhances mission effectiveness, uses smaller and lighter-weight avionics, and reduces the aircraft crew from four to two, primarily through a sophisticated approach to sensor fusion.

The two-seat Growler, which is to replace the Navy’s four-seat EA-6B Prowler aircraft, is designed to suppress enemy radar-based air-defense systems by jamming their signals or destroying hostile radar with the high-speed anti-radiation missile (HARM). The Growler also is designed to jam enemy radio communications networks.

The Growler avionics essentially feed the backseat jammer-control officer with refined and prioritized information by fusing data from four on-board systems: the ALQ-218 digital receiver; the HARM missile system; pre-briefed digital flight information; and the Multifunction Information Distribution System (MIDS) Link 16 communications system, explains Rich Martin, who manages the Boeing EA-18G program.

“For the G, it is really all about sensor fusion,” Martin says. “The whole mode of operation is about sensor fusion. It’s about how you go about prioritizing the information actually presented to the air crew. We went through a lengthy process to develop the right kind of software and the right kind of display technology to allow us to do that. We allow the air crew to operate at a higher level so they are not caught up in doing so many manual tasks.”

The Navy’s next-generation electronic jamming aircraft, the EA-18G Growler, relies on sensor fusion to enhance mission effectiveness and relieve the air crew of unnecessary workload.

Blending information from the Growler’s four primary sensor systems helps enable the aircraft to communicate while jamming–a capability the EA-6B does not have. “Essentially it allows the system to recognize what jamming signals are going out, and to optimize the communications around that,” Martin says.

Blended sensor data is communicated to the Growler’s flight crew through an 8-by-10-inch, high-resolution liquid crystal display with two hand controllers–one on each side–and a series of buttons around the display. “Through this interface, the software we have developed gives the air crew a lot of flexibility in how they can group things,” Martin says.

The Growler is set to deploy for the first time in September 2009. The first 10 fleet squadrons will start trading their EA-6B Prowlers in for Growlers this spring, beginning with Electronic Attack Squadron 129 at Whidbey Island Naval Air Station, Wash.
