Multi-sensor fusion hits the mainstream

Nov. 1, 2009
Once considered as futuristic, difficult, and elusive as artificial intelligence, multi-sensor fusion is coming into its own as a standard approach of processing signals from a wide variety of sensors, and making sense of incomplete and sketchy sensor data.

By John Keller

Multi-sensor fusion, once considered a black art of dubious reliability among military computer scientists, is starting to enter the realm of mainstream computing. A growing number of sensor-fusion applications are in advanced development or are seeing military service in the field, among them the U.S. Navy Raytheon Cooperative Engagement Capability (CEC), the BAE Systems Guardian Angel system, the U.S. Army Joint Land Attack Cruise Missile Defense Elevated Netted Sensor (JLENS) system, and the Boeing-Australia NC3S Vigilare system.

A central technological breakthrough involves the burgeoning ability for computer scientists to convert sensor data into standard data formats for processing and distribution over secure and non-secure networks, and then reconstitute this data quickly and reliably enough at the receiving end to be understandable and useful to those who need the data most.

The Australian Vigilare sensor-fusion system takes data from the Australian Wedgetail airborne warning and control aircraft, shown above.

This ability to convert sensor data into a standard information format for easy retrieval, processing, and networking over commercially available computer processors and data networks has been the key to integrating new and legacy sensors into a cohesive networked system that blends bits and pieces of digital information and sensor data into a whole dynamic sensor picture that is much greater than the sum of its parts.

Thus far, the primary use of today’s most advanced multi-sensor fusion technology for the military has been for integrated air- and cruise missile defenses, yet experts say the principles of what exists today eventually could be used to create gigantic sensor fusion systems that blend data from sensors based in space, in the air, on land, and on the sea, as well as vast networks of undersea sonar systems.

For today, however, military multi-sensor fusion aims at an integrated air picture that includes information from military and civilian air surveillance radars, aircraft-based radar systems, as well as data from civil air traffic controllers and flight plan databases of reported takeoffs and landings of private, commercial, and military aircraft.

Guardian Angel

The BAE Systems Technology Solutions group in Merrimack, N.H., has experience in multi-sensor fusion from a program sponsored by the U.S. Army Research Laboratory in Adelphi, Md., called Guardian Angel, which was a technology demonstration program designed to detect improvised explosive devices (IEDs) in dangerous portions of Iraq and Afghanistan.

“Guardian Angel was focused on the benefits of multi-sensor fusion in going after very, very difficult targets,” explains Clark Freise, vice president and general manager of BAE Technology Solutions, who leads efforts to use imagery and near-imagery sensors to look for indeterminate targets. “It was a quick effort to bring together an architecture to pull together several sensors, fusion algorithms, and automatic target-recognition algorithms that have shown promise.”

Among the sensors involved in the Guardian Angel program were infrared and visible-light imaging sensors, lower-frequency ground-penetrating radars, multispectral sensors, and laser-based sensors.

The challenge of Guardian Angel was to make the best use of data from fleeting passes of sensor platforms like aircraft to detect IEDs in areas where imaging sensors are present only rarely and for short duration. “A lot of the targets we look for are not direct line of sight, so you can’t get a lot of pixels on the target,” Freise explains. “How do you go after those very difficult targets that have been obscured?”

The Australian Wedgetail airborne warning and control system aircraft, shown above, plays an important part in the Australian Vigilare multi-sensor fusion system.

The key for Guardian Angel was to combine, or essentially overlay, imagery and near-imagery data from many different sensor passes over time, and determine any deviation from one sensor pass to another that would suggest any changes in the scene–disturbed soil, vehicle tracks, objects moved, new holes, or holes filled in.

“Part of what you look for is a change that has occurred,” Freise explains. “You might see a certain signature, and from that signature you might remember how something has changed. The benefit is the ability to combine sensor outputs and help an analyst in a ground station or operations center pull together the right pieces. That analyst is looking for something anomalous. If there is no variation across multiple sensors, chances are you do not have a target embedded. You look at what causes you alarm, or a material that is not homogenous. You look at conditions and monitor for changes.” The Guardian Angel algorithms run on a standard laptop computer.
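The overlay-and-compare approach Freise describes can be sketched as a simple per-pixel change map between two registered sensor passes. The grids, intensity values, and threshold below are invented for illustration and assume the passes are already co-registered; the Guardian Angel algorithms themselves are not public.

```python
# Sketch of cross-pass change detection: flag pixels whose intensity
# shifted by more than a threshold between two registered passes.
# All values here are illustrative, not Guardian Angel data.

def change_map(pass_a, pass_b, threshold):
    """Return a grid of booleans marking significant pixel changes."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(pass_a, pass_b)
    ]

# Two passes over the same scene; one region changed between them
# (e.g., disturbed soil reflecting differently).
earlier = [
    [10, 10, 11],
    [10, 10, 10],
    [11, 10, 10],
]
later = [
    [10, 10, 11],
    [10, 45, 10],   # anomalous intensity at row 1, column 1
    [11, 10, 10],
]

flags = change_map(earlier, later, threshold=20)
anomalies = [(r, c) for r, row in enumerate(flags)
             for c, hit in enumerate(row) if hit]
print(anomalies)   # [(1, 1)]
```

In practice an analyst would see the flagged region highlighted on a display rather than as coordinates, but the principle is the same: no variation across passes, no candidate target.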

Multispectral imaging sensors, which can separate and combine several different wavelengths of reflected light, for example, can detect recently disturbed soil and many other indications of recent change in a scene. Visible-light sensors can detect physical changes in a scene over time. Infrared sensors can indicate warm areas where humans or vehicles might have been recently, as well as disturbed soil and other factors.

BAE Systems experts also brought target-recognition algorithms to bear in Guardian Angel sensor processing to help analysts emphasize changes in sensor information, and filter out consistent data that would not be of interest.

“By combining sensors, you can get a higher detection rate at a smaller false-alarm rate, and get to something that is actionable, where you would send a team out to deal with a target,” Freise says. “Nobody has found a single sensor that can go out and consistently find these difficult-to-find targets that are causing our troops problems, like IEDs. They are buried, so what you can see is probably not the threat.”
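The detection-versus-false-alarm tradeoff Freise describes can be illustrated with an idealized two-sensor model. The probabilities below are hypothetical, and the calculation assumes the sensors err independently, which real sensor suites rarely do.

```python
# Idealized fusion of two independent detectors: requiring agreement
# (AND) slashes false alarms, while accepting either report (OR)
# raises detection probability. Numbers are illustrative only.

def and_fusion(p1, p2):
    """Probability both sensors declare a target."""
    return p1 * p2

def or_fusion(p1, p2):
    """Probability at least one sensor declares a target."""
    return 1 - (1 - p1) * (1 - p2)

pd, pfa = 0.9, 0.1   # single-sensor detection / false-alarm rates

# AND-fusion: detection dips slightly, false alarms drop tenfold.
print(round(and_fusion(pd, pd), 3), round(and_fusion(pfa, pfa), 3))
# OR-fusion: near-certain detection, but more false alarms.
print(round(or_fusion(pd, pd), 3), round(or_fusion(pfa, pfa), 3))
```

A fused system can mix both rules across sensor types, which is one way combining sensors yields a higher detection rate at a smaller false-alarm rate than any single sensor.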

The Guardian Angel research program, which has been in place over about the past 18 months, not only has demonstrated solutions to the concealed IED problem, but the program also has provided a kind of research test bed to identify new and different kinds of sensors that are promising for IED detection. “This has helped us test out promising sensors. Some may be deployed in the near term,” Freise says.

Although the Raytheon Cooperative Engagement Capability system today is for aircraft surveillance and tracking, algorithm changes eventually could enable the system to handle ballistic missile defense.

“The program was about the architecture necessary to bring a broad range of sensors together that can run their individual algorithms, and then we run target-recognition and sensor-fusion algorithms on top of that,” Freise says. “It was a great program to display raw products, run thorough fusion and target-recognition algorithms, and compare and contrast the resulting data. We have used a large number of targets, sensors, and algorithms. We can combine algorithm A and algorithm B, and come up with something better than the results of each sensor one at a time.”

Although the Guardian Angel architecture is still in the laboratory, Freise says he hopes at least portions of it make it to the field. “The technology could be ready within six months if we had a path to follow. We would hope to put the architecture on a ground-based sensor with links to different sensors.”

NC3S Vigilare

Another example of cobbling together many different kinds of data in a multi-sensor fusion system is the Australian Network Centric Command and Control System (NC3S) air-defense system, better known as Vigilare. This is a product of the Boeing Defence Australia Network and Space Systems segment in Brisbane, Australia. The Vigilare system blends information from civil and military ground-based radar systems, naval ships, and military and civil defense aircraft radar, as well as database information on military and civil aircraft takeoffs and flight plans.

The Vigilare system is primarily for air surveillance of the Australian continent and surrounding ocean areas, yet it also handles tactical air operations to vector military surveillance aircraft, in-flight tankers, and jet fighter aircraft to areas of concern when the system’s surveillance segments identify targets of interest, or outright threats.

As many as 50 sensors currently feed into the Vigilare system, where air surveillance experts use tailorable multifunction displays to keep an eye on all aircraft flying over land or over surrounding ocean areas, tagging each aircraft with a symbol indicating whether the target is benign, unknown, or considered a threat.

Today’s sensor-fusion systems often are able to take inputs from ground-based sensors like the FPS-117 radar, shown at right.

“This is a state-of-the-art air defense system for Australia, and will form the Royal Australian Air Force’s network-centric capabilities,” says Steve Parker, vice president and general manager of Boeing Network and Space Systems in Australia. “Vigilare is very important for the Australian defense forces, and in the command, control, and communications space, is one of the most advanced systems in the world.”

Vigilare automates the fusion and distribution of air surveillance and tactical air control information in real time, Parker explains. The multi-control tracker for surveillance and air space battle management links to military surveillance radar systems, such as high-speed tactical air defense radars and the lower-speed Jindalee Operational Radar Network (JORN)–an over-the-horizon radar network that keeps constant watch over the ocean and island areas between Australia and the Asian continent. Also linking in to the Vigilare multi-sensor fusion system are civil air traffic control radars similar to those the U.S. Federal Aviation Administration operates.

Vigilare also connects to aircraft, ships, and mobile radar systems via Link 16 and Link 11 military wireless networks. Included in these radar platforms are Australian patrol aircraft such as the Wedgetail–a Boeing 737 converted to an airborne early warning and control aircraft similar to the U.S. Air Force E-3 Sentry AWACS, as well as Australian P-3 Orion maritime patrol aircraft, explains Lee Davis, the Vigilare deputy program manager at Boeing Defence Australia.

Vigilare also connects to civil aviation authorities in Australia to blend aircraft flight plan and en-route information into the sensor mix. This information on up-to-date aircraft departures and flight plans enables Vigilare controllers to correlate departure times and planned routes with radar information they see on their screens. Military flight planning information–also part of Vigilare sensor inputs–contains information like position and logistics data, equipment on board, as well as air tasking orders and mission data.

“We have a large number of different sensor types, so we have developed a common internal data format for distributing that information within the system,” Parker says. Boeing developers considered, but rejected, the idea of translation services at sensor and agency sites, not only because of the complexity, but also because of the problem of translating data from sensors that operate at different speeds–such as tactical military radar and long-range, over-the-horizon radar. Instead, “at the sensor site we down-convert into a common Vigilare format and distribute that common format over high-speed networks.”
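The down-convert-at-the-sensor-site pattern Parker describes can be sketched as a set of per-sensor translators that emit one common record schema. The field names, units, and native formats below are invented for illustration; the actual Vigilare format is proprietary.

```python
# Sketch of converting heterogeneous sensor records into a common
# format at the sensor site, before distribution over the network.
# Record layouts and field names here are hypothetical.

def from_tactical_radar(rec):
    """Hypothetical fast tactical radar record -> common format."""
    return {
        "source": rec["radar_id"],
        "lat": rec["lat_deg"],
        "lon": rec["lon_deg"],
        "time_s": rec["t_ms"] / 1000.0,   # normalize milliseconds to seconds
    }

def from_oth_radar(rec):
    """Hypothetical over-the-horizon radar record -> common format."""
    return {
        "source": rec["site"],
        "lat": rec["position"][0],
        "lon": rec["position"][1],
        "time_s": rec["epoch_s"],
    }

track = from_tactical_radar(
    {"radar_id": "TR-1", "lat_deg": -27.5, "lon_deg": 153.0, "t_ms": 1500})
print(track["source"], track["time_s"])   # TR-1 1.5
```

Because every translator emits the same schema, downstream fusion and display software never needs to know which sensor, fast or slow, produced a given record.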

How human operators actually interact with Vigilare sensor-fused data involves two kinds of operator workstations–one for surveillance personnel, and the other for tactical air operations.

On surveillance workstations, “you see a geographic situational display, where you can zoom into the operational area you want,” Parker says. The operator sees the Australian land mass and its coastal regions, with an overlay of track data and attribute data formed as tags with information like aircraft type, range, speed, altitude, quality of the track, and the sensors contributing to that track.

The Australian Vigilare sensor-fusion system provides wide-area air surveillance, as well as tactical air control, based on inputs from a variety of air-, land-, and sea-based radar systems.

Surveillance workstations have a tailorable user interface that looks much like a Windows-style interface, which operators can customize down to symbology sets, colors, and tag styles. The surveillance workstations generally handle larger areas than the tactical air operations workstations because the surveillance specialists must determine if any entity on their displays requires further investigation, Parker says.

Tactical air operations workstations in the Vigilare system are far more focused on specific missions than the surveillance displays. Air operations specialists can assign missions to military patrol or surveillance aircraft automatically without voice commands via Link 16 and Link 11 military tactical data links. In this way, the Vigilare system can manage tactical engagements down to the level of system cueing commands.

“On the screen where air defense operators manage the engagement, the system will display symbology of those entities that have a mission assignment–down to vector commands, engagement geometry, and time to intercept,” Parker says. “We give that information to the cockpit of the aircraft involved for the duration of the mission.”

Tactical air operators also have the option of cueing a mid-air refueling tanker to an investigating aircraft that requires refueling, by providing the tanker with steering and other en-route information.

Although the Vigilare system was developed with Australia in mind, it could handle any region in the world, Parker says. “It looks a bit like Google Earth, so you can spin to any region on the globe and zoom in. In remote areas, it can connect to mobile radars via satellite to feed into host country operations. The system can accommodate deployable communications cabins suitable to anywhere in the world, connecting to the system in Australia via satellite.”

Vigilare, furthermore, is built with commercial off-the-shelf (COTS) computer server equipment, such as Dell and Sun commercially available servers, and a Cisco-style IP networking infrastructure for affordability, Parker says. Data security in Vigilare is “quite advanced,” he says. “We receive information in multiple security domains and rebroadcast that data over multiple security domains. It is a very secure architecture, and handles multi-level security on the data.”

Cooperative Engagement Capability

Perhaps one of the most mature multi-sensor fusion systems available today is the Raytheon Cooperative Engagement Capability (CEC), an air surveillance and defense system which has been deployed on U.S. Navy ships and aircraft since about 1993. The system was designed by engineers at the Raytheon Co. Network Centric Systems division in St. Petersburg, Fla. CEC is a real-time sensor netting system that integrates different kinds of radar and other sensor data into a composite single integrated air picture (SIAP) to yield tracking data of sufficient fidelity to provide weapons firing solutions for ships, aircraft, and land vehicles.

CEC has its own proprietary anti-jam communications network, and Raytheon designers are reducing the size of its antennas and terminals, as well as extending the network’s range to more widely disseminate its data.

“What CEC was originally designed to do–and what it is doing today, and will do even better in the future–is deal with incomplete and sketchy information,” explains Patrick Speake, director of joint sensor networking at Raytheon Network Centric Systems. “We take the measurement data from various sensors and fuse that together such that the composite track you get from that fused data is better than the signal that any one sensor could give you.”
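A minimal way to see why a fused track can beat any single sensor is inverse-variance weighting: combining two independent measurements of the same quantity yields an estimate whose variance is smaller than either input's. The numbers below are illustrative, and CEC's actual fusion algorithms are proprietary and far more elaborate.

```python
# Inverse-variance fusion of two scalar measurements: the fused
# estimate is more precise than either sensor alone. Values are
# illustrative, not CEC specifics.

def fuse(x1, var1, x2, var2):
    """Weight each measurement by the inverse of its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# A ship radar and an airborne radar measure the same target range (km),
# with different accuracies.
x, var = fuse(100.4, 4.0, 99.8, 1.0)
print(round(x, 2), round(var, 2))   # 99.92 0.8
```

The fused variance (0.8) is smaller than the better sensor's variance (1.0), which is the statistical heart of "better than the signal that any one sensor could give you."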

Here is a typical CEC scenario: The system might deploy on a Navy Aegis cruiser with the ship’s radar locked onto a target’s track and feeding that radar track information into the CEC. The CEC network, in touch with an orbiting E-2C early warning system aircraft, is able to feed the ship’s radar track data to the E-2C in real time. “Both have the same, exact, composite radar picture,” Speake says.

Not only is CEC integrated with Navy ships and aircraft, but it also is becoming part of the U.S. Marine Corps Composite Tracking Network (CTN) for deployable land-based radar systems, as well as for the U.S. Army Joint Land Attack Cruise Missile Defense Elevated Netted Sensor (JLENS) system of aerostat-based networked radar that can be deployed in hot spots throughout the world to keep watch for low-flying cruise missiles.

Today, CEC technology consists of its anti-jam wireless network, a sensor-fusion processing computer engine and software, and the CEC antenna that transmits and receives the common CEC sensor picture to all platforms on its network. “CEC is line of sight only,” Speake says. “The E-2 or other air asset extends your line of sight. We have done some studies about extending the network via satellite, but there are some latency issues we would have to deal with. In the future, however, there could be an option for satellite communications.”
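How much an airborne asset like the E-2 "extends your line of sight" follows from the fact that radio horizon grows with the square root of antenna height. The sketch below uses the standard 4/3-earth-radius approximation, d_km ≈ 4.12·√(h_m); the altitudes are illustrative, not CEC specifics.

```python
# Radio horizon under the standard 4/3-earth approximation:
# higher antennas see farther, so an airborne relay vastly extends
# a surface network's reach. Heights are illustrative.
import math

def radio_horizon_km(h1_m, h2_m=0.0):
    """Max line-of-sight range between antennas at heights h1, h2 (meters)."""
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

mast = radio_horizon_km(30.0, 30.0)       # ship mast to ship mast
relay = radio_horizon_km(30.0, 7600.0)    # ship mast to aircraft at altitude
print(round(mast), round(relay))   # 45 382
```

Roughly an order-of-magnitude gain in reach from one airborne relay, which is why extending the network further, via satellite, raises latency questions rather than coverage ones.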

Over the past five years, Raytheon experts have reduced the size, weight, and power consumption of CEC equipment to reduce costs and enable the system to fit on a widening variety of platforms. “Our data processing terminal used to be the size of a double-wide refrigerator, and now is the size of a large microwave oven for shipboard applications,” Speake says. “For the airborne application, in the past the CEC processor was four ATR boxes that took up a lot of room on the E-2. Now it is down to a single ATR box, and has reduced a lot of weight on the aircraft.”

For JLENS, Raytheon is working with the Army not only to provide networked sensor pictures, but also to provide blimp-to-blimp communications among the system aerostat radar platforms. For the Marine Corps CTN, CEC networks with the Marine TPS-59 ground-based radar systems.

“We don’t have the same anti-jam capability in the CTN antenna,” Speake points out. “It is about one-tenth the size of what we have on an aircraft carrier; it’s a tradeoff.”

For the future, Raytheon engineers are considering shrinking the CEC equipment further to enable the system to fit on platforms like the Global Hawk unmanned aerial vehicle (UAV). “We would be very interested in a CEC system sized for a UAV,” Speake says, adding that Raytheon is talking to the U.S. Air Force about the possibility of fitting CEC to the E-3 Sentry airborne warning and control system (AWACS) aircraft.

Also for the future, Raytheon is “working with the Navy to develop some diverse new network architectures to allow us to get to much larger network sizes for additional assets like UAVs,” Speake says. “The Navy is looking at taking this internationally with some of our allies.”

Today, CEC is primarily concerned with aircraft and cruise missiles, but Speake says the system eventually could be adapted to track ballistic missile threats. “If there are new mission applications in the future, we can adjust our algorithms to accommodate them,” he says.
