Intelligence in three dimensions: we live in a 3-D world, and so should computers

Oct. 1, 2006
Intelligent three-dimensional vision systems do work, yet their reputation for in-the-field use has been more science fiction than fact.

By Maureen Campbell

Since the inception of the camera, military and intelligence agencies have relied on imagery to verify, inform, and visualize objects. While pictures can tell a story, they tell it only from the point of view from which the photo was taken, making them at best ambiguous and at worst misleading. When critical decisions must be made on tight timelines, this can be dangerous.

Three-dimensional sensors make direct measurements that can help create an accurate model of the camera’s field of view. Traditionally the sensor would take the measurements, create the model, and provide an image in a 2-D format. The misconception is that 3-D data should work like 2-D images. But collecting 3-D data is not like capturing 2-D images, so why mimic 2-D imagery procedures and results?

Three-dimensional sensors take direct measurements, removing much of the risk of misinterpretation and making 3-D data better suited to automated, intelligent processing than conventional imagery. Allowing the sensor to think or work autonomously addresses two critical military challenges in the field: bandwidth consumption and critical decision-making time.
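As a rough illustration of what “direct measurement” means, the short Python sketch below converts raw range-and-angle returns from a scanning sensor into 3-D points. It is a hypothetical example, not any vendor’s code; the point is that the output is real-world geometry that can be processed numerically rather than interpreted visually.

import numpy as np

def returns_to_points(ranges, azimuths, elevations):
    """Convert raw range/bearing returns (meters, radians) into x, y, z points.

    Each return is a direct distance measurement, so the resulting points
    carry real-world geometry rather than pixel intensities.
    """
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.column_stack((x, y, z))

# Three returns from a single sweep; each output row is a measured position in meters.
print(returns_to_points([12.4, 12.6, 30.1], [0.00, 0.01, 0.02], [0.05, 0.05, 0.00]))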

Consider an image of trees with some sort of vehicle that appears to be hidden within the foliage. This image, based on data provided by information-acquisition software, must be analyzed, first to determine whether a vehicle is present and, if it is, whether it is moving, in what direction, and at what speed. Most important, analysis must determine whether the vehicle is a threat.

This analysis not only takes up precious time, but also consumes considerable bandwidth delivering the data and significant processing time, whether by machine or human, to extract the critical information.

A sweep of the same area with an intelligent 3-D vision system provides the required information much faster. Because the sensor processes the data on board, it can determine definitively whether a vehicle is present, whether it is a potential threat, and in what direction and at what speed it is heading. The sensor provides this critical information directly, enabling an operator to make an effective, timely decision.
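To make the scenario concrete, here is a deliberately simplified sketch of this kind of on-board processing. The clustering threshold, the synthetic data, and the idea of reporting only centroid, speed, and heading are illustrative assumptions, not Neptec’s actual algorithm.

import numpy as np

def detect_cluster(points, ground_z=0.5):
    """Centroid of above-ground returns, or None if too few to call a vehicle.

    A stand-in for on-board segmentation; a real system would cluster,
    classify, and threat-assess, but would still report a compact summary.
    """
    above = points[points[:, 2] > ground_z]
    return above.mean(axis=0) if len(above) >= 10 else None

def report_between_sweeps(sweep_a, sweep_b, dt):
    """Estimate speed and heading from the centroids of two successive sweeps."""
    ca, cb = detect_cluster(sweep_a), detect_cluster(sweep_b)
    if ca is None or cb is None:
        return {"vehicle_present": False}
    dx, dy = (cb - ca)[:2]                              # horizontal displacement (m)
    return {"vehicle_present": True,
            "speed_mps": float(np.hypot(dx, dy) / dt),
            "heading_deg": float(np.degrees(np.arctan2(dy, dx)))}

# Synthetic example: a vehicle-sized cluster that moves 2 m east and 1 m north in 1 s.
rng = np.random.default_rng(0)
vehicle = rng.normal([20.0, 5.0, 1.2], 0.3, size=(50, 3))
print(report_between_sweeps(vehicle, vehicle + np.array([2.0, 1.0, 0.0]), dt=1.0))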

In this situation, the 3-D sensor may not provide as much data to the operator as the camera does, but the quality of the data it provides is better. This concept of “More Information, Less Data” (MILD) is paramount to the intelligent 3-D sensor approach.

Eric Edwards of Xiphos Technologies in Montreal coined the MILD term. His work in developing small, efficient processors for autonomous mobile systems has convinced him that “In the end it doesn’t matter whether there is a person in the loop or not; there is always an issue of bandwidth. With today’s high-speed links and high-quality visual displays it is easy to forget that but, really, getting the most information into the smallest amount of data is always important.”
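A back-of-the-envelope calculation shows why this matters for bandwidth. The point count and field sizes below are assumed values for illustration only, not figures from the article:

# Illustrative (assumed) numbers comparing a raw 3-D scan with a compact track report.
points_per_sweep = 200_000
bytes_per_point = 12                      # three 32-bit floats: x, y, z
raw_scan_bytes = points_per_sweep * bytes_per_point

report_bytes = 5 * 4                      # class, x, y, heading, speed as 32-bit values

print(f"raw scan:  {raw_scan_bytes / 1e6:.1f} MB per sweep")
print(f"report:    {report_bytes} bytes per sweep")
print(f"reduction: {raw_scan_bytes // report_bytes:,}x")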

With the increasing reliance on mobile robotic platforms for air and ground intelligence, surveillance, and reconnaissance (ISR) applications, the need for sensory systems that can increase autonomy is growing rapidly. The goal for autonomy is to provide better and more rapid support to decision making while reducing bandwidth. Intelligent 3-D systems can contribute both as command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) assets and as operational vehicle systems.

Automatic target recognition (ATR)

One of the premier tactical C4ISR functions is target recognition. Quickly identifying unknown vehicles and classifying them as potential threats is literally a matter of life and death. This motivated Defence Research and Development Canada (DRDC) in Ottawa to do something about it. DRDC officials approached engineers from Neptec Design Group Ltd. in Ottawa to develop a fast and accurate system for recognizing vehicles using 3-D data.

The DRDC is an agency of the Canadian Department of National Defence that responds to the scientific and technological needs of the Canadian armed forces. Its mission is to ensure that Canadian forces remain scientifically and operationally relevant.

The project with Neptec culminated in field testing by the Canadian Army in 2003. Neptec experts built 3-D models of civilian and military vehicles using scans acquired with a triangulation laser scanner. A light detection and ranging (LIDAR) sensor helped acquire test scans of the target vehicles with varying range, pose, and degree of occlusion. Recognition and pose-estimation results were calculated from at least four different poses of each vehicle at each test range. Targets partially occluded by an artificial plane, vegetation, and military camouflage netting were also tested.

The results were significant. Immune to several artifacts that often trick 2-D recognition systems, the system achieved recognition rates of up to 98 percent. These artifacts consisted of very large spatial resolution changes, obscuration, camouflage, and vehicle configuration changes. Moreover, the system recognized the vehicles and correctly estimated their location and orientation.
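The recognition results rest on matching measured 3-D points against stored vehicle models. The sketch below shows the general idea in miniature, scoring how well each library model explains a scan with a nearest-neighbor fit. The pose search a real ATR system performs over position and orientation is omitted, and the function names are invented for illustration rather than taken from Neptec’s system.

import numpy as np
from scipy.spatial import cKDTree

def fit_score(scan_points, model_points):
    """Mean distance from each scan point to its nearest model point (lower is better)."""
    distances, _ = cKDTree(model_points).query(scan_points)
    return float(distances.mean())

def recognize(scan_points, model_library):
    """Return the name of the stored 3-D model that best explains the scan.

    Real ATR also aligns each candidate model to the scan in position and
    orientation before scoring; that pose search is omitted here for brevity.
    """
    scores = {name: fit_score(scan_points, pts) for name, pts in model_library.items()}
    return min(scores, key=scores.get), scores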

Operational problems

Another challenge related to target recognition is target tracking. Overcoming this challenge means perfecting the capability to follow a target and then calculating its position and orientation at an operationally useful rate.

Nowhere is this capability more in demand than in the area of space operations. In 2005, the international Integrated Space Transportation Plan listed the ability to perform autonomous rendezvous and docking as a strategic technology. In response to this imperative, Neptec engineers demonstrated a real-time 3-D tracking capability in May 2005 as part of their work on autonomous rendezvous and docking systems for space.

Using a laser camera, engineers demonstrated the ability to calculate the position and orientation of a true-scale model of a Quicksat satellite. Confirming these results, the Canadian Space Agency (CSA) in Longueuil, Quebec, also performed a tracking demonstration in its robotics facility.

Using a two-armed robot, the CSA mounted a sample target satellite on one arm and a gripper hand on the other. In operation, the 3-D sensor tracked the target satellite and provided real-time guidance data for the gripper hand, which executed various maneuvers including tracking the satellite’s motion and grappling the satellite.
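At the core of such a tracking capability is estimating a rigid rotation and translation between successive scans at an operationally useful rate. The sketch below uses the standard Kabsch algorithm under the simplifying assumption that point correspondences between frames are already known; it is meant only to show the shape of the computation, not Neptec’s method.

import numpy as np

def pose_delta(prev_points, curr_points):
    """Rigid rotation R and translation t mapping prev_points onto curr_points
    (Kabsch algorithm), assuming the same target points are observed in the
    same order in both frames, a simplification: a real tracker must also
    solve the correspondence problem at every frame.
    """
    p_mean, c_mean = prev_points.mean(axis=0), curr_points.mean(axis=0)
    H = (prev_points - p_mean).T @ (curr_points - c_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_mean - R @ p_mean
    return R, t

def track(scans):
    """Tracking loop: each new scan yields an updated pose for the guidance system."""
    return [pose_delta(prev, curr) for prev, curr in zip(scans, scans[1:])]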

Intelligence in three dimensions

The test results show that 3-D systems are capable and operationally useful, and that when three-dimensional systems exploit the data they collect rather than simply depicting it two-dimensionally, they generate impressive, powerful results.

This is important because intelligent 3-D systems should not and will not replace two-dimensional systems. Rather, a new segment for three-dimensional information is being created.

Three-dimensional systems don’t have to operate as “imagers.” Intelligent 3-D image systems are more comparable to traditional radar systems: they display key information in a simple, straightforward manner. “We aren’t interested in drawing pretty pictures. We are interested in extracting the most useful information from the least amount of data,” says Dr. Iain Christie, director of research and development at Neptec.

This attitude stems from Christie’s experience working with NASA astronauts. Preparing them to use Neptec’s machine-vision systems on board the NASA space shuttle and the International Space Station, Christie learned a lot about what operators typically want, as opposed to what the engineers want to give them.

“You know, we started out giving the astronauts complicated graphical displays with all sorts of information represented in multiple ways,” he says. “They didn’t really like them all that much. After a few iterations we ended up with very simple displays that just put the data where they could see it.”

Dynamic change detection

Encouraged by the progress on target recognition and tracking, engineers are extending the principle of intelligent 3-D processing to dynamic change detection. In this application, three-dimensional data helps identify dynamic elements in a scene, or areas of change in a scene that is scanned periodically from a moving sensor.

“The beauty of the Dynamic Change Detection system is that it can operate from a mobile sensor with no requirement for accurate or repeatable positioning,” Christie explains. “This has some real advantages for autonomous operations. We think this technology will make an important contribution in areas like critical asset protection or improvised explosive device (IED) detection.”

Like the other 3-D techniques, the Dynamic Change Detection system has been designed for several platforms, including LIDAR, range-gated cameras, stereo cameras, and shape-from-motion systems.
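One simple way to express change detection in 3-D, once successive scans have been registered into a common frame, is to flag new points that have no counterpart in the reference scan. The sketch below is a minimal illustration of that idea; the tolerance value and function name are assumptions, and the registration step that removes the need for precise platform positioning is not shown.

import numpy as np
from scipy.spatial import cKDTree

def changed_points(reference_scan, new_scan, tolerance=0.25):
    """Points in new_scan with no reference point within `tolerance` meters,
    i.e. the 3-D equivalent of noticing that something appeared here.

    Assumes both scans are already expressed in a common frame; in a
    mobile-sensor system the scans would first be registered against each
    other, which is what removes the need for precise platform positioning.
    """
    distances, _ = cKDTree(reference_scan).query(new_scan)
    return new_scan[distances > tolerance]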

Autonomous navigation system (ANS)

In the area of operational vehicle systems, it seems natural that intelligent 3-D systems will play a significant role in autonomous navigation.

Already, 3-D sensors are a fairly standard feature in this area, but they are typically just one part of a large suite of instruments distributed around the vehicle. The data from all of these sensors is almost always transmitted to a single processor, where it is processed, fused, and interpreted.

The approach works quite well, as last year’s DARPA Grand Challenge showed. Now the hurdle is to move toward systems that do not require the entire vehicle to be designed around its navigation system.

This may be another case where the information content of the 3-D data is not being fully exploited.

“What the vehicle designer really wants is a single system that tells him where he is and where he is going. Then the ANS becomes just another instrument that he has to integrate, instead of being a customized part of each vehicle design from the ground up,” says Sylvain Carriere, general manager of Neptec USA in Houston. “In a sense, we have been working on solving exactly that problem for the space business with our autonomous rendezvous and docking systems. We think the same approach could be applied to problems of terrestrial navigation as well.”
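The interface Carriere describes can be pictured as a navigation unit that hides its sensors and processing behind a single report. The sketch below is purely hypothetical, intended only to show how small that integration surface could be:

from dataclasses import dataclass

@dataclass
class NavReport:
    """Everything the vehicle integrator sees: where am I, where am I headed, how fast."""
    position_m: tuple      # (x, y, z) in a chosen local frame
    heading_deg: float
    speed_mps: float

class AutonomousNavigationSystem:
    """Self-contained ANS stub: it owns its sensors and processing and exposes
    only the navigation answer, so the vehicle treats it as one more instrument
    rather than a design driver."""

    def report(self) -> NavReport:
        # Placeholder values; a real unit would fuse its on-board 3-D sensor
        # data internally before answering.
        return NavReport(position_m=(0.0, 0.0, 0.0), heading_deg=90.0, speed_mps=4.2)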

Maureen Campbell is a staff writer at Neptec Design Group Ltd. in Ottawa. For more information contact Neptec online at www.neptec.com.
