By John Rhea
After decades of research and disappointing results, U.S. military experts start to see new directions for artificial intelligence in data processing, pattern recognition, and decision aids.
Artificial intelligence (AI) is getting down to business. The field is moving away from the grandiose efforts of the past, which attempted to reproduce the full spectrum of human intelligence in machine form. Now the focus is on specific applications that employ AI principles to tackle relatively mundane tasks.
This is a natural evolution based on the availability of rapidly accelerating raw computer power. In fact, AI techniques are becoming so ubiquitous that the computers that now bear the label "Intel inside" could well be labeled "AI inside," says Alan Meyrowitz, director of the Navy Center for Applied Research in Artificial Intelligence at the Naval Research Laboratory (NRL) in Washington.
Navy experts say AI addresses the automation and extension of human intellectual skills — specifically human decision making and autonomous devices. Both have their civil and military counterparts.
In the former category, decision making, the military employs AI for planning and logistics functions. In the commercial world similar techniques help business leaders streamline retailing and financial operations by extracting relevant data — known in the AI business as data mining.
Autonomous devices, such as the robotic systems in automobile assembly lines, were well established in manufacturing industries before U.S. military experts launched their theoretical studies under the Strategic Computing Program at the Defense Advanced Research Projects Agency (DARPA) in the 1980s.
Now, with a combination of military-funded development programs and the availability of commercial off-the-shelf (COTS) technology, the military services are beginning to implement AI methods in such new generations of weapons platforms as unmanned aerial vehicles (UAVs) and autonomous submersibles to perform unmanned surveillance in shallow waters.
Robots for military and civil applications, such as the grippers on factory assembly lines, are even beginning to look anthropomorphic, Meyrowitz says.
This means that "AI has moved from science in the laboratory to technology built for useful things," Meyrowitz says. Moreover, two of the principal thrusts — data mining and machine vision — have taken on identities of their own outside the AI umbrella.
So-called "expert systems" are part of a rich AI history. In its simplest form, this computing technique uses a database of rules and rudimentary "if-then" logic processing to help humans determine the best course of action under a given set of circumstances.
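In code, the if-then approach reduces to forward chaining over a rule base: the system fires any rule whose conditions are satisfied by the known facts, adds the conclusion as a new fact, and repeats. The rules and fact names below are invented for illustration, not drawn from any fielded system:

```python
# Minimal rule-based "expert system" sketch: a rule base of if-then rules
# is matched against known facts until no new conclusion fires.
# All rule and fact names here are hypothetical.

RULES = [
    # (set of conditions that must all hold, conclusion to assert)
    ({"radar_contact", "no_iff_response"}, "contact_unidentified"),
    ({"contact_unidentified", "inbound_course"}, "raise_alert"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are satisfied by the facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"radar_contact", "no_iff_response", "inbound_course"}, RULES)
print(result)  # includes "raise_alert"
```

Note that the second rule depends on the first rule's conclusion, which is how even a flat rule base builds chains of reasoning.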
Where do expert systems fit into the AI picture today? Much touted in the past in such programs as DARPA's Pilot's Associate, modern expert systems are evolving in two directions: what Meyrowitz calls rule-based and case-based. "All expert systems are AI, but not all AI is expert systems," he comments.
The traditional rule-based systems, which were supposed to capture the best expertise of pilots and other skilled personnel, tend to be more detailed and smaller in their scope of information, Meyrowitz says.
Case-based reasoning, which is the way humans learn, involves storing previous problems and solutions and then identifying a match when a new situation arises. Meyrowitz calls this approach "more abstract and higher level," as case-based expert systems extract rules from cases over time. Some computer scientists argue that this capability represents the beginning of "machine learning." Critical to the ability to employ this method is the availability of large, affordable computer memory — and this is where AI is riding the crest of today's commercial electronics technology.
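The retrieval step of case-based reasoning can be sketched in a few lines: previous problems are stored as feature vectors alongside their solutions, and a new problem is answered from the nearest stored case. The cases and features below are invented for illustration:

```python
# Case-based retrieval sketch: store (problem, solution) pairs and answer
# a new problem from the closest previous case. Cases are hypothetical.

CASE_LIBRARY = [
    # (problem features, stored solution)
    ({"engine_temp": 110, "oil_pressure": 20}, "shut down and inspect oil pump"),
    ({"engine_temp": 85, "oil_pressure": 60}, "normal operation"),
]

def distance(a, b):
    """Euclidean distance between two feature dictionaries."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def retrieve(problem, library):
    """Return the solution of the stored case closest to the new problem."""
    _, best_solution = min(library, key=lambda case: distance(problem, case[0]))
    return best_solution

print(retrieve({"engine_temp": 105, "oil_pressure": 25}, CASE_LIBRARY))
```

A full case-based system would also adapt the retrieved solution and store the new case afterward, which is the "learning over time" the article describes.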
This capability, in turn, raises the perennial question of man-in-the-loop. NASA leaders have wrestled with this issue since the inception of the space agency in 1958 in trying to determine which space missions should be manned and which unmanned.
Hans Moravec, the robotics expert at Carnegie Mellon University in Pittsburgh, has been an outspoken advocate of using as much machine automation in space as possible — including on the International Space Station. Moravec supports machine automation on the grounds that machines are cheaper to support in space than are humans since they do not require expensive life support. Conversely, humans still outperform robots on Earth except in applications that are time-critical or hazardous, he says.
Military service leaders face a comparable problem of sorting out how much robotics they want. Meyrowitz says the long-term challenge is to evolve toward automation in battle "with confidence."
Here is where COTS technology can help. Meyrowitz divides the issue into "advisory" and "automatic" applications of AI, and cites examples from the commercial world that could be valuable to the military.
The medical profession uses advisory AI techniques to match patients' symptoms with a set of possible actions to improve treatment. Automatic AI helps control industrial processes, such as shutting down an overheating furnace.
The reality is that man is always in the loop in what Meyrowitz calls a "continuous spectrum of various degrees of connection." He likens this to the division of labor in an office. And, like an office or any other workplace, systems must link into a network so they can work together. The consequent distributed AI systems exhibit what he calls a variation of social behavior.
The problem with AI in the past is that it was oversold, notes Dave McDaniel, president of Silver Bullet Solutions in Arlington, Va. "In the 1980s expert systems were all the rage," he says, but today the focus is on specific — if less glamorous — tasks.
McDaniel stresses the need for AI techniques in sensor fusion. Driven by dramatic improvements in data storage and communications links, these techniques are becoming available to the operators of fielded military systems.
For example, he points out, the Airborne Warning and Control Systems (AWACS) aircraft, Aegis warships, and reconnaissance satellites are all picking up signals from targets. The problem is with the ambiguity of those data.
Expert systems can help the operators by estimating the locations and intentions of enemy forces. Since such AI-based systems are likely to be networked together, this means that all the forces in a theater of operations can have a common operational picture. McDaniel cites the Navy's Cooperative Engagement Capability — better known as CEC — as an example.
Moreover, the same AI techniques of expert systems and natural-language processing that drive the robotic systems in UAVs and autonomous ground vehicles share a technology base with industrial factories, he adds. In each case, the need to interpret data and take the appropriate action is functionally similar.
The COTS angle is borne out by the experience of Steve Chien, principal scientist for automated planning and scheduling in the artificial intelligence group at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. "There have been a lot of AI success stories in recent years," he says, "but you hear less about the government applications because they are dwarfed by the commercial market.
"AI has taken off since 1995," Chien continues. He calls the situation an "explosion." He focuses on machine learning, knowledge discovery (which he calls a community of its own), and data mining. "Every company needs to do data mining, but how many science applications are there?"
JPL experts are implementing what is known as the Automated Scheduling and Planning Environment system, or ASPEN, as a sort of command and control architecture in which participating scientists can do data mining to achieve sensor fusion. It in effect gets all the participants on the same page so they can tell NASA what data products they want. The spacecraft knows the constraint parameters and automatically generates the data according to plan.
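ASPEN's actual architecture is far richer than any short example, but the constraint-driven scheduling idea it embodies can be sketched in miniature. The observation requests, durations, and power figures below are invented; a greedy pass places each request into the first stretch of the timeline where the resource constraint still holds:

```python
# Toy constraint-based scheduler in the spirit of the planning systems the
# article describes (not ASPEN's actual algorithm). All numbers are invented.

REQUESTS = [
    # (activity name, duration in time units, power required)
    ("map_sector_a", 3, 40),
    ("map_sector_b", 2, 70),
    ("calibrate",    1, 20),
]

POWER_LIMIT = 80   # spacecraft power budget per time slot (hypothetical)
HORIZON = 8        # length of the planning window

def schedule(requests, horizon, power_limit):
    """Greedily assign each request the earliest start that fits the power budget."""
    timeline = [0] * horizon   # power already committed in each time slot
    plan = {}
    for name, duration, power in requests:
        for start in range(horizon - duration + 1):
            slots = range(start, start + duration)
            if all(timeline[t] + power <= power_limit for t in slots):
                for t in slots:
                    timeline[t] += power
                plan[name] = start
                break
    return plan

print(schedule(REQUESTS, HORIZON, POWER_LIMIT))
```

Even this toy version shows the payoff: given the constraint parameters, a machine can generate a conflict-free plan without a human juggling the timeline by hand.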
Experts credit ASPEN with reducing acquisition planning from a three-year process to one week for an Antarctic mapping mission conducted in September. Furthermore, ASPEN is to be used in an Explorer spacecraft mission conducted by the University of Colorado and scheduled for launch later this month.
The ultimate goal is the autonomous spacecraft, but Chien concedes, "We're not quite there yet." On the ground, however, JPL has used AI techniques effectively to improve the efficiency of its network.
One example is what JPL officials call their operations mission planner for its 26-meter antenna subnetwork. They have used AI since 1993 and have reduced the scheduling effort by 30 percent while doubling network support, according to Chien.
Another is an automated image processor for synthetic aperture radar data, which has cut manual inputs by a factor of 10 and reduced the amount of central processor time necessary for processing by 30 percent.
Also at JPL, a program known as Continuous Activity Scheduling, Planning, Execution and Replanning, or CASPER, is the real-time version of ASPEN that does planning and execution, and is designed for flight use as part of the flight software. CASPER is slated for another University of Colorado mission scheduled for launch in the summer of 2002.
These planning and scheduling systems share something in common with the commercial world — they improve logistics management, Chien says. "How do you get your groceries to the supermarket? Where do you produce? How do you ship?" he asks. Those are the kinds of questions JPL managers ask, too, and Chien says COTS hardware is now so readily available that it is no longer a limiting concern.
There are also COTS opportunities in knowledge discovery and data mining, he adds. The commonality here — and hence the available technology base — derives from such widespread commercial practices as the bar-coded merchandise in supermarkets. A principal objective in this application is to determine buying patterns to reduce costly inventory.
Similar methods help experts in financial markets understand trends, and help direct-mail marketers identify customers and thus customize e-mail marketing lists.
Looking to the future, Chien speculates that AI can help achieve automatic target recognition, or ATR. Yet the big need is to solve the sensor fusion problem. There is overlap here between the military's needs for UAVs and autonomous ground vehicles planned for urban warfare situations and the space scientists' needs for autonomous vehicles such as the recent Mars rover. "The spacecraft is the ultimate UAV," he says, and AI will play a key role in training.
Further evidence of the dual-use aspects of AI technology comes from Hatte Blejer, vice president and director of intelligent information systems at SRA International in Fairfax, Va. Her company is principally involved in providing intelligent information systems for civilian government agencies, such as the National Institutes of Health. She says she sees a logical progression of data mining into military applications.
She says the focus now is on recognizing sequences of events and discovering patterns. Digital text tends to be unstructured, while computers traditionally have needed structured information. In the past five years, according to Blejer, data mining has learned to bridge that gap.
A system that SRA experts developed for NASDAQ four years ago detected new patterns of fraud by perpetrators who, like enemy missiles, "fly below the radar." Without data mapping these incursions would go undetected, Blejer maintains. "There are not enough human eyes." The company has also worked on physical intrusion-detection systems for the military.
Moreover, these new data-mapping systems are hardware- and software-independent, she says: "NT, Unix, Linux, whatever." All the hardware and software are readily available off the shelf. The trick is to integrate all the components and thus add value in business-to-business as well as military applications.
Another facet of AI — artificial neural networks — is also susceptible to dual use, according to Gary Layton, marketing vice president at interBiz, a division of Computer Associates based in Islandia, N.Y. He says a neural network is software that functions much like the human brain in its capacity to learn, accumulate knowledge, and apply this knowledge to new situations.
Yet this requires substantial processor and memory assets, and many of the new, sophisticated neural network applications would not be possible without recent advances in the electronics industry. The company's package, known as Neugents, is built on this computer power.
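The learning behavior Layton describes can be illustrated with the simplest possible neural network, a single-neuron perceptron. This is a generic textbook sketch, not Neugents' algorithm: the neuron nudges its weights after every labeled example until its predictions match the training data.

```python
# Single-neuron perceptron sketch: the network "learns" a threshold rule
# from labeled examples by adjusting its weights. Data are invented.

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: move weights toward misclassified examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn a simple OR-like rule from four labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Production packages differ mainly in scale — many layers of such units and far more data — which is why the compute appetite Layton mentions matters.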
Layton cites such diverse applications for the package as enabling chemical companies to improve their methods of mixing chemicals and the U.S. Army to analyze the causes of engine failure in tanks and thus improve the mean time between failure.
Navy evaluates shipboard network to coordinate scheduling
OWINGS MILLS, Md. — A form of artificial intelligence is coordinating scheduling information aboard the guided missile destroyer USS McFaul (DDG 74), a Norfolk, Va.-based Arleigh Burke-class warship.
A data synchronization software package known as ScoutWare from Aether Systems Inc. of Owings Mills, Md., ties together Palm hand-held terminals from Palm Inc. in Santa Clara, Calif., with a server from Clarinet Systems in San Jose, Calif. The ship's crewmembers use the system to send and receive e-mail, conduct training and evaluations, consolidate checklists and databases, and coordinate schedules.
The wireless system is initially limited to about 150 officers and sailors using the Palm devices on the destroyer, and the purpose is to improve response times to boost combat readiness.
By coordinating sensor data into a shared network, the crewmembers should have a better picture of the operations under way. The McFaul is one of the ships able to launch Tomahawk cruise missiles, and it also performs anti-submarine warfare missions.
All the hardware is commercial off-the-shelf, and the Navy is using the McFaul as the test ship for possible application of the software package throughout the Atlantic Fleet.
The U.S. Army has also been considering this technology for use with its medical research databases, and the Air Force is looking into it for inspections of aircraft on the flight line. — J.R.
AI techniques at Air Force Research Lab to extend target identification
DAYTON, Ohio — One of the ways artificial intelligence can extend military capabilities is in interpreting "non-literal sensors," says Dale Nelson, chief of the target recognition branch in the Sensors Directorate of the Air Force Research Laboratory at Wright-Patterson Air Force Base in Dayton, Ohio.
"People don't see in X-ray or listen to sonar signals," he says, and that is a role he envisions for AI in target recognition. Synthetic aperture radar, or SAR, is not like anything on the human body. "SAR looks like a photo, but it's not," Nelson notes.
Yet the medical community uses similar AI techniques for X-ray interpretation, and the military can put this technology to work for its unique missions, such as distinguishing a tank from a school bus.
Nelson, who quips that AI is "what you haven't done yet," is investigating control programs based on what are known as genetic algorithms to enable aircraft to get smarter after each battle. Like the human genes for which they are named, genetic algorithms permit knowledge to be passed down from one generation of weapon system to another.
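The generational mechanism Nelson describes can be sketched in miniature. In this toy genetic algorithm the "fitness" of a bit-string genome simply counts its ones, standing in for any real measure of battlefield performance; selection, crossover, and mutation carry the fittest traits into the next generation:

```python
# Genetic-algorithm sketch: candidate solutions encoded as bit strings
# evolve over generations. The fitness function (count of ones) is a
# stand-in for a real performance measure; all parameters are illustrative.

import random

random.seed(1)  # fixed seed so the run is repeatable

def fitness(genome):
    return sum(genome)

def evolve(pop_size=20, length=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)         # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children                 # knowledge passes to next generation
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fittest parents survive intact each cycle, the best score never regresses — the population accumulates "knowledge" across generations, which is the property Nelson wants weapon systems to exploit.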
The effort also involves data mining. "The Air Force, like companies, has vast databases and we need to get useful patterns," Nelson says. The approach is to organize the data into a table in which each column is a different attribute of the target and each row is the target.
By using AI techniques to reduce the number of columns, the idea is to find the minimum number of features to identify all targets. He estimates this data reduction can cut the 128 candidate attributes to about 25 important attributes for target recognition.
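The column-reduction idea can be illustrated with a toy attribute table. The targets and boolean attributes below are invented; a greedy pass keeps adding the most informative column until every target row has a unique signature:

```python
# Greedy feature-selection sketch: find a small set of attribute columns
# that still distinguishes every target row. Table contents are invented.

# rows = targets, columns = attributes (boolean for simplicity)
TABLE = {
    "tank":       (1, 0, 1, 1, 0),
    "truck":      (1, 0, 0, 1, 0),
    "school_bus": (0, 1, 0, 1, 1),
    "apc":        (1, 1, 1, 0, 0),
}

def distinguishes(cols, table):
    """True if the chosen columns give every target a unique signature."""
    signatures = {tuple(row[c] for c in cols) for row in table.values()}
    return len(signatures) == len(table)

def greedy_select(table):
    n = len(next(iter(table.values())))
    chosen = []
    while not distinguishes(chosen, table):
        # add the column that yields the most distinct signatures so far
        best = max(
            (c for c in range(n) if c not in chosen),
            key=lambda c: len({tuple(r[k] for k in chosen + [c]) for r in table.values()}),
        )
        chosen.append(best)
    return chosen

cols = greedy_select(TABLE)
print(cols, len(cols))
```

Real attribute data are noisy and continuous rather than boolean, so fielded systems use statistical feature-selection methods, but the goal is the same: shrink 128 candidate columns to the few that actually separate the targets.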
"We've picked the low hanging fruit, and now we need automatic learning," Nelson notes. "The organism that doesn't learn is dead." — J.R.
Army scientist: computer power not the answer to all AI needs
FORT MONMOUTH, N.J. — Raw computing power alone is not the answer to applying artificial intelligence techniques to the military's needs, says a U.S. Army software expert.
Instead, specialists need a combination of AI techniques, such as neural nets and expert systems, says Gerald Powell, deputy director for the operations directorate at the Software Engineering Center of the Army's Communications and Electronics Command (CECOM) at Fort Monmouth, N.J.
The need for a combination of computational technologies is especially true for the Army's logistics planning to support the ammunition and petroleum needs of deployed forces, Powell says. Moreover, he insists, this technology can be shared with the commercial sector.
Despite the big AI push among the services and the Defense Advanced Research Projects Agency in the 1970s and '80s, Powell discerns a slowdown in the rate of research these days as the commercial sector takes the lead and the military attempts to focus on its unique requirements.
The solution he envisions is integrating such technologies as neural networks and genetic algorithms to tackle pressing problems like automatic target recognition. "We need to know what types of tanks, enemies and friendlies, are on the battlefield to avoid fratricide," Powell says.
As an example of why raw computing power is not the whole solution, he cites a 1956 study at Dartmouth College that found the game of chess offers on the order of 10 to the 120th power possible games. This is far beyond the capability of any human — or even any conceivable supercomputer.
To illustrate that level of complexity, physicists estimate that the total number of subatomic particles in the universe is about 10 to the 85th power. Or, to put it even more dramatically, if there were 10 to the 86th power elementary particles, there would be nine more universes just like this one. — J.R.
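Taking the article's figures at face value, the scale of the gap is easy to check:

```python
# Sanity-checking the cited orders of magnitude.
chess_games = 10**120   # cited size of the chess game space
particles = 10**85      # cited count of subatomic particles in the universe

# The chess space exceeds the particle count by a factor of 10^35 —
# even one move evaluated per particle would barely scratch it.
print(chess_games // particles)

# And 10^86 particles is indeed ten universes' worth: this one plus nine more.
print(10**86 // particles)
```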
NASA robot in form of snake planned to penetrate inaccessible areas
MOUNTAIN VIEW, Calif. — NASA engineers are developing a new type of snake-shaped robot at the NASA Ames Research Center in Mountain View, Calif., to explore areas where a wheeled robotic rover might get stuck or topple over.
Engineers have built a mechanical prototype of what NASA leaders are calling the "snakebot," says Gary Haith, lead engineer on the project. Yet the NASA team is now working on the sensors and microcontrollers for operational use. Team members also plan to write software to enable the device to learn by experience in crawling over various surfaces and climbing over obstacles.
The NASA center is working with the nearby Xerox Palo Alto Research Center in Palo Alto, Calif., where Mark Yim developed a slightly different version known as the "polybot." The next step is to simulate the snakebot in a computer program to develop control procedures.
The purpose is not to replace NASA's existing wheeled robots, such as those used to explore the surface of Mars, but to complement them with a smaller, cheaper device that can operate independently in tight places. One of the advantages is the robot's ability to crawl off a spacecraft lander without a ramp, Haith says.
Another benefit is reduced weight, which is crucial in spacecraft missions. — J.R.