With robots like these, who needs humans?

March 25, 2014
THE MIL & AERO BLOG, 25 March 2014. U.S. military researchers have been reasonably successful over the past several years at designing robotic technology that enables unmanned aircraft, ships, and land vehicles to operate autonomously.

Today's technology can enable autonomous vehicles to assess their own operating conditions independently of human operators and make rudimentary decisions on their own about the best ways to proceed with their missions.

Now researchers are ready to take the next step by developing technology that enables autonomous vehicles not only to work together with other autonomous craft, but also to operate as participating members of teams of autonomous vehicles and human operators.

Earlier this month military researchers announced a couple of upcoming projects to enhance machine autonomy so that unmanned vehicles can work together as teams with or without input from human operators.

Related: Air Force asks for industry's help to enable autonomous vehicles to work together with humans

The Air Force Research Lab in Dayton, Ohio, issued a solicitation for the Formal Mission Specification and Synthesis Techniques program. This effort seeks not only to develop standardized frameworks for developing autonomous systems for military applications, but also to find ways to help humans collaborate with autonomous systems on complicated missions involving several different tasks.

This month's machine autonomy efforts don't end there. Next month scientists at the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., will brief industry about the Collaborative Operations in Denied Environment (CODE) program to enable surveillance and attack unmanned aerial vehicles (UAVs) to work together on missions involving electronic jamming, degraded communications, and other difficult operating conditions that could separate autonomous vehicles from their human operators.

The industry briefings on the CODE program will come in advance of a formal solicitation, expected next month, aimed at enabling UAVs to work together in teams and take advantage of the relative strengths of each participating unmanned aircraft.

DARPA already has demonstrated technology that enables UAVs to refuel one another in mid-air with little or no intervention from human operators. Put all this together and military leaders will have some formidable technology.

Related: DARPA readies program to enable unmanned aircraft to share information and work together

It also sounds a bit like technological democracy. By that I mean that in the future humans might not be the undisputed masters of unmanned vehicles in all circumstances. In dangerous situations or emergencies humans could take charge, of course, but in routine operations it sounds like human operators simply would be team members.

Done right, it could help bring together the best strengths of autonomous systems and their human operators. The kinds of capabilities this might bring to the table are limited only by the imagination.

Launch a long-endurance UAV on a persistent-surveillance mission, for example. This autonomous aircraft might be able to make judgments and alter its own operating areas based on where it's finding the most interesting action.

Related: Prosthetics meet robotics

This might free human operators to respond only to the most dire and immediate military or terrorist threats, rather than managing surveillance assets and second-guessing sensor-processing algorithms.

It's a far leap to get there, however. Machines that make their own decisions today are difficult for humans to trust -- particularly where lives are on the line. Increasing machine intelligence might put the shoe on the other foot; imagine a smart UAV that didn't believe its human operator, or thought him a fool.

The Air Force Formal Mission Specification and Synthesis Techniques program is trying to take man/machine trust into account. It won't solve all of those trust issues, but it's a start.

I know we've all seen our share of science-fiction movies that depict machine intelligence gone wrong. But what if we can actually make it go right? Maybe human couples in the future won't be the only ones who occasionally need relationship therapy.
