Artificial intelligence and embedded computing for unmanned vehicles

May 1, 2020
The latest generation of unmanned vehicles operating on land, in the air, and at sea are no longer simply remotely operated. These advanced systems have built-in intelligence to learn from their experiences and make their own decisions.

Two of the most prevalent terms in military and civilian technology represented little more than science fiction a generation ago. But today, unmanned vehicles and artificial intelligence (AI) command center stage in any discussion of future military requirements for platforms, tactics, techniques, and procedures.

Unmanned vehicles, in the form of unmanned aerial vehicles (UAVs), arrived on the scene first, but how the military wants to use them and other platforms — unmanned ground vehicles (UGVs), unmanned surface vehicles (USVs), unmanned underwater vehicles (UUVs), and unmanned space vehicles — in the future had to wait for at least rudimentary AI.

Each of those platforms has its own unique operational environment that requires specific AI capabilities to make autonomous operation practical. Sandeep Neema, program manager in the U.S. Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) in Arlington, Va., says some of the most difficult unmanned technology challenges involve UUVs.

“While each evaluation environment is distinctive, undersea environments present a unique set of challenges,” Neema explains. “In these environments, things move much more slowly, missions can take longer due to harsh environmental conditions, and the limits of physics and navigation/sensing/communications issues exacerbate the challenges. Advanced autonomy could significantly aid operations in the underwater domain.”

Smaller, faster processors and enhanced onboard memory have expanded the capabilities of embedded computing greatly across the range of unmanned vehicles, but especially on smaller platforms like hand-launched UAVs, UUVs, and UGVs operating underground.

“Big data processing is increasingly being deployed in edge applications for autonomy, quick reaction capability, and untethered cognitive functionality remote from fixed resources,” explains John Bratton, director of product marketing at Mercury Systems in Andover, Mass. “Nowhere is this more pronounced than in the rapidly emerging and well-funded autonomous platform domain.”

Scaling the data center across smart fog and edge layers requires the servers that compose it to become smaller and to resist both harsh environments and human attempts to tamper with them. Distributed deployment requires servers that are miniaturized yet well-cooled; protected from hostile environments and conditions; secure and resilient against reverse engineering, tampering, and cyber threats; trusted across hardware, software, middleware, and other intellectual property; deterministic enough for mission- and safety-critical effector control; and affordable through leveraging the best commercial intellectual property, independent research and development, and manufacturing capabilities.

The need for big processing

“As platforms become smarter and more capable, greater on-board AI and big processing in general is required to handle the torrents of sensor and situational awareness data for autonomous decision-making and effector control,” Mercury’s Bratton says. “Effectors being the highly deterministic, reliable and safe vetronics, avionics and other safety- and mission-critical functions required for platform control and mission success within the defense domain. As the number of smart platforms grows, so does the need for a greatly expanded, distributed fog layer with big processing capability that safely and efficiently manages the increased traffic.”

The evolutionary range of artificial intelligence, from machine learning to total AI, requires more and faster embedded computing as its capability increases. As the size, weight, and power consumption (SWaP) of embedded computing improves at a rapid pace, so does the ability to place more and better levels of AI on smaller and smaller platforms.

“We can recreate [the data center] in an OpenVPX system at the tactical edge,” Bratton says. “Miniaturization and cooling are critical. You need very sophisticated cooling to remove the heat associated with smaller processors. The support they need includes the ability to reduce the footprint of the circuit board. Then you have to get the heat away from that and all the components it interacts with.”

Using independent research and development (IRAD) funds, the Leidos Innovations Center (LInC) in Reston, Va., is charged with advancing the state of the art of embedded computing and AI for unmanned vehicle applications.

“Embedded computing has gotten more and more advanced, especially SWaP constraints and being able to fit into smaller packages,” says Richard Bowers, lead software engineer for unmanned surface vessels at Leidos. “The more advanced systems are much better at handling hard environments. We’re testing in the Arctic Circle, in high sea states, ensuring whatever we build can work in any environment.”

Field-programmable gate array (FPGA) embedded computing is a chief enabling technology for these kinds of unmanned vehicles. “We’ve been doing a lot with high-power FPGAs to improve our sensing capabilities, especially for smaller vehicles,” Bowers says. “Embedded computing has really been pushing the envelope of what’s possible in terms of fast response and high-level computing, which is giving us a lot more capability. We’re still using other embedded techniques — traditional computing in a smaller form factor — but the FPGAs are almost transformative rather than just shrinking the size of the computer.”

Harvesting commercial technology

Technology advances in computing, sensors, and other areas once were led by the military. Today’s techno-world, however, sees commercial companies pushing enabling technologies in applications ranging from smartphones to self-driving vehicles. Commercial companies today provide the fastest, least expensive path to solving military problems.

“The state-of-the-art today is in the commercial market,” notes Greg Tiedemann, product line director for mission systems at Mercury Systems. “There are companies that have developed very-low-power sensors behind cameras to do facial recognition, for example. How do we take those devices, put them behind huge imaging cameras, and look for objects on the ground or in the air? We’ve also deployed massive graphics processing units [GPUs] into exploitation applications. Those GPUs are very powerful to do some of the AI algorithms. So we’re doing a lot of work to apply what’s best in industry today to military problems.”
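As a concrete illustration of the GPU-backed exploitation Tiedemann describes, the sketch below runs a stock object detector on a GPU when one is available. It is a minimal example rather than any fielded system: the PyTorch/torchvision detector and the random input frame are stand-ins for a program-specific model and sensor feed.

```python
# Minimal sketch of GPU-accelerated inference for imagery exploitation.
# Assumes PyTorch/torchvision; the pretrained detector and the random frame
# are placeholders for a program-specific model and sensor feed.
import torch
import torchvision

# Use a GPU when one is present; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

# Dummy 3-channel frame standing in for one image from a wide-area camera.
frame = torch.rand(3, 1080, 1920, device=device)

with torch.no_grad():                    # inference only; no gradients needed
    detections = model([frame])[0]       # dict of boxes, labels, and scores

for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.5:                      # keep only confident detections
        print(box.tolist(), float(score))
```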

However, some chip makers do not want to sell chips directly — especially to the military — or to support someone else putting those chips on an embedded card; instead, they make the module themselves and sell that.

“There’s nothing magical to make AI deployable,” says David Jedynak, chief technology officer at the Curtiss-Wright Corp. Defense Solutions segment in Ashburn, Va. “It comes down to are there chips that can run what we need to run and fit on the platform? Are the parts available from industry and are we allowed to use them in the defense market? There are some chip makers in the broad tech industry that aren’t interested in the defense market and they just won’t talk to you. So we can’t just do anything we want with those chips. At the end of the day, it’s about the engineering support.

“The whole point of AI is the upper level DOD [U.S. Department of Defense] policy — the third offset strategy — which is why we are doing a lot of this. The DOD strategy is we are going to get machine learning and cyber-hardened equipment to the services, such as man-machine interfaces. That’s a huge driving policy force behind all this, getting AI to the battlefield to help the warfighter be more effective, using machine learning to provide greater capabilities beyond what the individual warfighter can do now.”

Embedded computing and AI

The military no longer can afford service-specific answers that may not work or may even be in conflict with inter-service and allied/coalition operations — especially given the rapid pace of technology development. That is markedly the case with embedded computing and AI.

“The point is, we try to get these capabilities into the warfighters’ hands as quickly as possible to save lives and make our defense more effective,” says Stephen Kracinovich, director of autonomy strategy at the Naval Air Systems Command (NAVAIR) Aircraft Division at Patuxent River Naval Air Station, Md.

“We in naval aviation do a great job, but collaborating with industry, academia, other government entities and the other services and domains is part of our strategy to move forward,” Kracinovich says. “To implement these capabilities, you have to have a business strategy that allows you to rapidly add new functionality.”

The trick is finding the right mix of defense industry expertise to meet design goals; no one company can go it alone. “No defense contractor can be the best of breed in everything,” Kracinovich points out. “So one goal is to make it possible to bring in third parties by designing our systems to rapidly take an automated capability and integrate it into our systems, whether from the original defense contractors or not. A lot of our warfighters know what’s out there and they expect it in the systems they use. So the idea of having a well-defined, modular architecture that allows automated capabilities and an acquisition strategy that allows us to bring in new capabilities as they come up is fundamental to what we’re doing.”
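One way to read Kracinovich’s modular-architecture goal is as a plug-in contract: third-party capabilities implement a small, fixed interface, and the host system discovers them through a registry. The Python sketch below is purely illustrative; the AutonomyCapability interface, the registry, and the sample route planner are hypothetical, not NAVAIR’s actual architecture.

```python
# Sketch of a well-defined, modular autonomy interface: third-party
# capabilities plug in against a small fixed contract, so new functions
# integrate without rewriting the host system. All names are hypothetical.
from abc import ABC, abstractmethod

class AutonomyCapability(ABC):
    """Contract every plug-in capability must satisfy."""
    @abstractmethod
    def name(self) -> str: ...
    @abstractmethod
    def execute(self, situation: dict) -> dict: ...

REGISTRY: dict[str, AutonomyCapability] = {}

def register(cap: AutonomyCapability):
    REGISTRY[cap.name()] = cap            # host system discovers it by name

class ThirdPartyRoutePlanner(AutonomyCapability):
    def name(self) -> str:
        return "route_planner"
    def execute(self, situation: dict) -> dict:
        # Trivial straight-line "plan" standing in for a vendor algorithm.
        return {"route": [situation["start"], situation["goal"]]}

register(ThirdPartyRoutePlanner())
print(REGISTRY["route_planner"].execute({"start": (0, 0), "goal": (10, 5)}))
```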

DARPA remains DOD’s primary source for advanced military technologies, and typically pursues what one former director called “Far-Out” technologies that might not become mainstream for decades. Yet in recent years, the agency has put more effort into technological breakthroughs and advanced prototypes that could be deployed to warfighters quickly. While that includes embedded computing and AI, Neema says one of the biggest areas of concern for AI is safety, to ensure that an unmanned vehicle with no human operator does what its operators intend.

Trusted artificial intelligence

That does not reflect a fear that AI might follow the path of “The Terminator’s” Skynet controller. Still, there is concern that one or more components might fail and cause unintended consequences. Guarding against that is the goal of two of Neema’s programs: Assured Autonomy, which takes an assurance approach, fixing things known not to be working properly; and Symbiotic Design for Cyber-Physical Systems (SDCPS), which will launch later this year with a focus on using AI-based approaches to design systems and build more complex and innovative designs than today’s traditional methods allow.

“DARPA’s role is building some of the early stage technologies,” Neema says. “With Assured Autonomy, we focus on looking at safety and correctness of systems that will use AI components. Past unmanned systems are, for the most part, remotely manned. To make them truly autonomous and unmanned, we need to use learning techniques in their operation. We currently don’t have the safety and correctness elements in place,” he explains.
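One widely used assurance pattern, offered here only as an illustration of the kind of safety-and-correctness machinery Neema describes and not as the Assured Autonomy design, is a runtime monitor that checks a learned controller’s commands against a hard safety envelope and substitutes a simple, verifiable fallback when that envelope is violated. In this minimal Python sketch, all limits, controllers, and names are hypothetical:

```python
# Sketch of a runtime-assurance (simplex-style) wrapper: a learned controller
# proposes commands, a monitor checks them against a fixed safety envelope,
# and a simple certified fallback takes over when the envelope is violated.
# All limits and controller internals here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Command:
    speed: float      # m/s
    turn_rate: float  # rad/s

MAX_SPEED = 5.0       # hypothetical platform limits
MAX_TURN = 0.5

def learned_controller(state) -> Command:
    # Stand-in for a neural-network policy whose behavior is hard to verify.
    return Command(speed=6.2, turn_rate=0.1)

def fallback_controller(state) -> Command:
    # Simple, analyzable behavior: slow down and hold course.
    return Command(speed=1.0, turn_rate=0.0)

def safe(cmd: Command) -> bool:
    return abs(cmd.speed) <= MAX_SPEED and abs(cmd.turn_rate) <= MAX_TURN

def step(state) -> Command:
    cmd = learned_controller(state)
    return cmd if safe(cmd) else fallback_controller(state)

print(step(state=None))  # envelope violated -> fallback command is issued
```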

“These kinds of systems have a complex design, which needs to be optimized for multiple applications,” Neema continues. “To get good, efficient, higher-performing designs, you need to co-optimize across all the designs.”

Within those safety and higher level design goals, DARPA is working to improve the SWaP parameters of embedded computing and enable the use of appropriate levels of AI in a range of unmanned vehicles — all sizes, all domains, all services, all environments, and all missions.

“We are able to put more powerful computing capabilities onboard now, but are limited by power and other constraints. From a software perspective, there are multiple classes we try to deploy on these systems — planning software, high-level control, etc. — but the state of the effort does not use AI. Collecting data onboard and bringing it back to a ground station is where we are today,” Neema says. “The main AI technique currently being used is to extrapolate data, using COTS components; other AI techniques employ machine learning to guide the vehicle in operation.

“Embedded computing is part of a larger system. The base layer is some degree of embedded control,” DARPA’s Neema continues. “The next step is the autonomy layer that provides some higher-level planning. These are the core in employing any unmanned system. AI is potentially game-changing. In a lot of manned systems, the high level integration of complex functions is provided by human operators. These typically are not possible to implement autonomously.”
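A toy sketch of that layering, with every name and gain illustrative: the autonomy layer reduces a mission to waypoints, and the base embedded-control layer converts the active waypoint into heading and speed commands each cycle.

```python
# Sketch of the layered stack Neema describes: a planning (autonomy) layer
# issues waypoints, and a base embedded-control layer turns the current
# waypoint into low-level commands. Names and gains are illustrative.
import math

def autonomy_layer(mission):
    """Higher-level planning: reduce a mission to an ordered waypoint list."""
    return list(mission["waypoints"])

def control_layer(position, waypoint, gain=0.5):
    """Base embedded control: steer toward the active waypoint."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    heading = math.atan2(dy, dx)          # desired heading, rad
    speed = gain * math.hypot(dx, dy)     # proportional speed command
    return heading, speed

waypoints = autonomy_layer({"waypoints": [(100.0, 0.0), (100.0, 50.0)]})
position = (0.0, 0.0)
for wp in waypoints:
    heading, speed = control_layer(position, wp)
    print(f"to {wp}: heading {heading:.2f} rad, speed {speed:.1f}")
    position = wp  # assume the waypoint is reached before replanning
```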

AI in dogfighting

DARPA and the U.S. Air Force also are conducting three AlphaDogfight Trials, with eight teams in a virtual competition designed to demonstrate advanced AI algorithms that can perform simulated within-visual-range air combat maneuvering. The first two competitions were in November 2019 and January 2020, with the final in early April in Las Vegas at the Air Force’s innovation hub, AFWERX, and nearby Nellis Air Force Base.

“The Trials aim to energize and expand a base of AI developers and potential proposers prior to an anticipated algorithm-development solicitation to be released under DARPA’s Air Combat Evolution (ACE) program,” according to the agency. “ACE seeks to automate air-to-air combat and build human trust in AI as a step toward improved human-machine teaming. DARPA’s vision is that with trusted AI able to manage lower-order operations, pilots could focus on higher-order strategic challenges, such as orchestrating teams of unmanned aircraft across the battlespace under the Mosaic Warfare concept.”

The AlphaDogfight Trials are related to the ACE program but are not formally part of it. Those participating in the Trials represent a wide range of research entities: Aurora Flight Sciences in Manassas, Va.; EpiSci Science Inc. in Poway, Calif.; Georgia Tech Research Institute in Atlanta; Heron Systems Inc. in California, Md.; Lockheed Martin Corp. in Bethesda, Md.; Perspecta Labs in Basking Ridge, N.J.; physicsAI in Pacifica, Calif.; and SoarTech in Ann Arbor, Mich.

“Warfighters trust things that work and this contest is the first step along the road to trusting this new kind of autonomy,” notes Lt. Col. Dan Javorsek, ACE program manager in DARPA’s Strategic Technology Office. “In the larger ACE program, we want to demonstrate that human pilots teamed with AI can achieve greater effects in aerial combat than either could achieve alone. Ultimately, ACE is about enabling human-machine teaming for complex air combat scenarios.”
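Competitions like the AlphaDogfight Trials rest on agents trained against a simulator, episode after episode, with a reward signal scoring each engagement. The sketch below shows only the shape of such a training loop; the DogfightSim class and the random placeholder policy are hypothetical stand-ins, not any competitor’s system.

```python
# Sketch of a simulation-driven training loop of the kind behind the
# AlphaDogfight Trials: an agent flies episodes in a dogfight simulator
# and scores each engagement. Everything here is a toy placeholder.
import random

class DogfightSim:
    """Toy stand-in for a within-visual-range air-combat simulator."""
    def reset(self):
        self.t = 0
        return (0.0, 0.0)                     # simplified observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == "pursue" else -0.1
        done = self.t >= 100                  # fixed-length engagement
        return (0.0, float(self.t)), reward, done

env = DogfightSim()
returns = []
for episode in range(10):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = random.choice(["pursue", "evade"])  # placeholder policy
        obs, reward, done = env.step(action)
        total += reward                              # a real agent would
    returns.append(total)                            # learn from this signal
print(f"mean return over {len(returns)} episodes: "
      f"{sum(returns) / len(returns):.1f}")
```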

In February, DOD officially adopted a set of ethical principles for AI, based on recommendations Secretary of Defense Mark Esper received from the Defense Innovation Board in October 2019. Those recommendations were the result of 15 months of discussions with AI experts in commercial industry, government, and academia, as well as public input.

Ethics and AI

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields and safeguard the rules-based international order,” Esper said at the time.

“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

According to DOD, the Department’s AI ethical principles encompass five major areas: responsible, equitable, traceable, reliable, and governable.

Responsible — DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities

Equitable — The Department will take deliberate steps to minimize unintended bias in AI capabilities

Traceable — The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources and design procedure and documentation

Reliable — The Department’s AI capabilities will have explicit, well-defined uses and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles

Governable — The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior

While the new guidelines align with President Donald Trump’s 2019 American AI Initiative to advance trustworthy AI technologies and encourage U.S. allies to do the same, some nations — notably China, Russia, Iran, and North Korea — have not adopted similar principles. That could enable them to move forward more quickly with what is considered one of the most critical developments in human history, but with significantly higher risk of unintended consequences, especially with armed unmanned vehicles.

Leading AI development

“The United States currently leads in AI research, but the race is on to develop and wield AI advances,” warned retired Marine Corps Lt. Gen. Robert M. Shea, president of the Armed Forces Communications and Electronics Association (AFCEA), in July 2018. “China has made no secret of its long-term plans to lead the world in AI by 2025, at the latest. It has its eyes on the prize and considers AI a national priority. What’s bothersome about this is that China does not follow global behavioral norms.”

One of the most ambitious U.S. efforts is Sea Hunter II, being built by Leidos as the second fully autonomous vessel in a program to develop unmanned, AI-operated ships for the U.S. Navy. It is a trimaran — a main hull and two smaller outrigger hulls — capable of autonomous navigation as it spends weeks at sea. Its mission designs include tracking enemy submarines, removing mines, detecting torpedoes, and acting as a communication relay before it has to return to port — all at a fraction of the cost of a manned ship.

With updated embedded computing throughout, Sea Hunter II will incorporate lessons learned from Sea Hunter I to further develop and mature autonomy, both as a stand-alone mission vessel and in cooperation with Sea Hunter I, which remains an active part of the program. That supports the Navy’s goal of deploying collaborative ships, manned and unmanned.

“We’re looking at attritable systems, unmanned systems that might go into harm’s way and might not come back, so there is a lot of push for lower-cost, higher-power systems. We look at virtualization systems and quantum computing,” Bowers says. “Customer demands are pushing forward on all kinds of AI — perception, decision-making, preventative maintenance, checking the health of the vehicle. There also is a lot of work on AI verification, making sure it’s doing the right thing. The harder you work to make a computer smart, the harder it is to figure out if it is doing the right thing.

“You can’t do without AI for a lot of solutions. I love the definition that AI is teaching a computer to do things that right now a person does better,” Leidos’s Bowers continues. “You’re always trying to figure out how to do that. When doing unmanned systems, you are getting people off the plane and away from the vehicle. AI enables you to do that without having to do monitoring. We’re pushing for vehicles to do high-performance in harsh environments without having someone involved at every step, watching how everything works.”

Working as a team

As AI advances and becomes an integral part of unmanned vehicles across all domains, what grows in importance is not only the ability of different platforms to communicate and coordinate, regardless of service, but also the ability of an autonomous vehicle to learn on its own, without human intervention, and then pass what it has learned on to other vehicles. While considered invaluable capabilities, these also represent a further removal of humans from the training and operations chain and an even greater demand for safety and assurance.

“We want to be able to extract information from their operations and utilize them in learning situations. But how do we maintain the safety guarantees as these systems learn and evolve?” DARPA’s Neema says. “That learning by one should be able to be shared is the expected goal, but it is not currently available.”
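One simple mechanism for that kind of sharing, offered as an illustrative assumption rather than a description of any DARPA program, is federated-style averaging, in which each vehicle trains locally and the fleet merges the resulting model weights:

```python
# Sketch of one way vehicles could share what they learn: each platform
# trains its own model weights locally, then the fleet merges them by
# averaging (federated-averaging style). The weight vectors are placeholders;
# nothing here addresses the safety guarantees Neema raises.
def merge_models(weight_sets):
    """Element-wise average of each vehicle's learned weight vector."""
    n = len(weight_sets)
    return [sum(w[i] for w in weight_sets) / n
            for i in range(len(weight_sets[0]))]

vehicle_a = [0.10, 0.80, 0.30]   # hypothetical weights learned at sea
vehicle_b = [0.20, 0.60, 0.50]   # hypothetical weights learned in trials
fleet_model = merge_models([vehicle_a, vehicle_b])
print(fleet_model)               # merged model pushed back to both vehicles
```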

In combat or hazardous environments, fully autonomous platforms almost certainly will encounter times when communications with other unmanned vehicles, manned mission components, or higher command become compromised.

“One thing the AI community has not really understood about the unmanned environment is what happens when you have no access to communications,” says Karen Zita Haigh, fellow and chief technologist at Mercury Systems. “For example, the Mars Rover has communications [with Earth], but with a significant temporal delay. Underwater, you have an acoustic modem, but the amount of data it can handle is very small and there also is latency. Being able to act without communications is critical.”
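A common way to act without communications is a watchdog that tracks time since the last uplink and drops the vehicle into an onboard fallback behavior when the link goes quiet. The sketch below is illustrative only; the timeout value and mode names are hypothetical.

```python
# Sketch of a communications watchdog of the kind Haigh's point implies:
# if no message has arrived within a timeout, the vehicle switches to an
# onboard autonomous behavior instead of waiting for instructions.
# The timeout and mode names are illustrative placeholders.
import time

COMM_TIMEOUT_S = 30.0

class CommsWatchdog:
    def __init__(self):
        self.last_heard = time.monotonic()

    def message_received(self):
        self.last_heard = time.monotonic()   # any uplink resets the clock

    def mode(self) -> str:
        silent = time.monotonic() - self.last_heard
        return ("remote_guidance" if silent < COMM_TIMEOUT_S
                else "autonomous_fallback")

wd = CommsWatchdog()
print(wd.mode())        # "remote_guidance" right after a contact
wd.last_heard -= 60.0   # simulate a minute of radio silence
print(wd.mode())        # "autonomous_fallback"
```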

Leidos’s Bowers summed up the status and future of military embedded computing and AI research by corporations, government labs, and academia: “We push a lot of boundaries, but a lot of the really exciting work is classified.”
