AI in the military: opportunities – and challenges

Jan. 2, 2018

Artificial Intelligence (AI) has achieved a very high profile in the last two years. For better or worse, its impact on our everyday lives is ever-increasing; examples include targeted advertising on social media, smart assistants and, of course, the prospect of self-driving cars. Beyond these highly visible applications, there are arguably more beneficial uses, such as medical diagnosis, language translation and security (both cyber and physical). There is no doubt that AI will continue to transform many aspects of society.

There are also many concerns regarding the rapid growth of AI, especially its use in autonomous vehicles and robots: Elon Musk is of the opinion that it is the “biggest risk we face as a civilisation” - which is slightly concerning, considering that Tesla’s Autopilot is itself an example of deployed AI…

This brings me to the use of AI in the military, and how viable it is there. Setting aside the moral argument over military AI use cases, there are potentially huge benefits to be gained from applying AI to military robotic and autonomous systems: it can help keep our troops out of harm’s way and improve the nation’s security.

Synergies

The question is: how practical is the technology in a military environment? Bear in mind that a deployed system is not connected to the cloud and must operate reliably across a range of harsh environmental conditions, within a limited power envelope. The problem has synergies with what many car manufacturers face today: how do they package their “proof of concept” electronic solution into something that can be deployed efficiently, reliably, safely and securely in large numbers?

The problem is not easy: current proof-of-concept solutions typically rely on “high-end” consumer GPUs such as NVIDIA’s GTX 1080. These powerful GPUs are great sitting in your PC or server rack, but they are absolutely not designed to operate reliably in the harsh conditions of the battlefield, on board a deployed platform subject to enormous shock and vibration, or in high temperature environments such as the desert. They also consume on the order of 200 Watts - unaffordable in power terms for the typical military platform.

What is needed are more rugged, lower power solutions that still provide enough performance to host sophisticated AI-based applications.

Abaco has developed two such platforms.

First, the GVC1000 offers a lower power, rugged system solution built around NVIDIA’s Jetson TX2 (Tegra) technology. Its unique 10GBASE-T capability allows high-speed data ingest from multiple Ethernet sensors, and it provides up to 1.3 TeraFLOPS of performance for neural networks within a 35 Watt power envelope.
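To make “high-speed data ingest from multiple Ethernet sensors” a little more concrete, here is a minimal sketch of the software side of that ingest path, in Python. The sensor address, port and frame geometry are hypothetical placeholders invented for illustration; they are not a specification of the GVC1000 or of any real sensor.

    # Minimal sketch: receive raw frames from an Ethernet sensor over UDP and
    # stage them for GPU inference. Address, port and frame geometry are
    # hypothetical placeholders, not real device parameters.
    import socket
    import numpy as np

    SENSOR_ADDR = ("0.0.0.0", 5005)   # hypothetical sensor stream endpoint
    FRAME_SHAPE = (512, 640)          # hypothetical rows x cols, 8-bit mono
    FRAME_BYTES = FRAME_SHAPE[0] * FRAME_SHAPE[1]

    def frames(sock):
        """Yield frames as NumPy arrays, reassembled from UDP datagrams."""
        buf = bytearray()
        while True:
            packet, _ = sock.recvfrom(65536)
            buf.extend(packet)
            while len(buf) >= FRAME_BYTES:
                frame = np.frombuffer(bytes(buf[:FRAME_BYTES]), dtype=np.uint8)
                del buf[:FRAME_BYTES]
                yield frame.reshape(FRAME_SHAPE)

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(SENSOR_ADDR)
        for i, frame in enumerate(frames(sock)):
            # In a real system, this is where each frame would be normalized
            # and handed to the neural network running on the GPU.
            print(f"frame {i}: mean intensity {frame.mean():.1f}")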

Second, the GVC2000 also provides a rugged solution, pairing a 12-core Intel Xeon-D processor with a 640-core NVIDIA GPU, along with multiple I/O and storage options, to deliver a flexible, high performance 2-TeraFLOPS compute platform that can mix AI-based applications with more traditional CPU-based applications in a package that draws less than 200 Watts.

Ideal

Both are ideal as compute platforms for autonomous vehicles, or autonomous sub-components on a vehicle such as a turret.

But wait… the hardware platform is only half the story here. Deploying AI applications is all about the software. Conveniently, both platforms fully support NVIDIA’s ComputeWorks product suite, which provides the libraries and tools needed to ease the development and deployment of AI and, more specifically, neural network solutions.
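Before building anything on top of that stack, it is worth confirming that the GPU and its libraries are actually visible on the target. Here is a minimal sketch of such a sanity check, using PyTorch purely as a convenient illustration - PyTorch is an assumption on my part, not part of the toolchain described in this article.

    # Minimal sketch: sanity-check the CUDA/cuDNN stack on the target hardware.
    # PyTorch is used purely for illustration (an assumption, not part of the
    # toolchain described in the article).
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {torch.cuda.get_device_name(0)}")
        print(f"CUDA capability: {props.major}.{props.minor}")
        print(f"GPU memory: {props.total_memory / 1e9:.1f} GB")
        print(f"cuDNN version: {torch.backends.cudnn.version()}")
    else:
        print("No CUDA-capable GPU visible - check drivers and CUDA install.")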

However, even with this toolset at your disposal, there are still a number of pain points on the way to a deployed neural network. For example, to deploy a convolutional neural network-based computer vision application that can detect objects of interest, you need to work through a number of time-consuming and often frustrating steps (sketched in code after the list below). These steps are:

  1. Collect and label data, and import it into a deep learning framework.
  2. Pick a neural network model, train it and test its effectiveness.
  3. Optimize for deployment (create an inference engine).
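Here is a minimal sketch of that workflow, assuming PyTorch and torchvision as the deep learning framework, a simple image classifier standing in for a full object detector, and an ONNX export standing in for inference-engine creation. The dataset paths, class structure and hyperparameters are placeholders for illustration; none of this is the specific tooling described in this article.

    # Minimal sketch of the three steps above, using PyTorch/torchvision as an
    # illustrative framework (an assumption - not the specific toolchain in
    # the article). Paths, class count and hyperparameters are placeholders.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Step 1: collect and label data, and import it into the framework.
    # Assumes images sorted into per-class folders under data/train and data/val.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("data/train", transform=preprocess)
    val_set = datasets.ImageFolder("data/val", transform=preprocess)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)

    # Step 2: pick a model, train it and test its effectiveness.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(5):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        print(f"epoch {epoch}: validation accuracy {correct / total:.2%}")

    # Step 3: optimize for deployment - export to ONNX, which an inference
    # engine (for example TensorRT) can then consume on the target hardware.
    dummy = torch.randn(1, 3, 224, 224, device=device)
    torch.onnx.export(model, dummy, "classifier.onnx", opset_version=13)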

Abaco has developed some unique software tools and capabilities within our AXIS ImageFlex product that help you breeze through aspects of these time-consuming stages, as well as visualize and fuse your sensor information. 

Put these hardware and software components together and you have a rugged, deployable platform for AI-based autonomy solutions. But don’t just take my word for it: my team has recently developed and demonstrated these capabilities on an AI-enabled autonomous vehicle of our own, using these very same building blocks - take a look at Marvin in action here. We can help you do the same!

About the Author

David Tetley | Engineering Manager

David is engineering manager of Abaco Systems' HPEC Center of Excellence in Boston, responsible for both the AXIS software suite and HPEC system development. He has a background in military signal and image processing, starting his career as a scientific officer for the UK Ministry of Defence working with lasers and missile seeker technology, and then moving into software development in the days of the Texas Instruments C40 and Analog Devices SHARC processors. As engineering manager in Boston, he has led the development of the AXIS software suite and helped shape Abaco’s HPEC strategy.
