DARPA artificial intelligence project aims to help humans and machines get along better

Sept. 15, 2016

ARLINGTON, Va. - U.S. military researchers are launching an artificial intelligence and machine learning program to help humans and machines get along better than ever before. Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have released a solicitation (DARPA-BAA-16-53) for the Explainable Artificial Intelligence (XAI) project.

XAI centers on machine learning and human-computer interaction, and seeks to create a suite of machine-learning techniques that produce explainable models which, when combined with explanation techniques, enable end users to understand, trust, and manage the emerging generation of artificial intelligence (AI) systems.

Dramatic success in machine learning has led to an explosion of AI capabilities that can produce autonomous systems that perceive, learn, decide, and act on their own. Although these systems offer tremendous benefits, their effectiveness is limited by the machine's inability to explain its decisions and actions to humans. XAI seeks machine-learning and human-computer interaction tools to enable an end user who depends on decisions, recommendations, or actions produced by an AI system to understand the rationale behind those decisions.

Military researchers are trying to clear up any misunderstandings that might arise between humans and smart machines.

XAI tools will help provide end users with an explanation of individual decisions, enable users to understand the system's overall strengths and weaknesses, convey an understanding of how the system will behave in the future, and perhaps show how to correct the system's mistakes.
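As a rough illustration only, not part of the DARPA solicitation, the short Python sketch below shows one well-known way to explain an individual decision: train an inherently interpretable model, in this case a shallow decision tree, and print the rule path it followed for a single input. The dataset, the model choice, and the wording of the printed explanation are all illustrative assumptions.

# Illustrative sketch only: a shallow decision tree is trained on a public
# dataset and the rule path behind one prediction is printed in plain terms.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

sample = iris.data[50:51]                # the single input the end user asks about
path = model.decision_path(sample)       # nodes visited while classifying this input
tree = model.tree_

print("Prediction:", iris.target_names[model.predict(sample)[0]])
for node in path.indices:
    if tree.children_left[node] == tree.children_right[node]:
        continue                         # leaf node: no test to report
    name = iris.feature_names[tree.feature[node]]
    threshold = tree.threshold[node]
    value = sample[0, tree.feature[node]]
    relation = "<=" if value <= threshold else ">"
    print(f"  because {name} = {value:.2f} {relation} {threshold:.2f}")

An XAI-style system would go far beyond this toy, but the output format, a prediction paired with the reasons behind it, is the kind of rationale the solicitation describes.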

The XAI project addresses three related research and development challenges: how to produce more explainable models; how to design the explanation interface; and how to understand the psychological requirements for effective explanations.

For the first challenge, XAI seeks to develop machine-learning techniques to produce explainable models. For the second challenge, the program anticipates integrating state-of-the-art human-computer interaction (HCI) techniques with new principles, strategies, and techniques to generate effective explanations. For the third challenge, XAI plans to summarize, extend, and apply current psychological theories of explanation.
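To make the second challenge concrete, here is a hedged sketch, again not drawn from the solicitation, of one simple form an explanation interface could take: pair a model's score with per-feature contributions and render them as short statements a user can read. The attribution rule (coefficient times standardized feature value) is only meaningful for linear models and is an illustrative choice; the dataset and the phrasing of the output are assumptions as well.

# Illustrative sketch only: a linear model's prediction is paired with the three
# features that pushed the score hardest, phrased as short statements for a user.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
pipe.fit(data.data, data.target)

x = data.data[0:1]                                   # one case the user asks about
score = pipe.predict_proba(x)[0, 1]                  # model's confidence in class 1 ("benign")
scaled = pipe.named_steps["standardscaler"].transform(x)[0]
weights = pipe.named_steps["logisticregression"].coef_[0]
contributions = weights * scaled                     # per-feature push on the log-odds

print(f"Model score for 'benign': {score:.2f}")
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    direction = "raised" if contributions[i] > 0 else "lowered"
    print(f"  {data.feature_names[i]} {direction} the score ({contributions[i]:+.2f})")

The hard part the program targets is not computing such attributions but presenting them so that users actually calibrate their trust, which is why the HCI and psychology work sits alongside the machine learning.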

The program has two technical areas: the first to develop an explainable learning system that contains an explainable model and an explanation interface, and the second to develop and apply psychological theories of explanation. The XAI program will last four years and is scheduled to start in May 2017. Several contractors will be involved, including at least one with expertise in the psychology of explanation.

More information is online at http://bit.ly/2bOCwJg.
