Ethical artificial intelligence (AI) must be responsible, equitable, traceable, reliable, and governable

Nov. 4, 2019
The principles were tested in a classified environment to see how they compare with what the military perceives as the current applications of AI on the battlefield.

WASHINGTON – A military advisory committee on Oct. 31 endorsed a list of principles for the use of artificial intelligence (AI) by the U.S. Department of Defense (DOD), contributing to an ongoing discussion on the ethical use of AI and AI-enabled technology for combat and non-combat purposes, C4ISR.net reports.

The Military & Aerospace Electronics take:

4 Nov. 2019 -- The report is the result of a 15-month study conducted by the Defense Innovation Board (DIB), which included collecting public commentary, holding listening sessions, and facilitating roundtable discussions with AI experts. The DOD also formed a DOD Principles and Ethics Working Group to support the DIB's efforts.

For the purposes of the report, AI was defined as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task,” which the DIB said is comparable to how the department has thought about AI over the last four decades.

The list of principles comprises five terms: responsible, equitable, traceable, reliable, and governable.

Related: U.S. military advisory board debates ethical use of artificial intelligence (AI) for military purposes

Related: Unmanned submarines seen as key to dominating the world’s oceans

Related: Shipboard electronics tune up for future conflicts

John Keller, chief editor
Military & Aerospace Electronics
