U.S. military advisory board debates ethical use of artificial intelligence (AI) for military purposes

Nov. 4, 2019
Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of AI systems.

WASHINGTON – A Pentagon-appointed panel of tech experts says the Defense Department can and must ensure that humans retain control of artificial intelligence (AI) used for military purposes, Breaking Defense reports.

The Military & Aerospace Electronics take:

4 Nov. 2019 -- “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” the Defense Innovation Advisory Board stated as its first principle of ethical military AI.

Four other principles state that AI must be reliable, controllable, and unbiased, and must make decisions in a way that humans can actually understand. In other words, AI can’t be a “black box” of impenetrable math that makes bizarre decisions, like the Google image-recognition software that persistently classified black people as gorillas rather than human beings.

The board didn’t delve into the much-debated details of when, if ever, it would be permissible for an algorithm to make the decision to take a human life for military purposes. “Our focus is as much on non-combat as on combat systems,” says board member Michael McQuade, VP for research at Carnegie Mellon University.

Related: Federal agencies move to explore artificial intelligence (AI) ethics and technical policy

Related: Artificial intelligence (AI) in unmanned vehicles

Related: Users of autonomous weapons with artificial intelligence must follow a technological code of conduct

John Keller, chief editor
Military & Aerospace Electronics
