Pentagon issues artificial intelligence (AI) ethics guidelines to safeguard access to the latest technology

Dec. 9, 2021
The Pentagon will require third-party developers to use these guidelines when building AI, whether that AI is for an HR system or target recognition.

WASHINGTON – Thousands of Google employees protested in 2018 when they found out about their company’s involvement in Project Maven -- a controversial U.S. military effort to develop artificial intelligence (AI) to analyze surveillance video, MIT Technology Review reports.

The Military & Aerospace Electronics take:

9 Dec. 2021 -- Officials of the U.S. Department of Defense (DOD) know they have a trust problem with Big Tech -- something they must tackle to maintain access to the latest technology.

In a bid to promote transparency, the Defense Innovation Unit, which awards DOD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.

The AI ethics guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided -- both before the system is built and once it is up and running.

Related: Artificial intelligence (AI) in unmanned vehicles

Related: Artificial intelligence and machine learning for unmanned vehicles

Related: Ethical artificial intelligence (AI) must be responsible; equitable; traceable; reliable; and governable

John Keller, chief editor
Military & Aerospace Electronics
