Human reasoning necessary in a new era of artificial intelligence (AI) weapons, rather than an outright ban

April 27, 2021
World treaties should permit AI weapons in surveillance, navigation, and target discrimination, but not in target acquisition and decision to kill.

WASHINGTON – U.S. President Joe Biden should not heed the advice of the National Security Commission on Artificial Intelligence (NSCAI) to reject calls for a global ban on autonomous weapons. Instead, Biden should work on an innovative approach to prevent humanity from relinquishing its judgment to algorithms during war, Eurasia Review reports.

The Military & Aerospace Electronics take:

27 April 2021 -- The NSCAI maintains that a global treaty prohibiting the development, deployment, and use of artificial intelligence (AI)-enabled weapons systems is not in the interests of the United States and would harm international security. It argues that Russia and China are unlikely to abide by such a treaty, and that a global ban would increase pressure on law-abiding nations while enabling others to use AI military systems in unsafe and unethical ways.

This is an unsophisticated way of thinking through a complex problem. Negotiations and conversations on this matter have been under way at the United Nations since 2014. The voices of AI scientists, Nobel Peace Laureates, and civil society were not represented in the NSCAI's advice. If science argues against AI weapons, it is difficult to maintain that their development and use would benefit U.S. interests and international security.

Instead of following the NSCAI's advice, President Biden could take the lead in creating an innovative international treaty requiring human reasoning and control over AI military systems. AI could continue to be used in some aspects of military operations, including mobility, surveillance and intelligence, homing, navigation, interoperability, and target image discrimination. But when it comes to target acquisition and the decision to kill, states would be required by the treaty to retain human decision-making. This positive obligation should be legally binding.

Related: Artificial intelligence (AI) represents the best of computing without the drawbacks of human reasoning

Related: DARPA VIP program asks industry to create artificial intelligence (AI) algorithms for human-like reasoning

Related: The next 'new frontier' of artificial intelligence

John Keller, chief editor
Military & Aerospace Electronics
