Army seeks ways to defeat artificial intelligence (AI) spoofing attacks in facial recognition systems

Feb. 20, 2020
Backdoors can be implanted into a database and labeled in a way that trains the algorithm to “break” when it comes across the image in the real world.

ADELPHI, Md. – The Army has many data problems, but when it comes to the data that underlies facial recognition, one stands out: enemies want to poison the well, Fedscoop reports.

The Military & Aerospace Electronics take:

20 Feb. 2020 -- Adversaries are becoming more sophisticated at providing “poisoned,” or subtly altered, data that will mistrain artificial intelligence (AI) and machine learning algorithms.

To try to safeguard facial recognition databases from these so-called backdoor spoofing attacks, the Army is funding research to build defensive software that mines its databases for tampered data.

Since deep learning algorithms are only as good as the data they rely on, adversaries can use backdoor attacks to leave the Army with untrustworthy AI, or even to bake in the ability to kill an algorithm whenever it sees a particular image, or "trigger."
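As a rough illustration of the mechanism described above (the data, labels, and patch geometry here are invented for the sketch and are not from the Army program), a trigger-based poisoning step stamps a small patch onto a fraction of training images and mislabels them, so that a model trained on the set can learn to fire the attacker's chosen label whenever the patch appears:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 100 tiny 8x8 grayscale "face" images with 0/1 labels.
images = rng.random((100, 8, 8))
labels = rng.integers(0, 2, size=100)

TRIGGER_VALUE = 1.0  # a bright 2x2 corner patch serves as the trigger


def poison(images, labels, fraction=0.1, target_label=0):
    """Implant a trigger patch into a fraction of images and relabel them.

    A classifier trained on the altered set can associate the patch with
    `target_label`, misclassifying any real-world image that carries it.
    """
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images[idx, :2, :2] = TRIGGER_VALUE  # stamp the trigger
    poisoned_labels[idx] = target_label           # flip the label
    return poisoned_images, poisoned_labels, idx


p_images, p_labels, idx = poison(images, labels)
```

Defensive software of the kind the Army is funding would scan for exactly this signature: a cluster of near-duplicate pixel patterns whose labels disagree with the rest of the data.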

Related: Military organizes for cyber warfare

Related: Today's battle for the electromagnetic spectrum

Related: The new world of counter-drone technology

John Keller, chief editor
Military & Aerospace Electronics
