Industry asked for trusted computing shielding of artificial intelligence (AI) in information warfare

July 6, 2020
A deceptive information attack is an enemy attempt to alter information that an artificial intelligence system uses to learn, develop, and mature.

ARLINGTON, Va. – U.S. military researchers are reaching out to industry to prevent enemy attempts to corrupt or spoof artificial intelligence (AI) systems by subtly altering or manipulating information the AI system uses to learn, develop, and mature.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) issued a solicitation on Wednesday (DARPA-PA-19-03-09) for the Reverse Engineering of Deceptions (RED) project, which aims to reverse-engineer the toolchains behind information deception attacks.

A deceptive information attack is an enemy attempt subtly to alter or manipulate information that a human or machine learning system uses, to tilt a computational outcome in the adversary's favor.

Machine learning techniques are susceptible to enemy information warfare attacks at training time and when deployed. Similarly, humans are susceptible to being deceived by falsified images, video, audio, and text. Deception plays an increasingly central role in information warfare attacks.
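To make the deployed-system threat concrete, the sketch below shows a fast-gradient-sign-style evasion attack, a well-known class of adversarial machine learning attack in which tiny input perturbations flip a classifier's output. The model, input, and label here are illustrative placeholders, not anything specified in the RED solicitation.

```python
# Minimal sketch of an evasion attack on a deployed model (FGSM-style).
# The classifier, input image, and label are hypothetical stand-ins.
import torch
import torch.nn as nn

# Placeholder for a deployed image classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03  # perturbation budget: small enough to be hard to notice

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                              # its true label

# Compute the gradient of the loss with respect to the input ...
loss = loss_fn(model(x), y)
loss.backward()

# ... then nudge every pixel in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```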

Related: Research, applications, talent, training, and cooperation frame report on artificial intelligence (AI)

The Reverse Engineering of Deceptions (RED) effort will develop techniques that automatically reverse engineer the toolchains behind attacks such as multimedia falsification, enemy machine learning attacks, or other information deception attacks.

Recovering the tools and processes for such attacks provides information that may help identify an enemy. RED will seek to develop techniques that identify attack toolchains automatically, and develop scalable databases of attack toolchains.

RED Phase 1 will produce trusted-computing algorithms to identify the toolchains behind information deception attacks. The project's second phase will develop technologies for scalable databases of attack toolchains to support attribution and defense.

Related: Air Force researchers ask industry for SWaP-constrained embedded computing for artificial intelligence (AI)

The project also seeks to develop techniques that require little or no a priori knowledge of specific deception toolchains; automatically cluster attack examples together to discover families of deception toolchains; generalize across several information deception scenarios, such as enemy machine learning and media manipulation; learn unique signatures from just a few attacks; and scale to internet volumes of information.
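As a rough illustration of the clustering goal, the sketch below groups attack artifacts into candidate toolchain families with a density-based clusterer, which discovers the number of families on its own rather than matching known signatures. The feature vectors are synthetic stand-ins, not DARPA's representation of an attack.

```python
# Hedged sketch: grouping attack artifacts into candidate "toolchain families"
# by clustering feature vectors, with no a priori knowledge of the toolchains.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in features: imagine each row summarizes residual statistics of one
# falsified media sample (e.g., compression and noise fingerprints), and each
# family of samples was produced by the same hidden toolchain.
family_a = rng.normal(loc=0.0, scale=0.1, size=(40, 8))
family_b = rng.normal(loc=1.0, scale=0.1, size=(40, 8))
samples = np.vstack([family_a, family_b])

# DBSCAN needs no preset cluster count, which fits the goal of discovering
# unknown families; -1 labels mark outliers that fit no family.
features = StandardScaler().fit_transform(samples)
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(features)

print("candidate toolchain families found:", len(set(labels) - {-1}))
```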

Interested companies should upload eight-page proposals no later than 30 July 2020 to the DARPA BAA website at https://baa.darpa.mil/. Email questions or concerns to Matt Turek, the DARPA RED program manager, at [email protected].

More information is online at https://beta.sam.gov/opp/f108cad02f824285af5ca85e1f7481f4/view.

About the Author

John Keller | Editor-in-Chief

John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.
