IARPA seeks to apply trusted computing to artificial intelligence and machine learning models

Dec. 14, 2018
WASHINGTON – U.S. intelligence experts are asking industry to develop trusted computing methods of safeguarding models used to create artificial intelligence and machine learning systems to ensure that models compromised by cyber attacks do not inadvertently reveal sensitive information.

Officials of the U.S. Intelligence Advanced Research Projects Agency (IARPA) in Washington began releasing details of the upcoming Secure, Assured, Intelligent Learning Systems (SAILS) program this week for industry comment. A formal solicitation is expected in early 2019.

Artificial intelligence and machine learning technologies can help streamline business processes and aid in decision making, yet these systems are vulnerable to cyber attacks that can compromise people's privacy, IARPA researchers say.

Attacks against privacy aim to reveal information used to train artificial intelligence and machine learning models, particularly through what researchers call model inversion attacks and membership inference attacks.

Model inversion attacks aim to reconstruct the data used to train a model, like a recognizable feature of an individual’s face. Membership inference attacks, meanwhile, aim to determine whether a specific person's data was used in training the model, which has the potential to reveal the identity of that person.
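To illustrate the idea, a membership inference attack in its simplest form compares how confident a model is on data it has seen during training against data it has not. The sketch below is a hypothetical example, not a technique named in the SAILS documents: it trains a deliberately overfit classifier on synthetic data and flags likely training members by thresholding the model's confidence.

```python
# Minimal illustration of a confidence-threshold membership inference attack.
# Hypothetical example only; not IARPA's or any SAILS performer's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: "members" were used to train the target model, "non-members" were not.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]        # training members
X_non, y_non = X[1000:], y[1000:]        # held-out non-members

# An overfit target model leaks membership through overconfident predictions.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

def attack(model, X, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds a threshold."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence > threshold

# Members tend to be flagged far more often than non-members when the model overfits.
print("members flagged:    ", attack(target, X_mem).mean())
print("non-members flagged:", attack(target, X_non).mean())
```

In this toy setup the attacker needs nothing more than query access to the model's confidence scores, which is why such attacks are considered a practical privacy risk.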

Related: DISA asks industry for trusted computing ways of using artificial intelligence (AI) to detect malware

The SAILS program is looking for ways to create artificial intelligence and machine learning models able to resist attacks against privacy, and give model creators confidence that their trained models will not inadvertently reveal sensitive information.

SAILS will focus on speech, text, and image data as potential attack avenues. Program participants will develop cyber defenses such as new model architectures, new training procedures, or new pre- and post-processing steps to resist attacks against privacy; a simple post-processing illustration follows below.
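As one illustration of a post-processing defense, and not a technique named in the SAILS announcement, the sketch below coarsens the confidence scores a model releases, blunting the overconfidence signal that the membership inference example above exploits.

```python
# Minimal sketch of a post-processing defense: cap and re-normalize the
# per-class scores a model releases. Hypothetical illustration only.
import numpy as np

def harden_output(probabilities, floor=0.1, ceiling=0.9):
    """Clip per-class scores into [floor, ceiling] and re-normalize each row."""
    probs = np.clip(probabilities, floor, ceiling)
    return probs / probs.sum(axis=1, keepdims=True)

# Example: overconfident raw scores become much less revealing.
raw = np.array([[0.99, 0.01],
                [0.55, 0.45]])
print(harden_output(raw))
```

Defenses of this kind trade some utility of the released scores for reduced leakage, which is the kind of balance the SAILS program is asking industry to improve on.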

For now, IARPA researchers want industry to review and comment on the draft broad agency announcement for the SAILS program, which can be found online at https://www.fbo.gov/index?s=opportunity&mode=form&tab=core&id=ff29abbc3c77100b0d55c441fbe9b0ff.

Comments and questions are due by 31 Jan. 2019 via email to [email protected].

More information is online at https://www.fbo.gov/notices/ff29abbc3c77100b0d55c441fbe9b0ff.
