SRI to develop new algorithms to help detect and defeat false information disseminated in media reports.

Aug. 24, 2020
SemaFor will develop algorithms to detect falsified text, audio, images, and video to defend against large-scale automated disinformation attacks.

ARLINGTON, Va. – Intelligence experts at SRI International, Menlo Park, Calif., will help U.S. military researchers detect and defeat automated enemy disinformation campaigns launched by manipulating the Internet, news, and entertainment media.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have announced an $11 million contract to SRI for the Semantic Forensics (SemaFor) project.

SemaFor will develop technologies to detect, attribute, and characterize multi-modal falsified media like text, audio, images, and video automatically to defend against large-scale automated disinformation attacks.

SRI International joins Kitware Inc. in Clifton Park, N.Y., which won an $11.9 million SemaFor contract on 29 July 2020, and PAR Government Systems Corp. in Rome, N.Y., which won an $11.9 million SemaFor contract from DARPA last June.

Related: ASTARTE project to create a strong common operating picture using artificial intelligence (AI) algorithms

Statistical detection techniques have been successful so far, yet media generation and manipulation technology is advancing rapidly. Purely statistical detection methods are quickly becoming insufficient for detecting falsified media.

Detection techniques that rely on statistical fingerprints, moreover, can often be fooled with limited additional resources like algorithm development, data, or computing power.

Yet existing automated media manipulation and generation algorithms rely heavily on purely data-driven approaches and are prone to making semantic errors. Faces generated by a generative adversarial network (GAN), for example, may have semantic inconsistencies such as mismatched earrings, which give defenders an opportunity to gain an asymmetric advantage.

A suite of semantic inconsistency detectors would increase the burden on media falsifiers by requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.

Related: General Dynamics receives $19.5 million contract to provide 15 SEWIP shipboard EW systems

SemaFor seeks to develop semantic technologies for analyzing media. Semantic detection algorithms will determine if media is generated or manipulated. Attribution algorithms will infer if media originates from a particular organization or individual. Characterization algorithms will reason about whether media was generated or manipulated for malicious purposes.

The results of detection, attribution, and characterization algorithms can help develop explanations for system decisions, and rank assets for analyst review. These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.

On this contract, SRI will do the work in Menlo Park, Calif.; Baltimore; Buffalo, N.Y.; and Pittsburgh, and should be finished by July 2024.

For more information contact SRI International or DARPA online.
