Researchers take aim at new technologies to detect and defeat disinformation based on media manipulation
SemaFor seeks to detect and defeat falsified text, audio, images, and video to defend against large-scale automated disinformation attacks.
ARLINGTON, Va. – U.S. military researchers will brief industry later this month on a project to detect and defeat automated enemy disinformation campaigns launched by manipulating the Internet, news, and entertainment media.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., will brief industry on the Semantic Forensics (SemaFor) program from 8 a.m. to 1:30 p.m. on 28 Aug. 2019 at the DARPA Conference Center, 675 N. Randolph St., in Arlington, Va.
SemaFor will develop technologies that automatically detect, attribute, and characterize falsified multi-modal media such as text, audio, images, and video to defend against large-scale automated disinformation attacks.
Statistical detection techniques have been successful so far, yet media generation and manipulation technology is advancing rapidly, and purely statistical detection methods are quickly becoming insufficient for detecting falsified media.
Detection techniques that rely on statistical fingerprints, moreover, often can be fooled with limited additional resources such as algorithm development, data, or computing power.
Existing automated media manipulation and generation algorithms, however, rely heavily on purely data-driven approaches and are prone to making semantic errors. Faces generated by a generative adversarial network (GAN), for example, may have semantic inconsistencies such as mismatched earrings, which provide an opportunity for defenders to gain an asymmetric advantage.
A suite of semantic inconsistency detectors would increase the burden on media falsifiers by requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.
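The asymmetry described above can be illustrated with a minimal sketch: a suite of independent inconsistency checks is run over extracted media attributes, and a single firing check is enough to flag the asset. The detector names, the attribute representation, and the checks themselves are hypothetical examples for illustration, not part of DARPA's design.

```python
# Illustrative sketch of a semantic-inconsistency detector suite.
# The Media representation and detectors below are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Media:
    # Toy attribute dictionary standing in for features extracted from an image.
    attributes: Dict[str, str] = field(default_factory=dict)

# A detector returns True when it finds a semantic inconsistency.
Detector = Callable[[Media], bool]

def mismatched_earrings(m: Media) -> bool:
    # GAN-generated faces sometimes render a different earring on each ear.
    return m.attributes.get("left_earring") != m.attributes.get("right_earring")

def mismatched_eye_color(m: Media) -> bool:
    return m.attributes.get("left_eye_color") != m.attributes.get("right_eye_color")

def flag_manipulated(media: Media, detectors: List[Detector]) -> List[str]:
    """Return the names of detectors that fired; one hit is enough to flag."""
    return [d.__name__ for d in detectors if d(media)]

suite = [mismatched_earrings, mismatched_eye_color]
fake = Media({"left_earring": "stud", "right_earring": "hoop",
              "left_eye_color": "brown", "right_eye_color": "brown"})
print(flag_manipulated(fake, suite))  # → ['mismatched_earrings']
```

The falsifier must satisfy every check at once; the defender wins as soon as any one of them fails, which is the asymmetric advantage the program aims to exploit.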
SemaFor seeks to develop semantic technologies for analyzing media. Semantic detection algorithms will determine if media is generated or manipulated. Attribution algorithms will infer if media originates from a particular organization or individual. Characterization algorithms will reason about whether media was generated or manipulated for malicious purposes.
The results of detection, attribution, and characterization algorithms can help develop explanations for system decisions, and rank assets for analyst review. These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.
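One way the three algorithm families described above could feed an analyst queue is sketched below. The score names, weights, and ranking function are illustrative assumptions, not SemaFor's actual method.

```python
# Hypothetical sketch: combining detection, attribution, and characterization
# scores to rank assets for analyst review. All weights are illustrative.
from typing import List, NamedTuple

class Scores(NamedTuple):
    asset_id: str
    detection: float        # likelihood the media is generated or manipulated
    attribution: float      # confidence it traces to a particular actor
    malicious_intent: float # characterization: likelihood of malicious purpose

def priority(s: Scores) -> float:
    # Simple weighted sum; a real system would calibrate or learn this.
    return 0.5 * s.detection + 0.2 * s.attribution + 0.3 * s.malicious_intent

def rank_for_review(assets: List[Scores]) -> List[str]:
    # Highest-priority assets surface first for the analyst.
    return [s.asset_id for s in sorted(assets, key=priority, reverse=True)]

queue = [
    Scores("clip-001", 0.9, 0.2, 0.8),
    Scores("img-042", 0.3, 0.1, 0.2),
    Scores("doc-117", 0.7, 0.9, 0.9),
]
print(rank_for_review(queue))  # → ['doc-117', 'clip-001', 'img-042']
```

Keeping the three scores separate, rather than collapsing them early, also supports the explanations for system decisions that the program calls for.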
A formal broad agency announcement for the SemaFor program is expected for release on or near the date of the industry briefings.
Companies interested in attending the 28 Aug. SemaFor briefings should register online no later than 21 Aug. 2019 at www.schafertmd.com/darpa/i2o/semafor/pd.
Email questions or concerns to DARPA at SemaFor@darpa.mil. More information is online at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-19-66/listing.html.