ROME, N.Y. – U.S. Air Force trusted computing experts are launching a $100 million program to develop trusted computing hardware and software for secure, resilient and affordable command, control, communications, computers, and intelligence (C4I) and cyber-secure information processing.
Officials of the Air Force Research Laboratory Information Directorate in Rome, N.Y., issued a broad agency announcement (FA875025S7001) on Tuesday for the Foundations of Trusted Systems program.
This project asks industry for white papers on affordable hardware and software for command, control, communications, computers, and intelligence (C4I) designed to ensure data integrity, security, authenticity, and resilience against cyber attacks and inadvertent software bugs.
Topics of interest include trustworthiness problems in today's processor designs, computer architectures, integrated circuits (ICs), operating systems, and software-development tools.
Malicious software inclusions
This can involve systems that target malicious inclusions such as hardware Trojans and side-channel timing attacks, as well as designs for data integrity and for code protection and verification. Included are software compilers, assemblers, linkers, binary checks, and source-code analysis.
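The announcement itself contains no code, but the side-channel timing attacks it mentions can be illustrated with a short sketch. The naive comparison below returns as soon as bytes differ, so its runtime leaks how much of a secret an attacker has guessed; the constant-time version, using Python's standard `hmac.compare_digest`, examines every byte regardless. The function names here are illustrative, not from the announcement.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: runtime depends on where the first
    # mismatch occurs, which leaks information through timing.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest inspects every byte even after a mismatch,
    # so runtime does not reveal the length of the matching prefix.
    return hmac.compare_digest(a, b)
```

Constant-time primitives like this are one of the simplest mitigations a trusted-computing design can adopt against timing side channels.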
Also of interest are trusted and assured software tools, prototypes, and demonstrations that involve tools to guarantee trust; self-healing and repair to fight through cyber attacks; continuous runtime verification and validation; trust in model-based software engineering for correctness of machine learning; modeling, analysis, and verification of autonomous software; and human understanding and trust of automated software development.
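The continuous runtime verification and validation mentioned above refers, broadly, to checking a system's behavior against formal properties while it executes rather than only at test time. As a minimal sketch (the class, invariant, and state fields here are hypothetical, not from the announcement), a runtime monitor can check a safety invariant on every observed state and flag a violation the moment it occurs:

```python
class RuntimeMonitor:
    """Checks a safety invariant on every observed system state,
    flagging violations immediately rather than after the fact."""

    def __init__(self, invariant, on_violation):
        self.invariant = invariant          # predicate over a state
        self.on_violation = on_violation    # callback for violations
        self.violations = 0

    def observe(self, state):
        # Called once per state update during execution.
        if not self.invariant(state):
            self.violations += 1
            self.on_violation(state)

# Illustrative property: reported altitude must never be negative.
monitor = RuntimeMonitor(
    invariant=lambda s: s["altitude"] >= 0,
    on_violation=lambda s: print(f"invariant violated: {s}"),
)
for state in [{"altitude": 100}, {"altitude": -5}, {"altitude": 50}]:
    monitor.observe(state)
# After the loop, monitor.violations == 1
```

Production runtime-verification tools synthesize such monitors automatically from temporal-logic specifications, but the observe-and-check loop is the same.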
The project also concerns artificial intelligence (AI) in trusted computing and ways to mitigate the risks of bad information or bad decisions from designers.
Capturing software intent
This can include capturing software intent and constraints to reduce human-in-the-loop effort to adapt software to new requirements; assuring that adapted software meets user needs; using high-fidelity models to predict system performance; ways to monitor, validate, and assess machine learning models and detect security vulnerabilities; assessing the correctness of AI-based decisions and behaviors; and extracting models from source code.
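One concrete form of the machine learning monitoring mentioned above is input-drift detection: comparing the distribution of inputs a deployed model sees against the distribution it was validated on. The sketch below (the function, data, and threshold are illustrative assumptions, not from the announcement) scores live inputs by how many baseline standard deviations their mean has shifted:

```python
from statistics import mean, stdev

def drift_score(baseline, live):
    # z-score of the live mean against the baseline distribution;
    # a large score suggests the model is seeing inputs unlike its
    # validation data and should be re-checked before being trusted.
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(live) - mu) / sigma if sigma else float("inf")

# Illustrative sensor readings: baseline from validation,
# then a nominal and a clearly shifted window from operation.
baseline = [0.90, 1.00, 1.10, 1.00, 0.95, 1.05]
nominal = [1.00, 0.98, 1.02]
shifted = [3.00, 3.20, 2.90]
```

A real deployment would compare whole distributions (e.g., with a statistical test) rather than means alone, but the monitoring pattern is the same: continuously score live inputs and alert when the score crosses a threshold.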
Interested companies should email white papers to the Air Force's Esteban Garcia at [email protected] by 2 Sept. 2026 for 2027 projects; by 3 Sept. 2027 for 2028 projects; and by 4 Sept. 2028 for 2029 projects.
Email technical questions or concerns to Esteban Garcia at [email protected], and contracting questions to Amber Buckley at [email protected]. More information is online at https://sam.gov/workspace/contract/opp/66ef722a26be400b9de90d6fa79a538d/view.