ARLINGTON, Va. – U.S. military researchers are asking industry to improve the assurance and scalability of artificial intelligence (AI) systems.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have issued a solicitation (DARPA-PA-25-07-02) for the Compositional Learning-And-Reasoning for AI Complex Systems Engineering (CLARA) program.
CLARA aims to create high-assurance, broadly applicable AI systems of systems that involve machine learning and automated reasoning -- also known as knowledge representation and reasoning -- subsystems.
Contractors will be expected to combine higher-order logic, probabilistic logic, logical expressivity, hierarchically structured knowledge representation, and interoperable integration of automated reasoning and machine learning. This work will involve neural networks, Bayesian machine learning, reinforcement learning, generalized additive models, logic programs, classical logic, and answer set programs.
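As a hypothetical illustration (not part of the solicitation), one simple way a learned model and an automated-reasoning component can interoperate is for the classifier's output to become a symbolic fact that a forward-chaining rule engine then reasons over. All names, rules, and thresholds below are invented for this sketch:

```python
# Illustrative sketch only: a toy neuro-symbolic pipeline in which a
# stand-in learned classifier emits a symbolic fact, and a forward-chaining
# Horn-clause rule engine derives consequences from it.

def classify(feature_vector):
    """Stand-in for a trained ML model: maps features to a symbolic fact."""
    # A real system would use a neural network or Bayesian model here.
    return "vehicle" if sum(feature_vector) > 1.0 else "clutter"

# Horn-clause-style rules: (set of body atoms) -> head atom
RULES = [
    ({"vehicle", "moving"}, "track"),
    ({"track", "hostile_region"}, "alert"),
]

def forward_chain(facts, rules):
    """Derive all consequences of the facts under the rules (to a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

facts = {classify([0.7, 0.6]), "moving", "hostile_region"}
print(sorted(forward_chain(facts, RULES)))
# -> ['alert', 'hostile_region', 'moving', 'track', 'vehicle']
```

The sketch shows why assurance is easier on the reasoning side: every derived atom such as `alert` can be traced back through explicit rules, whereas the classifier's decision boundary is opaque.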
Need for high-assurance AI
The military's need for high-assurance AI is slowing its widespread adoption, and the trade-off between machine learning and automated reasoning is impeding high assurance, DARPA researchers explain. Machine learning, moreover, has weak assurance because it's difficult to explain.
Several existing systems offer preliminary evidence that automated reasoning-based machine learning systems can be built, but they have significant limitations and have yet to overcome applicability and tractability challenges.
CLARA has two technical areas: developing approaches for high-assurance machine learning and automated reasoning; and developing a software composition library. An organization may not be selected for both technical areas.
The first technical area will develop a theory for high-assurance machine learning and automated reasoning. This work will include algorithms; expressive and syntactic generalizations, extensions, specializations, and restrictions, with corresponding characterizations and guarantees for verifiability; computational tractability and scalability; and translations between machine learning and automated reasoning.
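To make the idea of "translations between machine learning and automated reasoning" concrete, one classic example is extracting logic-program-style rules from a learned decision tree, where each root-to-leaf path becomes one rule. The tree, feature names, and rule syntax below are hypothetical, invented for this sketch:

```python
# Illustrative sketch only: translating a (hand-built, stand-in) decision
# tree into Horn-clause-style rules, one rule per leaf.

# Each internal node: (feature, threshold, left_subtree, right_subtree).
# Each leaf: a class label string.
TREE = ("speed", 50,
        ("size", 3, "clutter", "vehicle"),
        "aircraft")

def tree_to_rules(node, conditions=()):
    """Walk the tree, emitting (list of conditions, label) per leaf."""
    if isinstance(node, str):                      # leaf: emit one rule
        return [(list(conditions), node)]
    feature, threshold, left, right = node
    rules = tree_to_rules(left, conditions + ((feature, "<=", threshold),))
    rules += tree_to_rules(right, conditions + ((feature, ">", threshold),))
    return rules

for body, head in tree_to_rules(TREE):
    print(" AND ".join(f"{f} {op} {t}" for f, op, t in body), "->", head)
# -> speed <= 50 AND size <= 3 -> clutter
#    speed <= 50 AND size > 3 -> vehicle
#    speed > 50 -> aircraft
```

Once in rule form, the model becomes amenable to the kinds of verifiability guarantees the solicitation describes, since each prediction is an explicit, checkable chain of conditions.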
Open-source software
Participants will create initial open-source software to demonstrate their approaches, including automated reasoning-based machine learning inferencing and training.
Participants also will tackle automated reasoning-based machine learning training for large problems, and demonstrate scalability of this training, as well as ensure automated reasoning-based machine learning models can adapt quickly to previously unseen data through minimal additional training data or manual editing by humans.
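A minimal sketch of what "adapting quickly from minimal additional training data" can mean, using a toy nearest-centroid classifier where a single new labeled example is enough to introduce a previously unseen class (all classes, points, and names are invented for illustration):

```python
# Illustrative sketch only: incremental adaptation from one new example,
# via a toy 2-D nearest-centroid classifier.

from collections import defaultdict

class NearestCentroid:
    def __init__(self):
        self.sums = defaultdict(lambda: [0.0, 0.0])   # per-class coordinate sums
        self.counts = defaultdict(int)                # per-class example counts

    def add_example(self, x, label):
        """Incremental update: one example shifts (or creates) a centroid."""
        s = self.sums[label]
        s[0] += x[0]
        s[1] += x[1]
        self.counts[label] += 1

    def predict(self, x):
        def dist(label):
            s, n = self.sums[label], self.counts[label]
            cx, cy = s[0] / n, s[1] / n
            return (x[0] - cx) ** 2 + (x[1] - cy) ** 2
        return min(self.counts, key=dist)

model = NearestCentroid()
model.add_example((0.0, 0.0), "clutter")
model.add_example((1.0, 1.0), "vehicle")
# A single new labeled example introduces a previously unseen class:
model.add_example((5.0, 5.0), "aircraft")
print(model.predict((4.5, 5.2)))  # -> aircraft
```

Retraining here is a constant-time centroid update rather than a full training pass, which is the spirit of the adaptation requirement, though real CLARA systems would operate at far greater scale and expressivity.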
Interested companies should submit responses via the DARPA BAA Tool online at https://baa.darpa.mil no later than 10 April 2026.
Email questions or concerns to Benjamin Grosof, the DARPA CLARA program manager, at [email protected]. More information is online at https://sam.gov/workspace/contract/opp/3530b2c0a68d4de786079e7305d4f625/view.