PICATINNY ARSENAL, N.J. – U.S. Army researchers are asking for industry's help in developing artificial intelligence (AI) assurance technologies in algorithms and AI models, resiliency of deep-learning algorithms, and mitigation of attacks on military AI-based systems.
Officials of the Armaments Center of the Army Combat Capabilities Development Command (DEVCOM) at Picatinny Arsenal, N.J., issued a broad agency announcement (W15QKN-26-S-1AZR) last week for the DEVCOM AC Emerging Technologies project.
Key aspects include how to detect data poisoning and input manipulation, and how to identify the critical features an AI model relies on. Current AI models have shown weaknesses to specialized enemy attacks, which can lead to invalid and detrimental results for the user, Army researchers say.
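The announcement does not prescribe techniques, but the kind of input manipulation at issue can be sketched with a toy example: for a linear classifier, a small signed perturbation of the input, in the style of the fast gradient sign method, is enough to flip the model's decision. The weights, inputs, and perturbation budget below are invented for illustration.

```python
import numpy as np

# Toy linear classifier: class 1 if w @ x + b > 0 (weights are illustrative).
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, -0.5, 1.0])     # clean input, classified as class 1

# FGSM-style manipulation: step against the sign of the gradient of the
# score with respect to x, which for a linear model is just w.
eps = 1.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbed input flips to class 0
```

Real deep-learning models require far smaller perturbations than this toy budget, which is what makes such attacks hard for a user to notice.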
Army experts also are interested in how to understand and explain an AI model's adherence to trusted, ethical, and safe use. This should include an understanding of why an AI model generates specific outputs, and confidence in that reasoning.
Understanding AI systems
Many AI models cannot give a user enough insight to understand how the model reached a specific decision, researchers point out.
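The announcement leaves the approach open, but the simplest version of the output explanation researchers describe can be sketched with a linear model, where each feature's contribution to the score is just its weight times its value. The feature names and numbers here are invented.

```python
import numpy as np

# Illustrative linear model: the score decomposes exactly into
# per-feature contributions (weight times feature value).
feature_names = ["speed", "heading", "signal_strength"]
weights = np.array([0.8, -1.2, 2.0])
x = np.array([1.0, 0.5, 1.5])

contributions = weights * x       # why the model produced this score
score = contributions.sum()

for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")    # signal_strength dominates this decision
print(f"score: {score:+.2f}")
```

For deep networks no such exact decomposition exists, which is why explainability is posed here as a research question rather than a solved problem.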
The Army also wants to learn how to develop and certify data sets for AI training, validation, and testing. Areas of interest include certifying that data sets are complete, and ensuring that AI systems are safe for fielding in applications like image recognition, decision making, and resource management.
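As a hedged sketch of the data-set checks this points toward, none of which are specified in the announcement, a certification pass might verify that every expected class is represented, that every sample is labeled, and that no exact duplicates inflate the set. The data below is invented.

```python
# Three checks a data-set certification pass might run (all invented):
# expected classes all present, every sample labeled, no exact duplicates.
expected_classes = {"vehicle", "person", "background"}

dataset = [
    ((0.1, 0.2), "vehicle"),
    ((0.3, 0.4), "person"),
    ((0.5, 0.6), "background"),
    ((0.1, 0.2), "vehicle"),      # exact duplicate of the first sample
]

labels = [label for _, label in dataset]
missing_classes = expected_classes - set(labels)
unlabeled = [sample for sample, label in dataset if label is None]
duplicates = len(dataset) - len(set(dataset))

print(missing_classes, len(unlabeled), duplicates)
```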
Also of interest is research into alternative software quality models for systems that rely on AI -- including algorithm and AI model development, resiliency of deep-learning algorithms, and mitigating cyber attacks against AI.
Proposed models should include metrics for evaluating machine-learning solutions where traditional software traceability is not available. Metrics should help ensure that requirements are satisfied, and should identify potential failures if the system behaves in unexpected ways.
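One way to read that requirement, sketched under invented assumptions: when there is no traceability from a requirement down to source code, the requirement can instead be expressed as a behavioral check over model outputs, with the pass rate serving as the metric and the failing inputs as the list of potential failures.

```python
# A requirement with no code-level traceability, expressed as a
# behavioral check: the stand-in model (invented) must keep its
# output in the safe range [0, 1] for every test input.
def model(x):
    return 0.5 * x + 0.1          # defective outside its nominal range

cases = [-5.0, 0.0, 0.3, 0.9, 1.0, 1.6]
results = [0.0 <= model(x) <= 1.0 for x in cases]

pass_rate = sum(results) / len(results)
failures = [x for x, ok in zip(cases, results) if not ok]

print(f"pass rate: {pass_rate:.0%}, potential failures: {failures}")
```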
AI certification
Research should give software developers a path to certifying the quality of AI and machine-learning software without being hindered by a lack of traceability. Also of interest is a new acceptance standard to evaluate AI-based systems in the absence of concrete traceability methods from requirements, to source code, to function.
Interested companies should email white papers no later than 4 March 2031 to the Army's Kelly Lynch at [email protected]. Those submitting promising white papers may be invited to submit full proposals.
Email administrative questions or concerns to Kelly Lynch at [email protected]. Email technical questions to Jessica Gondela at [email protected]. More information is online at https://sam.gov/workspace/contract/opp/b75c77d156b14c84af07401ed51ec7f3/view.