Lockheed Martin, BAE to defend military artificial intelligence (AI) from data poisoning and cyber attack

SABER will assess vulnerabilities of AI-enabled autonomous ground and aerial systems that could be deployed within the next one to three years.
March 12, 2026
2 min read

Key Highlights

Questions and answers:

  • What is the goal of the DARPA SABER project? The Securing Artificial Intelligence for Battlefield Effective Robustness (SABER) project aims to develop methods and tools to assess the cyber vulnerabilities of military AI-enabled systems.
  • Which companies are participating in the DARPA SABER program? Lockheed Martin and BAE Systems are participating, with work led by Lockheed Martin Advanced Technology Laboratories in Cherry Hill, N.J., and the BAE Systems Electronic Systems segment in Merrimack, N.H.
  • Why is DARPA concerned about the security of AI-enabled battlefield systems? Military AI systems can be vulnerable to cyber threats such as data poisoning, adversarial patches, and model-stealing attacks, which could allow adversaries to manipulate or compromise AI decision-making.

ARLINGTON, Va. – Researchers at Lockheed Martin Corp. are joining a U.S. military project to develop new ways of assessing the vulnerabilities of military artificial intelligence (AI) to enemy cyber attack.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., announced a $4.7 million contract last month to the Lockheed Martin Advanced Technology Laboratories in Cherry Hill, N.J., for the Securing Artificial Intelligence for Battlefield Effective Robustness (SABER) project.

Lockheed Martin joins the BAE Systems Electronic Systems segment in Merrimack, N.H., on the SABER project. BAE Systems won a $3.9 million SABER contract in early February.

Today there is no way to assess deployed military AI-enabled systems for their vulnerabilities to cyber attack, DARPA officials warn; the security risks of AI-enabled battlefield systems remain unknown.

Counter-AI techniques

To rectify this, Lockheed Martin and BAE Systems engineers will develop counter-AI techniques, tools, and technical competency to assess AI-enabled battlefield systems.

AI technology has matured enough to be integrated into U.S. military systems, and could give battlefield advantage by helping improve the speed, quality, and accuracy of decision-making while enabling machine autonomy and automation.

Yet AI has been shown to be vulnerable when an adversary can control its data inputs, which can enable data poisoning, physically constrained adversarial patches for evasion, and model-stealing attacks.
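To make one of these attack classes concrete, the sketch below is a minimal, hypothetical illustration of label-flipping data poisoning (it is not drawn from the SABER program or any fielded system). An adversary who can inject mislabeled training samples into a simple nearest-centroid classifier can shift a class centroid enough to change how a nearby input is classified:

```python
# Hypothetical illustration of data poisoning: an adversary injects
# mislabeled training points to corrupt a nearest-centroid classifier.

def centroid(points):
    """Mean position of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def fit(data):
    """Compute class centroids for labels 'A' and 'B'."""
    a = [x for x, y in data if y == "A"]
    b = [x for x, y in data if y == "B"]
    return centroid(a), centroid(b)

def classify(x, c_a, c_b):
    """Assign x to whichever class centroid is closer."""
    da = (x[0] - c_a[0]) ** 2 + (x[1] - c_a[1]) ** 2
    db = (x[0] - c_b[0]) ** 2 + (x[1] - c_b[1]) ** 2
    return "A" if da <= db else "B"

# Clean training data: class A clusters near (0,0), class B near (10,10).
train = [((0, 0), "A"), ((1, 1), "A"), ((0, 1), "A"),
         ((10, 10), "B"), ((9, 10), "B"), ((10, 9), "B")]

c_a, c_b = fit(train)
clean_pred = classify((3, 3), c_a, c_b)  # point near the A cluster

# Poisoning: adversary injects points deep in A's region labeled "B",
# dragging B's centroid toward A's territory.
poisoned = train + [((1, 1), "B"), ((0, 2), "B"), ((2, 0), "B"),
                    ((1, 0), "B"), ((0, 0), "B")]
p_a, p_b = fit(poisoned)
poisoned_pred = classify((3, 3), p_a, p_b)

print(clean_pred, poisoned_pred)  # → A B
```

The same input is classified "A" by the clean model and "B" by the poisoned one, which is the kind of manipulated decision-making the SABER assessments are meant to detect before systems are fielded.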

AI cyber vulnerabilities

For the SABER project, Lockheed Martin and BAE Systems will assess the potential vulnerabilities of AI-enabled autonomous ground and aerial systems that could be deployed within the next one to three years.

DARPA experts want Lockheed Martin and BAE Systems to develop physical, adversarial AI, cyber security, and electronic warfare (EW) techniques to perform these AI cyber vulnerability assessments. More SABER contracts may be awarded.

For more information contact Lockheed Martin Advanced Technology Laboratories online at www.lockheedmartin.com/en-us/capabilities/research-labs/advanced-technology-labs.html, BAE Systems Electronic Systems at www.baesystems.com/en-us/who-we-are/electronic-systems, or DARPA at www.darpa.mil/research/programs/saber-securing-artificial-intelligence.

About the Author

John Keller

Editor-in-Chief

John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.
