Wanted: framework for the ethical use of artificial intelligence (AI) and machine autonomy in the military

Feb. 13, 2024
ASIMOV performers will develop prototype modeling environments to explore military scenarios for machine automation and its ethical difficulties.

ARLINGTON, Va. – U.S. military researchers are asking industry to explore the ethics and technical challenges of using artificial intelligence (AI) and machine autonomy in future military operations.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) have released a broad-agency announcement for the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project.

ASIMOV aims to develop benchmarks to measure the ethical use of future military machine autonomy, and the readiness of autonomous systems to perform in military operations.

The rapid development of machine autonomy and artificial intelligence (AI) technologies has created a need for ways to measure and evaluate the technical and ethical performance of autonomous systems. ASIMOV will develop and demonstrate autonomy benchmarks; it will not develop autonomous systems or algorithms for autonomous systems.

Related: Artificial intelligence (AI) in unmanned vehicles

The ASIMOV program intends to create an ethical autonomy language that enables the test community to evaluate the ethical difficulty of specific military scenarios and the ability of autonomous systems to perform ethically within those scenarios.

The ASIMOV program also will include an ethical, legal, and societal implications group to advise the performers and provide guidance throughout the program.

ASIMOV contractors will develop prototype generative modeling environments to explore scenario iterations and variability across increasing ethical difficulties. If successful, ASIMOV will build the foundation for defining benchmarks against which future autonomous systems may be gauged.

Related: Unmanned submarines seen as key to dominating the world’s oceans

ASIMOV will use the U.S. Department of Defense (DOD) Responsible AI (RAI) Strategy and Implementation (S&I) Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. This document lays out the five U.S. military responsible AI ethical principles: responsible, equitable, traceable, reliable, and governable.

A measurement and benchmarking framework for military machine autonomy will help inform military leaders as they develop and scale autonomous systems -- much like the Technology Readiness Levels (TRLs) developed in the 1970s, which today are widely used.

ASIMOV is a two-phase, 24-month program. Interested companies were asked to submit abstracts by 12 Feb. 2024, and full proposals by 28 March 2024, to the Broad Agency Announcement Tool (BAAT) online at www.baa.darpa.mil.

Email questions or concerns to DARPA at [email protected]. More information is online at https://sam.gov/opp/bebfb61ed56e4d78bdefde9575b2d256/view.
