Researchers ask industry for new test and measurement to determine system risk in software assurance

ARLINGTON, Va. – U.S. military researchers are turning to industry to find ways of automating the software assurance process to enable certifiers to determine rapidly whether software system risk is acceptable.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., on Friday released a solicitation (HR001119S0057) for the Automated Rapid Certification Of Software (ARCOS) project.

The goal of ARCOS is to automate the evaluation of software assurance evidence to enable certifiers to determine rapidly that system risk is acceptable. The process of determining that a system’s risk is acceptable is referred to as certification, DARPA officials say.

Current certification practices are antiquated and unable to scale with the amount of software deployed by the U.S. Department of Defense (DOD), researchers say. Two factors prevent scaling: reliance on human evaluators to determine whether a system meets certification criteria, and the lack of any practical way to decompose evaluations.

Using humans to evaluate software assurance evidence, moreover, results in superficial, incomplete, and unacceptably long evaluations, DARPA researchers say.

The amount of evidence necessary from test and measurement to determine software conformance to certification can be overwhelming to human subject matter experts, who have biases that influence their approach to evaluations. Because certification requirements may be vague or poorly written, evaluators often must interpret what is intended. Combined, these factors result in inconsistencies over time and across evaluations. In addition, there is no means today to compose principled and trustworthy evaluations.

Composed evaluations, however, could enable experts to evaluate software subsystems or components independently and capitalize on the results of those evaluations as assurance evidence. This would amortize the effort of evaluating any component over all systems using that component.

Current practice requires re-evaluation of components and their assurance evidence in every system that employs them. The inability to use a divide-and-conquer approach to certification of large systems wastes money and time.

Two factors can help speed software certification through the automation of evaluations. First, DOD leaders want their contractors to modernize their engineering processes under the DOD Digital Engineering Strategy, which seeks to move away from document-based engineering processes and toward design models that are to be the authoritative source of truth for systems.

Such a future does not lend itself to current certification practices, but it will facilitate the automated evaluation of assurance, DARPA officials say.

Second, advances in several technologies suggest that automated evaluation of assurance evidence for software certification is possible. Model-based design technology, including probabilistic model checking, may help software certifiers quantify uncertainty.

So-called big code analytics can help apply semantic-based analytics to software and its artifacts. Mathematically rigorous analysis and verification can help develop software that demonstrably is correct and sound. Assurance-case languages help produce machine-readable arguments on how software fulfills its certification goals.
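As a rough illustration of the machine-readable arguments such assurance-case languages aim to produce (this sketch is not from DARPA's solicitation; the names and structure are hypothetical), an assurance case can be modeled as a tree of claims in which every leaf claim must cite supporting evidence:

```python
# Hypothetical sketch of a machine-readable assurance case: a goal tree
# in which every leaf claim must cite at least one piece of evidence.
# Names and structure are illustrative, not taken from the ARCOS solicitation.

def leaves(goal):
    """Yield every leaf goal (one with no subgoals) in the tree."""
    if goal.get("subgoals"):
        for sub in goal["subgoals"]:
            yield from leaves(sub)
    else:
        yield goal

def is_substantiated(goal):
    """A case is substantiated if every leaf claim cites at least one
    piece of evidence (a test, simulation, analysis, or QA artifact)."""
    return all(leaf.get("evidence") for leaf in leaves(goal))

case = {
    "claim": "Flight software meets certification criterion X",
    "subgoals": [
        {"claim": "Unit behavior verified", "evidence": ["test-suite-report"]},
        {"claim": "Timing bounds hold", "evidence": ["static-analysis-report"]},
    ],
}

print(is_substantiated(case))  # True: every leaf claim cites evidence
```

A representation along these lines is what would let a tool, rather than a human reviewer, check mechanically that no claim in the argument is left unsupported.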

If successful, ARCOS technologies will move to military program offices that need to reduce certification costs, improve their software evaluations, and better understand their software risks. This technology also should be of interest to contractors who write software for program offices that have adopted ARCOS.

The project seeks to enable an app store approach to outfitting platforms for missions through compositional assurance of apps added to a baseline platform. The ultimate goal is continuous certification and mission risk evaluation; compositional certification is a necessary first step.

ARCOS seeks to develop the capability to evaluate automatically evidence that software systems meet their certification criteria and to generate assurance case arguments. Substantiation of these arguments comes from analysis of four types of evidence: test; simulation and emulation; analytical; and software quality assurance.

To develop trust in ARCOS, assurance arguments must be compelling to a knowledgeable human evaluator. Approaches that assist or automate validity assessments not only will check the logical validity of the generated arguments, but also will increase confidence in those arguments based on the supporting evidence.

The ARCOS program has four technical areas (TAs): evidence generation; evidence curation; assurance generation; and quantitative assessment. Companies selected for TA4 cannot work in any other technical area.

Evidence curation will develop a common representation that can capture all forms of assurance case evidence. Assurance generation has two goals: developing technology that automatically builds assurance cases for certification criteria; and developing trustworthy technology for validating and assessing the confidence of an assurance case argument. Quantitative assessment will provide progressively challenging sets of artifacts of software systems to measure the progress of ARCOS technologies.

ARCOS is a four-year program divided into three phases: the first and second phases will be 18 months each, and the third phase will be 12 months. DARPA anticipates several awards for TA1 and TA3, as well as single awards for TA2 and TA4.

Interested companies should submit abstracts no later than 24 May 2019 and full proposals no later than 9 July 2019 to the DARPA BAA website at https://baa.darpa.mil.

Email questions or concerns to Raymond Richards, the DARPA ARCOS program manager, at ARCOS@darpa.mil. More information is online at https://www.fbo.gov/spg/ODA/DARPA/CMO/HR001119S0057/listing.html.
