Formal Methods for V&V and T&E of Autonomous Systems

The goal of this project is to advance the mathematical and algorithmic foundations of test and evaluation by combining advances in formal methods for specification and verification of reactive, distributed systems with algorithmic design of multi-agent test scenarios and algorithmic evaluation of test results. Building on previous results in synthesis of formal contracts for performance of agents and subsystems, development of barrier certificate methods for provably safe performance of nonlinear control systems, and experience in the development of operational autonomous systems, we are creating a mathematical framework for specifying the desired characteristics of multi-agent systems involving cooperative, adversarial, and adaptive interactions; developing algorithms for verification and validation (V&V) as well as test and evaluation (T&E) against those specifications; and performing proof-of-concept demonstrations of the use of formal methods for V&V and T&E of autonomous systems. If successful, our results will provide more systematic methods for describing the desired properties of military systems in complex environments; new algorithms for verification of system-level designs against those properties, synthesis of test plans, and analysis of test results; design rules that allow adaptation and machine learning to be integrated without compromising system safety and performance specifications; and demonstration of the envisioned techniques in a multi-agent, free-flight arena that will provide experimental validation of the results.
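As one of the building blocks mentioned above, a barrier certificate gives a checkable condition for safety of a nonlinear system: a function B(x) that is non-positive on the safe set and whose derivative along the dynamics is non-increasing at the safe-set boundary, so trajectories starting inside the safe set cannot leave it. The sketch below, using a toy one-dimensional system and candidate barrier chosen purely for illustration (not drawn from the project itself), shows how such a condition can be checked symbolically.

```python
import sympy as sp

# Minimal sketch of a barrier-certificate check.  The example system and
# barrier function are illustrative assumptions, not the project's models.
x = sp.symbols('x', real=True)

f = -x          # dynamics: dx/dt = -x (hypothetical example system)
B = x**2 - 1    # candidate barrier: safe set is {x : B(x) <= 0}, i.e. |x| <= 1

# Lie derivative of B along f: if dB/dt <= 0 wherever B = 0, trajectories
# starting in the safe set cannot cross the boundary B = 0.
Bdot = sp.diff(B, x) * f
print("dB/dt =", sp.simplify(Bdot))          # -> -2*x**2

# Check the boundary condition: at B = 0 (x = -1 and x = 1), dB/dt <= 0.
for pt in sp.solve(sp.Eq(B, 0), x):
    print(f"dB/dt at x = {pt}: {Bdot.subs(x, pt)}")   # -> -2 at both points
```

In practice the project is concerned with higher-dimensional nonlinear systems, where candidate barriers are searched for algorithmically rather than checked by hand as in this toy example.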

Current participants:

Additional participants:

  • Prithvi Akella (PhD student, ME)

Collaborators:

  • Aaron Ames (ME/CDS)

Past participants:

Objectives

[Image: Afosr-testandeval.png]

The output of the project will be a set of tools (theory and algorithms) that address these challenges by demonstrating the use of formal techniques for specification, design, verification, and testing of autonomous systems operating in heterogeneous, multi-agent, adversarial environments. More specifically, we aim to accomplish the following objectives:

  • Create a mathematical framework for specification of desired characteristics of multi-agent autonomous and semi-autonomous systems that account for cooperative, adversarial, and adaptive interactions between agents.
  • Develop algorithms and software toolboxes for creating layered models that are suitable for algorithmic verification of performance, incorporate existing test data, and generate test plans and performance monitors for use in operational test and evaluation (see the monitor sketch after this list).
  • Demonstrate key concepts on a proof-of-concept experimental testbed that integrates test data with formal methods for specification and verification of autonomous and semi-autonomous system performance.
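
To make the notion of a performance monitor concrete, the following sketch evaluates one hypothetical multi-agent requirement over a recorded test trace: a minimum pairwise separation that must hold at every time step. The trace format, agent names, and separation threshold are illustrative assumptions, not the project's actual specification language or algorithms.

```python
import itertools
import math

MIN_SEPARATION = 0.5   # required separation in meters (hypothetical value)

def separation_ok(positions):
    """Check the separation requirement at a single time step.

    positions: dict mapping agent name -> (x, y) position.
    """
    for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2):
        if math.dist(pa, pb) < MIN_SEPARATION:
            return False, (a, b)
    return True, None

def monitor(trace):
    """Evaluate the 'always separated' requirement over a test trace.

    trace: list of per-time-step position dicts.  Returns the first time
    step and agent pair violating the requirement, or None if it always holds.
    """
    for t, positions in enumerate(trace):
        ok, pair = separation_ok(positions)
        if not ok:
            return t, pair
    return None

# Example trace with two agents; the second step violates the requirement.
trace = [
    {"agent1": (0.0, 0.0), "agent2": (2.0, 0.0)},
    {"agent1": (0.0, 0.0), "agent2": (0.3, 0.0)},
]
print(monitor(trace))   # -> (1, ('agent1', 'agent2'))
```

Monitors of this kind can be run offline on logged test data or online during operational test and evaluation to flag the precise time step at which a requirement is first violated.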

Research supported by the AFOSR Test and Evaluation program, grant FA9550-19-1-0302.

  • Agency: AFOSR
  • Grant number: FA9550-19-1-0302
  • Start date: 1 July 2019
  • End date: 30 April 2023
  • Support: 1.5 postdocs, 2 graduate students
  • Reporting: Annual program review + report