SURF 2020: Test and Evaluation for Autonomy

From Murray Wiki
Revision as of 20:17, 8 December 2019 by Abadithe (talk | contribs)

Autonomous systems are an emerging technology with potential for growth and impact in safety-critical applications such as self-driving cars, space missions, and distributed power grids. In these applications, a rigorous, proof-based framework for the design, test, and evaluation of autonomy is necessary.

The architecture of autonomous systems can be represented as a hierarchy of levels (see Figure 1), with a discrete decision-making layer at the top and low-level controllers at the bottom. In this project, we will focus on testing at the top layer, that is, testing discrete event systems. A test is a sequence of environmental inputs to the system, constructed with the objective of revealing faults in the system. From the test data, we can evaluate whether the system has passed the test. One of the difficulties with testing autonomous systems is that, under the same environmental conditions, the system might choose to take different actions. Here are a few different possibilities for a SURF project:

Figure 1: Hierarchical architecture of an autonomous system.

1) How do we leverage test data to design the next set of tests? Specifically, for each trace of system actions, can we generate a set of environment traces to form the next round of tests?

2) Use test data to identify whether a fault was caused by information not captured in the discrete system model.
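To make the testing setup above concrete, here is a minimal sketch in Python (all names and transition rules are hypothetical, invented for illustration): a nondeterministic discrete transition system, a test given as a fixed sequence of environment inputs, and a pass/fail check against a simple specification. Because the system is nondeterministic, repeated runs of the same test can produce different action traces, and only some runs may expose a fault.

```python
import random

# Hypothetical transition relation for a toy vehicle-like system:
# transitions[state][env_input] -> list of possible (action, next_state).
# The duplicate option on ("moving", "stop") models an injected fault:
# the system may keep cruising when told to stop.
TRANSITIONS = {
    "idle":   {"go":   [("accelerate", "moving"), ("wait", "idle")],
               "stop": [("wait", "idle")]},
    "moving": {"go":   [("cruise", "moving")],
               "stop": [("brake", "idle"), ("cruise", "moving")]},
}

def run_test(env_trace, seed=None):
    """Execute one test: feed the environment inputs in order and
    record the (possibly nondeterministic) action trace."""
    rng = random.Random(seed)
    state, actions = "idle", []
    for inp in env_trace:
        action, state = rng.choice(TRANSITIONS[state][inp])
        actions.append(action)
    return actions

def passed(env_trace, actions):
    """Toy specification: whenever the environment input is 'stop',
    the system must not respond by cruising."""
    return all(not (inp == "stop" and act == "cruise")
               for inp, act in zip(env_trace, actions))

# One test = one sequence of environment inputs.
test = ["go", "go", "stop"]
for seed in range(3):
    trace = run_test(test, seed=seed)
    print(trace, "pass" if passed(test, trace) else "FAIL")
```

The recorded action traces are exactly the data that project direction 1) would mine to generate the next round of environment traces, and a run that fails despite a "correct" model points toward direction 2), i.e., behavior not captured in the discrete model.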

It would be useful for the SURF student to know MATLAB and Python.

References:
1. E. Bartocci et al., "Specification-based monitoring of cyber-physical systems: a survey on theory, tools and applications," Lectures on Runtime Verification, Springer, Cham, 2018, pp. 135-175.