SURF 2020: Test and Evaluation for Autonomy

From Murray Wiki. Revision as of 14:32, 9 December 2019.

Autonomous systems are an emerging technology with potential for growth and impact in safety-critical applications such as self-driving cars, space missions, and the distributed power grid. In these applications, a rigorous, proof-based framework for the design, test, and evaluation of autonomy is necessary.

The architecture of autonomous systems can be represented as a hierarchy of levels (see figure below), with a discrete decision-making layer at the top and low-level controllers at the bottom. In this project, we will focus on testing at the top layer, that is, testing discrete-event systems. A test is a sequence of environmental inputs to the system, with the objective of finding faults in the system; from the resulting test data, we can evaluate whether the system has passed the test. One of the difficulties with testing autonomous systems is that, under the same environmental conditions, the system might choose to take different actions.
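As a minimal sketch of this setup (the states, inputs, and safety check below are illustrative assumptions, not part of the project), a test is an input sequence fed to a nondeterministic discrete-event system, and passing is judged from the observed trace:

```python
import random

class DiscreteSystem:
    """Nondeterministic discrete-event system: for a given state and
    environment input, the system may choose among several actions."""

    def __init__(self, transitions, initial_state):
        # transitions maps (state, env_input) -> list of possible next states
        self.transitions = transitions
        self.state = initial_state

    def step(self, env_input):
        # Nondeterminism: the same input can lead to different next states.
        self.state = random.choice(self.transitions[(self.state, env_input)])
        return self.state

def run_test(system, env_trace, is_safe):
    """A test is a sequence of environment inputs; it passes if every
    observed (input, state) pair satisfies the safety predicate."""
    observed = []
    for env_input in env_trace:
        observed.append((env_input, system.step(env_input)))
    passed = all(is_safe(env, state) for env, state in observed)
    return passed, observed

# Toy traffic-light example (hypothetical): the fault is entering "go" on red.
transitions = {
    ("wait", "green"): ["go", "wait"],
    ("wait", "red"):   ["wait"],
    ("go", "green"):   ["go"],
    ("go", "red"):     ["stop", "go"],   # may wrongly keep going on red
    ("stop", "red"):   ["stop"],
    ("stop", "green"): ["go"],
}
no_run_on_red = lambda env, state: not (env == "red" and state == "go")
passed, observed = run_test(DiscreteSystem(transitions, "wait"),
                            ["green", "red", "red"], no_run_on_red)
```

Because the system is nondeterministic, repeating the same test can yield different outcomes, which is exactly the difficulty noted above.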

Figure 1: Architecture of autonomous systems

Here are a few different possibilities for a SURF project:

1) How do we leverage test data to design the next set of tests? Specifically, for each trace of system actions, can we generate a set of environment traces to form the next round of tests?

2) Use test data to identify whether a fault was caused by information not captured in the discrete system model.
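For the first direction, one crude starting point is to rank candidate environment traces by how little past tests have exercised their inputs. Everything below is a hypothetical sketch: the novelty heuristic merely stands in for a real reachability-based test-generation method.

```python
from itertools import product

def propose_env_traces(observed_traces, env_inputs, length):
    """Rank all environment traces of a fixed length, preferring inputs
    that past tests used least often -- a crude novelty heuristic, not a
    real test-generation algorithm."""
    counts = {e: 0 for e in env_inputs}
    for trace in observed_traces:          # each trace: [(env_input, state), ...]
        for env, _state in trace:
            counts[env] += 1
    candidates = list(product(env_inputs, repeat=length))
    return sorted(candidates, key=lambda tr: sum(counts[e] for e in tr))

# Past test data (hypothetical): only "green" inputs were exercised so far.
past = [[("green", "go"), ("green", "go")]]
ranked = propose_env_traces(past, ["green", "red"], length=2)
# ranked[0] is the least-exercised candidate: ("red", "red")
```

A real approach would condition on the observed system actions, not just input frequencies, but this illustrates closing the loop from test data to the next round of tests.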

It would be useful for the SURF student to know MATLAB and Python.

References:

1. Bartocci, Ezio, et al. "Specification-based monitoring of cyber-physical systems: a survey on theory, tools and applications." Lectures on Runtime Verification. Springer, Cham, 2018. 135-175. http://www-verimag.imag.fr/PEOPLE/maler/Papers/monitor-RV-chapter.pdf

2. Wongpiromsarn, Tichakorn, et al. "TuLiP: a software toolbox for receding horizon temporal logic planning." Proceedings of the 14th International Conference on Hybrid Systems: Computation and Control. ACM, 2011. https://user.eng.umd.edu/~mumu/files/wtoxm_HSCC2011.pdf