SURF 2020: Test and Evaluation for Autonomy
2020 SURF: project description
- Mentor: Richard M. Murray
- Co-mentor: Apurva Badithela
Autonomous systems are an emerging technology with the potential for growth and impact in safety-critical applications such as self-driving cars, space missions, and distributed power grids. In these applications, a rigorous, proof-based framework for the design, test, and evaluation of autonomy is necessary.
Background
The architecture of autonomous systems can be represented as a hierarchy of levels (see figure below), with a discrete decision-making layer at the top and low-level controllers at the bottom. In this project, we will focus on testing at the top layer, that is, testing for discrete event systems. The requirements of the system under consideration can be represented as specifications in a formal language. Most commonly, specifications are written in temporal logic languages such as Linear Temporal Logic (LTL), Signal Temporal Logic (STL), and Computation Tree Logic (CTL) (see References 2 and 3 for an overview of writing specifications in temporal logic). In addition to the system and its specifications, we have an environment for which corresponding specifications can be written. The environment consists of the other agents modeled in the problem statement that are not part of the system. See Figure 2 below for an example scenario of the system and environment.
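As an illustration of such specifications (the propositions obstacle, stop, and goal are placeholders, not taken from the project description), an environment assumption and a system requirement might be written in LTL as:

<math>
\varphi_{\mathrm{env}} = \Box \Diamond\, \neg\mathrm{obstacle}, \qquad
\varphi_{\mathrm{sys}} = \Box\big(\mathrm{obstacle} \rightarrow \bigcirc \mathrm{stop}\big) \;\wedge\; \Box \Diamond\, \mathrm{goal}.
</math>

Here □ (always), ◇ (eventually), and ◯ (next) are the standard LTL temporal operators: the environment is assumed to clear the obstacle infinitely often, while the system must stop in the step after an obstacle appears and must reach the goal infinitely often.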

If the system and environment specifications are given as GR(1) LTL specifications, it is possible to synthesize correct-by-construction controllers using TuLiP (Reference 5). More background on the GR(1) formalism can be found in Reference 4. In this work, we use GR(1) specifications to describe system and environment requirements.
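Below is a minimal sketch of how such GR(1) specifications might be set up in the TuLiP Python toolbox. The request/grant variables are an illustrative placeholder, and the exact API (keyword names, synthesis options, solver backends) may differ between TuLiP versions.

<pre>
# Minimal GR(1) synthesis sketch using TuLiP (illustrative request/grant example;
# API details may differ across TuLiP versions).
from tulip import spec, synth

env_vars = {'request'}                 # Boolean variable controlled by the environment
sys_vars = {'grant'}                   # Boolean variable controlled by the system

env_init = set()                       # no constraint on the initial environment move
sys_init = set()                       # no constraint on the initial system move
env_safety = set()                     # no environment safety assumptions
sys_safety = {'request -> X grant'}    # every request is granted in the next step
env_prog = set()                       # no environment liveness assumptions
sys_prog = {'grant'}                   # the system issues a grant infinitely often

specs = spec.GRSpec(env_vars, sys_vars, env_init, sys_init,
                    env_safety, sys_safety, env_prog, sys_prog)

# Synthesis options (defaults differ between TuLiP versions).
specs.moore = True                     # system moves without seeing the current env move
specs.qinit = r'\E \A'                 # quantification of initial states

ctrl = synth.synthesize(specs)         # finite-state controller, or None if unrealizable
assert ctrl is not None, 'specification is unrealizable'
</pre>

The resulting finite-state controller is the kind of discrete decision-making component that the test plan monitor described below would exercise.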
Project Scope
In testing, we check whether the system satisfies its specifications for a finite number of combinations of initial conditions, model parameters, and environment actions. From the test data, we can then evaluate whether the system has passed the test. The challenge is: how do we algorithmically generate the "best" (finite) set of tests for an autonomous system, given the specifications of the system and the environment? To this end, we would like the SURF student to develop a Test Plan Monitor for discrete decision-making behaviors (a minimal monitoring sketch follows the list below). The test plan monitor will be used for the following purposes:
1) Monitor the execution of the system.
2) Validate the discretized system model (also known as the system abstraction) and the assumptions about the environment.
3) Choose environment actions during the test based on information gathered about the system's actions.
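As a rough illustration of these three roles (a sketch under assumed interfaces, not a prescribed design), a monitor for a discrete execution might look as follows; the transition representation and the environment-action heuristic are placeholders.

<pre>
# Sketch of a test plan monitor for discrete decision-making (illustrative only;
# state names, action names, and the environment policy are placeholders).

class TestPlanMonitor:
    def __init__(self, allowed_transitions, is_safe, env_actions):
        # allowed_transitions: set of (state, sys_action, next_state) triples from
        #   the discrete abstraction of the system.
        # is_safe: function mapping a state to True if it satisfies the safety spec.
        # env_actions: actions the test harness is allowed to inject.
        self.allowed_transitions = allowed_transitions
        self.is_safe = is_safe
        self.env_actions = list(env_actions)
        self.violations = []      # 1) safety violations observed during execution
        self.unmodeled = []       # 2) transitions not present in the abstraction
        self.action_counts = {a: 0 for a in self.env_actions}

    def step(self, state, sys_action, next_state):
        """Record one observed transition and return the next environment action."""
        if not self.is_safe(next_state):
            self.violations.append((state, sys_action, next_state))
        if (state, sys_action, next_state) not in self.allowed_transitions:
            self.unmodeled.append((state, sys_action, next_state))
        return self.choose_env_action()

    def choose_env_action(self):
        """3) Pick the least-used environment action so the test spreads over the
        environment's behaviors (a simple coverage heuristic, not an optimal plan)."""
        action = min(self.env_actions, key=lambda a: self.action_counts[a])
        self.action_counts[action] += 1
        return action
</pre>

How to choose environment actions well, rather than by the simple coverage heuristic above, is exactly the open question of generating the "best" finite set of tests.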
One useful reference is Breach (Reference 6), a MATLAB-based monitoring tool that accepts STL specifications and the system state and outputs any violations of the specifications.
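To make the idea of STL monitoring concrete (a minimal sketch in Python rather than Breach's actual MATLAB API), the quantitative robustness of a simple formula such as "the speed always stays below a limit" can be computed from a sampled trace as follows; the signal name and limit are placeholders.

<pre>
# Minimal sketch of quantitative STL monitoring (not Breach's API): compute the
# robustness of the formula G_[0,T] (speed < limit) on a sampled trace.
# Positive robustness means the spec is satisfied with margin; negative means violated.

def robustness_always_below(trace, limit):
    # trace: list of (time, speed) samples; limit: placeholder threshold.
    # Robustness of "always speed < limit" is the worst-case margin limit - speed.
    return min(limit - speed for _, speed in trace)

# Example: a trace that briefly exceeds a limit of 10.0 yields negative robustness.
trace = [(0.0, 4.0), (0.5, 8.0), (1.0, 11.0), (1.5, 9.0)]
print(robustness_always_below(trace, limit=10.0))   # -> -1.0
</pre>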
It would be useful for the SURF student to know MATLAB and Python.
References:
1. Bartocci, Ezio, et al. "Specification-based monitoring of cyber-physical systems: a survey on theory, tools and applications." Lectures on Runtime Verification. Springer, Cham, 2018. pp. 135-175. Link
2. Murray, R. M., U. Topcu, and N. Wongpiromsarn. HYCON-EECI, 2013. Link
3. Donzé, A. "On Signal Temporal Logic." 2014. Link
4. Murray, R. M., U. Topcu, and N. Wongpiromsarn. HYCON-EECI, 2013. Link
5. Wongpiromsarn, Tichakorn, et al. "TuLiP: a software toolbox for receding horizon temporal logic planning." Proceedings of the 14th International Conference on Hybrid Systems: Computation and Control. ACM, 2011. [https://user.eng.umd.edu/~mumu/files/wtoxm_HSCC2011.pdf Link]
6. Donzé, Alexandre. "Breach, a toolbox for verification and parameter synthesis of hybrid systems." International Conference on Computer Aided Verification. Springer, Berlin, Heidelberg, 2010. [https://link.springer.com/content/pdf/10.1007/978-3-642-14295-6_17.pdf Link]