SURF 2022: Specification Monitor for Testing of Autonomous Systems
Mentor: Richard Murray
Co-mentor: Josefine Graebener
Revision as of 19:02, 20 December 2021
2022 SURF project description
Testing of autonomous vehicles (AVs) is a time- and cost-intensive effort that must be repeated after every system modification. Finding a way to improve the efficiency of testing is therefore a valuable step on the path to more autonomy. We propose a framework that merges multiple unit tests into fewer tests that are guaranteed to cover everything the unit tests check.
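To make the merging idea concrete, here is a minimal sketch in Python. It assumes (purely for illustration; this is not the project's actual formulation) that each test specification can be modeled as a set of required events, so that a merged test covers a unit test when it requires all of that test's events:

```python
# Hypothetical sketch: test specifications modeled as sets of required
# events the system must exhibit during a run. Names are illustrative.

def merge_tests(test_a, test_b):
    """Merge two unit tests into one whose required events cover both."""
    return {
        "name": f"{test_a['name']}+{test_b['name']}",
        "required_events": test_a["required_events"] | test_b["required_events"],
    }

def covers(merged, unit_test):
    """A merged test covers a unit test if it requires all its events."""
    return unit_test["required_events"] <= merged["required_events"]

left_turn = {"name": "left_turn",
             "required_events": {"enter_intersection", "turn_left"}}
yield_test = {"name": "yield",
              "required_events": {"enter_intersection", "wait_for_pedestrian"}}

merged = merge_tests(left_turn, yield_test)
assert covers(merged, left_turn) and covers(merged, yield_test)
```

In this toy setting, one merged run requiring the union of events stands in for two separate unit-test runs; the actual framework would additionally have to check that the merged requirements are simultaneously realizable.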
This framework uses a model of the system to find the merged test via simulation and tree search. The model is non-deterministic but assumed to be perfect; realistically, it will not capture the system's behavior in every real-world situation (the gap between simulation and the real world), so executing the merged test on the actual hardware may not produce the desired outcome. While executing the testing campaign, we therefore need a way to automatically evaluate each test: whether it satisfied the test specification (for example, testing a left turn) and whether the system behaved as expected (for example, driving safely and comfortably). We can then learn from the test outcomes to improve the future testing campaign.
The summer project will be to implement a 'monitor' that visualizes whether the executed test achieved the desired outcome, and to deploy it on the Duckietown hardware. The monitor needs to show satisfaction or violation of both the system specification and the test specification. It will enable learning from previously run tests: if the hardware did not perform as expected, subsequent tests can be modified accordingly. After the monitor is complete, its output can be used to generate an improved testing campaign.
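A minimal online version of such a monitor could be sketched as below. The class, observation format, and properties are illustrative assumptions; an actual implementation would likely subscribe to ROS topics on the Duckiebot rather than consume a Python list:

```python
# Hypothetical online monitor: consumes observations one at a time and
# maintains verdicts for a safety property ("speed stays below a bound",
# falsified once violated) and a reachability property ("a left turn
# eventually occurs", satisfied once observed).

class TestMonitor:
    def __init__(self, speed_bound=0.5):
        self.speed_bound = speed_bound
        self.system_spec_ok = True   # safety: starts true, falsified once
        self.test_spec_ok = False    # reachability: starts false, set once

    def update(self, observation):
        if observation["speed"] > self.speed_bound:
            self.system_spec_ok = False
        if observation["action"] == "turn_left":
            self.test_spec_ok = True

    def verdict(self):
        return {"system_spec": self.system_spec_ok,
                "test_spec": self.test_spec_ok}

monitor = TestMonitor()
for obs in [{"action": "drive_straight", "speed": 0.4},
            {"action": "turn_left", "speed": 0.6}]:
    monitor.update(obs)
# Here the test spec is satisfied (a left turn was seen) while the
# system spec is violated (speed 0.6 exceeded the 0.5 bound).
```

A verdict of this shape is exactly what the learning step needs: a violated system specification flags behavior to retrain or re-test, while an unsatisfied test specification flags a test that must be redesigned or rerun.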
Familiarity with robotic hardware (we are using Duckiebots DB21), Python 3, ROS, and Docker would be beneficial.
References
Kalra, N., & Paddock, S. M. (2016). Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transportation Research Part A: Policy and Practice, 94, 182-193.
 Paull, L., Tani, J., Ahn, H., Alonso-Mora, J., Carlone, L., Cap, M., ... & Censi, A. (2017, May). Duckietown: an open, inexpensive and flexible platform for autonomy education and research. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1497-1504). IEEE.