SURF 2022: Evaluating Redundancy Between Test Executions for Autonomous Vehicles
- Mentor: Richard Murray
- Co-mentors: Apurva Badithela, Josefine Graebener
Project Description
For autonomy to be deployed in safety-critical settings, operational testing is imperative. However, principled methods for generating operational tests are still a young but growing research area. Since autonomous systems are complex and the domain of their operating environments is typically very large, it is not possible to exhaustively check or verify an autonomous system's behavior. Instead, we need an automated paradigm for selecting a small number of tests that are the most informative about the system. In this work, we want to formally characterize the notion of redundancy between two test executions.
Testing autonomous systems requires defining the test environment, which comprises test agents, obstacles, and test harnesses on the system under test. Test cases of varying complexity (length of the test, number of test agents and their strategies) could offer the same information about the system's ability to satisfy a requirement. Consider the following example of testing a miniature self-driving car on the Duckietown platform. The autonomous car to be tested has a controller that navigates indefinitely around a loop: it needs to follow lanes, avoid colliding with other cars, and take unprotected left turns at intersections after reading the appropriate road signs. The figure to the right shows a duckiebot on a simple layout; other duckiebots and mini road signs can easily be added to this setup. The duckiebot under test has an off-the-shelf controller implemented on board for indefinite navigation around the track. In addition to the hardware setup, we have access to a simulator of the hardware that could be useful in designing our experiments.
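To make this concrete, a test case on such a platform could be described by a small data structure listing the map, the test agents and their strategies, and the road signs present. The sketch below is only illustrative; the class and field names are hypothetical and not part of the Duckietown API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Agent:
    """A test agent in the environment (e.g., another duckiebot)."""
    name: str
    strategy: str                  # e.g., "static", "scripted", "reactive"
    start_tile: Tuple[int, int]    # (row, col) on the track layout


@dataclass
class TestEnvironment:
    """Hypothetical description of one test case on the Duckietown track."""
    map_name: str                                         # layout of the loop/intersections
    agents: List[Agent] = field(default_factory=list)     # other cars in the scene
    road_signs: List[str] = field(default_factory=list)   # signs placed along the track
    max_duration_s: float = 120.0                          # length of the test


# Example: one reactive test agent near an unprotected left turn
left_turn_test = TestEnvironment(
    map_name="loop_with_intersection",
    agents=[Agent(name="npc_0", strategy="reactive", start_tile=(2, 3))],
    road_signs=["stop", "left-turn"],
)
```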
For example, tests can generally be classified into four categories: open-loop tests in a static environment, open-loop tests in a dynamic environment, reactive (closed-loop) tests in a static environment, and reactive (closed-loop) tests in a dynamic environment.
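As a sketch, this classification can be encoded as the product of a test-strategy axis and an environment axis; the Python below is illustrative only, and the enum and class names are our own.

```python
from dataclasses import dataclass
from enum import Enum


class TestStrategy(Enum):
    OPEN_LOOP = "open-loop"   # test agents follow a fixed, pre-computed script
    REACTIVE = "reactive"     # test agents react to the system under test


class EnvironmentType(Enum):
    STATIC = "static"         # no moving test agents or obstacles
    DYNAMIC = "dynamic"       # test agents/obstacles move during the test


@dataclass(frozen=True)
class TestCategory:
    strategy: TestStrategy
    environment: EnvironmentType


# The four categories described above
ALL_CATEGORIES = [TestCategory(s, e) for s in TestStrategy for e in EnvironmentType]
```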
For this SURF, we would like to implement a few tests in test environments of varying complexity on the Duckietown hardware. We would then like to characterize when two tests are redundant and when they are not. One way to do this is to define and compute a notion of information gain for each test, and to show that a more complex test may or may not offer new insight into system performance.
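One possible formalization, given here only as an assumption for illustration rather than the project's fixed definition: treat a test outcome as an observation about whether the system satisfies a requirement, maintain a belief over hypotheses about the system's capability, and measure a test's information gain as the expected reduction in entropy of that belief. A second test would then be redundant with a first test if, after updating the belief on the first test's outcome, its expected information gain is close to zero.

```python
import numpy as np


def entropy(p: np.ndarray) -> float:
    """Shannon entropy of a discrete belief, in bits."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def expected_info_gain(belief: np.ndarray, likelihood: np.ndarray) -> float:
    """
    Expected entropy reduction from running one test.

    belief:     prior over hypotheses about the system, shape (H,)
    likelihood: P(outcome | hypothesis), shape (O, H)
    """
    prior_entropy = entropy(belief)
    p_outcomes = likelihood @ belief                    # P(outcome), shape (O,)
    expected_posterior_entropy = 0.0
    for o, p_o in enumerate(p_outcomes):
        if p_o == 0:
            continue
        posterior = likelihood[o] * belief / p_o        # Bayes update on outcome o
        expected_posterior_entropy += p_o * entropy(posterior)
    return prior_entropy - expected_posterior_entropy


# Toy example: two hypotheses ("handles left turns" vs. "does not"),
# two outcomes (pass/fail) for a hypothetical left-turn test.
belief = np.array([0.5, 0.5])
left_turn_likelihood = np.array([[0.9, 0.2],   # P(pass | hypothesis)
                                 [0.1, 0.8]])  # P(fail | hypothesis)
print(expected_info_gain(belief, left_turn_likelihood))
```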
Requisites
- Experience coding in Python
- Willingness to learn development with Docker and GitHub
- Interest in hands-on robotics experience
What you can expect from this SURF
- Work closely with graduate students on test case generation for autonomy
- Hands-on experience with autonomous robots
- Developing theoretical insights (e.g., proving results on which classes of systems and test paradigms are equivalent)
- Writing open-source code that implements algorithms demonstrating these ideas
References
1. Duckietown. https://docs.duckietown.org/daffy/duckietown-robotics-development/out/index.html