SURF 2022: Evaluating Redundancy Between Test Executions for Autonomous Vehicles
From Murray Wiki
Revision as of 16:46, 20 December 2021 by Abadithe
- Mentor: Richard Murray
- Co-mentors: Apurva Badithela, Josefine Graebener
==Project Description==
- Test cases of varying complexity (length of the test, number of test agents and their strategies) may provide the same information about the system's ability to satisfy a requirement.
- For example, tests can generally be classified into four categories: an open-loop test in a static environment, an open-loop test in a dynamic environment, a closed-loop (reactive) test in a static environment, and a closed-loop test in a dynamic environment.
- For this SURF project, we would like to implement a few tests in environments of varying complexity on the Duckietown hardware, and then characterize the scenarios in which two tests are redundant and those in which they are not. One way to do this is to define and compute a notion of information gain for each test, and show that a more complex test may or may not offer new insight into system performance.
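As a toy sketch of what "information gain of a test" could mean, the snippet below measures how much a test's pass/fail verdicts reduce uncertainty about whether a system configuration satisfies the requirement (i.e., the mutual information between verdicts and ground truth). All names, labels, and the example data here are hypothetical illustrations, not part of the project definition; the project may settle on a different formalization.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of discrete outcomes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(system_labels, test_verdicts):
    """Reduction in uncertainty about the system label after observing a test verdict.

    system_labels[i] -- ground-truth label of run i (e.g. "sat" / "viol")
    test_verdicts[i] -- verdict the test produced on run i (e.g. "pass" / "fail")
    """
    n = len(system_labels)
    base = entropy(system_labels)
    # Conditional entropy: average entropy of the ground truth within each verdict group.
    cond = 0.0
    for v in set(test_verdicts):
        subset = [s for s, t in zip(system_labels, test_verdicts) if t == v]
        cond += (len(subset) / n) * entropy(subset)
    return base - cond

# Two hypothetical tests executed on the same six system configurations:
systems = ["sat", "sat", "sat", "viol", "viol", "viol"]
test_a  = ["pass", "pass", "pass", "fail", "fail", "fail"]  # verdicts track ground truth exactly
test_b  = ["pass", "pass", "pass", "pass", "pass", "fail"]  # verdicts only partially track it

print(information_gain(systems, test_a))  # 1.0 bit: fully informative
print(information_gain(systems, test_b))  # smaller: partially informative
```

Under this (assumed) definition, a second test would be redundant with a first one if, conditioned on the first test's verdicts, it yields no additional gain, regardless of how much more complex its environment or agent strategies are.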