Limits of probabilistic safety guarantees when considering human uncertainty

From Murray Wiki
Latest revision as of 02:05, 5 September 2021

Title: Limits of probabilistic safety guarantees when considering human uncertainty
Authors: Richard Cheng, Richard M. Murray, Joel W. Burdick
Source: Submitted, 2021 Conference on Decision and Control (CDC)
Abstract: When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial in order to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e. confidence tubes that contain trajectories with probability 1 − ε), which can then be used to guarantee safety with probability 1 − ε. However, almost all existing works consider ε ≥ 0.01. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with ε ≤ 10^-8, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low ε. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for ε ≤ 10^-8. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
Type: Conference paper
URL: http://www.cds.caltech.edu/~murray/preprints/aaaYY-place.pdf
Tag: CMB21-cdc
ID: 2021c
Funding: GA Autonomy
Flags: NCS
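The data-infeasibility point in the abstract can be illustrated with a standard sample-complexity sketch (this calculation and the function name are ours, not from the paper): even in the best case, where a system is tested and no violations are ever observed, a binomial "zero-failure" argument requires on the order of 1/ε independent trials before one can certify a violation probability below ε.

```python
import math

def samples_for_zero_failure_bound(eps, alpha=0.05):
    """Smallest n such that observing zero violations in n i.i.d. trials
    certifies P(violation) <= eps at confidence level 1 - alpha.

    Derived from requiring (1 - eps)^n <= alpha, i.e.
    n >= ln(alpha) / ln(1 - eps), which scales like ln(1/alpha) / eps.
    """
    return math.ceil(math.log(alpha) / math.log(1.0 - eps))

# Required data grows like 1/eps: certifying eps <= 1e-8 takes roughly
# a million times more violation-free trials than eps <= 1e-2.
for eps in (1e-2, 1e-3, 1e-8):
    print(f"eps = {eps:g}: n >= {samples_for_zero_failure_bound(eps)}")
```

For ε = 10^-8 this comes to roughly 3 × 10^8 violation-free trajectories, which conveys why learning accurate confidence bounds at that level from finite driving data is argued to be infeasible.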