Learning pose estimation for UAV autonomous navigation and landing using visual-inertial sensor data
From Murray Wiki
Latest revision as of 18:18, 9 October 2022
| Title | Learning pose estimation for UAV autonomous navigation and landing using visual-inertial sensor data |
|---|---|
| Authors | Francesca Baldini, Animashree Anandkumar and Richard M. Murray |
| Source | 2020 American Control Conference (ACC) |
| Abstract | In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in the accuracy of the pose estimation of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing. |
| Type | Conference paper |
| URL | https://authors.library.caltech.edu/100568/1/1912.04527.pdf |
| DOI | |
| Tag | BAM20-acc |
| ID | 2019i |
| Funding | NSF VeHICaL |
| Flags |
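The "network-in-the-loop" idea described in the abstract can be illustrated with a minimal sketch: a learned pose estimator replaces the classical state estimator inside a closed-loop landing controller. Everything below is hypothetical and not the paper's actual architecture; the DNN (which in the paper consumes camera images and IMU data) is stubbed out as a noisy pose oracle, and the flight controller is a simple proportional law rather than AirSim's.

```python
import numpy as np

def dnn_pose_estimate(true_pose, rng):
    """Stand-in for the visual-inertial DNN estimator (hypothetical).
    In the paper this would take camera frames and IMU readings;
    here it simply returns the true pose corrupted by noise."""
    return true_pose + rng.normal(scale=0.05, size=3)

def landing_controller(est_pose, target, gain=0.5):
    """Proportional controller commanding velocity toward the landing target."""
    return gain * (target - est_pose)

def simulate_landing(start, target, steps=200, dt=0.1, seed=0):
    """Closed-loop simulation with the estimator 'in the loop':
    at each step the controller acts on the estimated pose, not the true one."""
    rng = np.random.default_rng(seed)
    pose = np.array(start, dtype=float)
    for _ in range(steps):
        est = dnn_pose_estimate(pose, rng)     # network in the loop
        vel = landing_controller(est, target)  # control computed from estimate
        pose = pose + dt * vel                 # integrate trivial dynamics
    return pose

final = simulate_landing(start=[5.0, 5.0, 10.0], target=[0.0, 0.0, 0.0])
print(np.linalg.norm(final))  # residual distance to the landing pad
```

Despite the estimation noise, the proportional loop drives the vehicle close to the pad, which is the point of evaluating the estimator inside a closed loop rather than only on open-loop odometry error.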