Risk-Averse Planning Under Uncertainty
From Murray Wiki
| Title | Risk-Averse Planning Under Uncertainty |
|---|---|
| Authors | Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray and Aaron D. Ames |
| Source | 2020 American Control Conference (ACC) |
| Abstract | We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and optimality criterion, the proposed method modifies the stochastic finite state controller, leading to sub-optimal solutions with lower coherent risk. |
| Type | Conference paper |
| URL | https://arxiv.org/abs/1909.12499 |
| DOI | |
| Tag | ahm+20-acc |
| ID | 2019l |
| Funding | |
| Flags | |
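
The abstract's central object is a stochastic finite-state (memory) controller for a POMDP: a fixed budget of k memory nodes, a randomized action rule per node, and a randomized node-transition rule per observation. The sketch below is only an illustration of that structure, not the paper's bounded policy iteration method; the toy POMDP, its transition and observation matrices, and the controller parameters `psi` and `eta` are all hypothetical values chosen for demonstration.

```python
import numpy as np

# Hypothetical toy POMDP: 2 states, 2 actions, 2 observations.
# All numbers are illustrative, not taken from the paper.
n_states, n_actions, n_obs = 2, 2, 2

# T[s, a, s'] : probability of moving from state s to s' under action a
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])

# O[s', o] : probability of observing o after landing in state s'
O = np.array([[0.8, 0.2],
              [0.1, 0.9]])

# Stochastic finite-state controller with a memory budget of k nodes:
# psi[q, a]     : probability of taking action a in controller node q
# eta[q, o, q'] : probability of moving to node q' after observing o in node q
k = 2
rng = np.random.default_rng(0)
psi = rng.dirichlet(np.ones(n_actions), size=k)    # each row sums to 1
eta = rng.dirichlet(np.ones(k), size=(k, n_obs))   # each row sums to 1

def rollout(T, O, psi, eta, s0=0, q0=0, horizon=20):
    """Simulate the controller on the POMDP and return the action trace."""
    s, q, trace = s0, q0, []
    for _ in range(horizon):
        a = rng.choice(n_actions, p=psi[q])   # sample action from current node
        s = rng.choice(n_states, p=T[s, a])   # environment transition
        o = rng.choice(n_obs, p=O[s])         # noisy observation of new state
        q = rng.choice(k, p=eta[q, o])        # controller memory update
        trace.append(a)
    return trace

print(rollout(T, O, psi, eta))
```

Because both `psi` and `eta` are finite stochastic matrices, a memory-bounded policy search (as in the paper's bounded policy iteration) reduces to optimizing over these probability tables rather than over unbounded belief-state histories.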