# SURF 2013: Robot Motion Planning with Complex Tasks


## Revision as of 17:57, 22 December 2012

**Title**: Robot Motion Planning with Complex Tasks

**Mentor**: Richard Murray

**Co-mentor**: Eric Wolff

**Overview**:
Widespread use of robots and autonomous vehicles requires that humans can naturally specify tasks and that robots automatically synthesize control policies to carry them out. Recent work specifies these complex tasks with temporal logics and automatically creates control policies that satisfy the task whenever one exists. Typically, this approach first creates a finite state abstraction (e.g., a graph) of the original continuous system and then uses graph search techniques to create a control policy. This project will extend current capabilities to stochastic systems in which actuator noise and uncertainty are important. Additionally, it will allow cost functions (such as time or fuel) to be minimized alongside the logical tasks. All software will be released as part of the Temporal Logic Planning Toolbox (TuLiP) [1]. Finally, there will be an opportunity to implement these algorithms on real robots in the lab as time permits.
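To make the abstraction-then-search pipeline concrete, here is a minimal sketch (not code from TuLiP): a hypothetical continuous workspace is abstracted into a grid of free cells, and a simple temporal task ("eventually reach the goal while avoiding obstacles") reduces to breadth-first search over the resulting graph. The grid layout, obstacle set, and function names are illustrative assumptions.

```python
from collections import deque

def make_grid_abstraction(width, height, obstacles):
    """Finite state abstraction: each free cell is a discrete state;
    edges connect 4-neighbor free cells (assumed controllable moves)."""
    states = {(x, y) for x in range(width) for y in range(height)} - obstacles
    edges = {s: [] for s in states}
    for (x, y) in states:
        for nbr in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nbr in states:
                edges[(x, y)].append(nbr)
    return edges

def plan(edges, start, goal):
    """BFS over the abstraction; a low-level controller would then track
    each cell-to-cell transition in the continuous system."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in edges[s]:
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None  # task is unsatisfiable from this start state

edges = make_grid_abstraction(4, 4, obstacles={(1, 1), (1, 2)})
path = plan(edges, start=(0, 0), goal=(3, 3))
```

In practice the abstraction step is the hard part (proving that every discrete transition can actually be implemented by the continuous dynamics); the graph search itself is standard.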

**Goals**:
This project will (i) expand the available solvers for stochastic systems [2] and optimal control problems [3], (ii) empirically analyze the performance of these solvers on robot motion planning problems, and (iii) implement the above algorithms on real robots in the lab. Open research questions include determining good finite state abstractions of stochastic systems (both theoretical performance bounds and computational techniques) and creating new state abstraction algorithms that exploit information from a cost function.
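For a flavor of what a solver for stochastic systems computes, here is a hedged sketch (not the algorithm of [2]): on a tiny Markov decision process with actuator noise, value iteration computes the maximum probability of satisfying a simple reachability specification ("eventually reach the goal"). The four-state chain and slip probabilities are invented for illustration.

```python
# transitions[state][action] = list of (next_state, probability).
# Each action intends a move but "slips" with some probability,
# modeling actuator noise; state 3 is the absorbing goal.
transitions = {
    0: {"right": [(1, 0.8), (0, 0.2)]},
    1: {"right": [(2, 0.8), (0, 0.2)]},
    2: {"right": [(3, 0.8), (1, 0.2)]},
    3: {"stay": [(3, 1.0)]},
}

def max_reach_probability(transitions, goal, iters=1000):
    """Bellman iteration for the reachability value function:
    V(s) = max_a sum_t P(t | s, a) * V(t), with V(goal) = 1."""
    V = {s: (1.0 if s == goal else 0.0) for s in transitions}
    for _ in range(iters):
        for s in transitions:
            if s == goal:
                continue
            V[s] = max(sum(p * V[t] for t, p in succ)
                       for succ in transitions[s].values())
    return V

V = max_reach_probability(transitions, goal=3)
```

For richer temporal logic specifications, the standard approach composes the MDP with an automaton for the formula and then solves a reachability problem of this kind on the product.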

**Required Skills**: Experience with Python, or the ability to learn it quickly, is required. Familiarity with, or interest in learning, automata theory, formal languages, and model checking (see the first five lectures in [4]) is desired. Some hands-on experience with robots (mechatronics) and the Robot Operating System (ROS) is desired but not necessary.

**References**:

[1] T. Wongpiromsarn, U. Topcu, N. Ozay, H. Xu, and R.M. Murray, TuLiP: a software toolbox for receding horizon temporal logic planning, International Conference on Hybrid Systems: Computation and Control, 2011 (software available at http://tulip-control.sourceforge.net).

[2] E.M. Wolff, U. Topcu, R.M. Murray, Robust control of uncertain Markov decision processes with temporal logic specifications. (http://www.cds.caltech.edu/~ewolff/publications.html).

[3] E.M. Wolff, U. Topcu, R.M. Murray, Optimal control with weighted average costs and temporal logic specifications. (http://www.cds.caltech.edu/~ewolff/publications.html).

[4] EECI, "Specification, Design, and Verification of Distributed Embedded Systems" course website, http://www.cds.caltech.edu/~murray/wiki/index.php/HYCON-EECI,_Spring_2012.