Finite-Horizon Optimal Control and Stabilization of Time-Scalable Systems
Alex Fax and Richard Murray
2000 Conference on Decision and Control
In this paper, we consider the optimal control of time-scalable systems. The time-scaling property is shown to convert the PDE associated with the Hamilton-Jacobi-Bellman (HJB) equation to a purely spatial PDE. Solution of this PDE yields the value function at a fixed time, and that solution can be scaled to find the value function at any point in time. Furthermore, in certain cases the unscaled control law stabilizes the system, and the unscaled value function acts as a Lyapunov function for that system. For the example of the nonholonomic integrator, this PDE is solved, and the resulting optimal trajectories coincide with the known solution to that problem.
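For readers who want the equations behind the abstract, the following is a minimal sketch, assuming a standard finite-horizon cost \int_t^T L(x,u)\,ds + \varphi(x(T)). The scaling factor \sigma and dilation \delta_\tau below are illustrative placeholders, not the specific forms derived in the paper.

\[
  -\frac{\partial V}{\partial t}(x,t)
    = \min_{u}\Big[\, L(x,u) + \frac{\partial V}{\partial x}(x,t)\, f(x,u) \Big],
  \qquad V(x,T) = \varphi(x).
\]
% Illustrative time-scaling ansatz, with horizon-to-go \tau = T - t:
\[
  V(x,t) = \sigma(\tau)\, W\!\big(\delta_\tau(x)\big).
\]
% Substituting an ansatz of this form cancels the explicit time derivative and leaves a purely
% spatial PDE for W, the value function at a fixed horizon; the value function at any other time
% is then recovered by rescaling W, as described in the abstract.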
- Conference Submission: http://www.cds.caltech.edu/~murray/preprints/fm00a-cdc.pdf
- Project(s): AFOSR