CDS 110b: Optimal Control
This lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and give a summary of Pontryagin's maximum principle.
Lecture Outline
- Introduction: two degree of freedom design and trajectory generation
- Review of optimization: necessary conditions for extrema, with and without constraints
- Optimal control: Pontryagin Maximum Principle
- Examples: bang-bang control and Caltech ducted fan (if time)
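As a companion to the bang-bang example in the outline, here is a small numerical sketch (not part of the course materials; it assumes Python with numpy and a double-integrator plant). Pontryagin's maximum principle implies that the minimum-time input for the double integrator with |u| <= 1 is bang-bang with at most one switch, with switching curve x1 = -x2|x2|/2:

```python
import numpy as np

def u_star(x1, x2):
    """Minimum-time bang-bang law for the double integrator, |u| <= 1.

    The switching function s = x1 + x2*|x2|/2 comes from Pontryagin's
    maximum principle: the optimal input is +/-1 with one switch on the
    curve s = 0.
    """
    s = x1 + 0.5 * x2 * abs(x2)
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -float(np.sign(x2))  # on the switching curve: coast to the origin

def simulate(x1=1.0, x2=0.0, dt=1e-3, T=3.0):
    # Forward-Euler simulation of xdot1 = x2, xdot2 = u under the
    # bang-bang feedback; the state chatters slightly near the switch
    # because of the discretization.
    for _ in range(int(T / dt)):
        u = u_star(x1, x2)
        x1, x2 = x1 + x2 * dt, x2 + u * dt
    return x1, x2
```

Starting from (1, 0), the closed loop applies u = -1 until it reaches the switching curve at roughly (0.5, -1), then u = +1 until it arrives near the origin at about t = 2, the minimum time for this initial condition.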
Lecture Materials
References and Further Reading
- Notes on Pontryagin's Maximum Principle (courtesy of Doug MacMynowski) - these come from a book on dynamic programming (DP) and use slightly different notation from what we used in class.
Frequently Asked Questions
Q: What do you mean by "penalizing" something, as in "Q >= 0 penalizes state error"?
The quadratic cost function J contains three quadratic terms: x^T Q x, u^T R u, and x(T)^T P1 x(T). When Q >= 0 and Q is relatively large, the state x makes a larger contribution to the value of J, so to keep J small, x must stay relatively small. Choosing a large Q therefore keeps x in a small region; this is what "penalizing" means.
So in optimal control design, the relative magnitudes of Q, R, and P1 express how much the designer cares about x, u, and x(T), respectively.
Zhipu Jin,13 Jan 03
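The effect described in the answer above can be seen numerically. The following sketch (assuming Python with numpy/scipy and a double-integrator plant; it is not part of the course materials) solves the LQR Riccati equation for two choices of Q:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: xdot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0]])

def lqr_gain(Q):
    # Solve the continuous-time algebraic Riccati equation for P,
    # then return the optimal state-feedback gain K = R^{-1} B^T P.
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P

K_small = lqr_gain(np.diag([1.0, 1.0]))    # mild state penalty
K_large = lqr_gain(np.diag([100.0, 1.0]))  # heavy penalty on position error
```

With Q = I the position gain is 1; with the position weight raised to 100 it grows to 10. The larger Q forces the state to the origin more aggressively, at the cost of larger control effort u = -Kx, which is exactly the trade-off between Q and R described above.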