CDS 110b: Optimal Control
Revision as of 07:44, 4 January 2006
See the current course homepage to find the most recent page available.
This lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and provide a summary of Pontryagin's maximum principle.
Lecture Outline
- Introduction: two degree-of-freedom design and trajectory generation
- Review of optimization: necessary conditions for extrema, with and without constraints
- Optimal control: Pontryagin Maximum Principle
- Examples: bang-bang control and Caltech ducted fan (if time)
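As a companion to the optimal-control item in the outline above, the necessary conditions from Pontryagin's maximum principle can be sketched as follows. This is the standard textbook form for a fixed-time, minimum-cost problem; the lecture's notation may differ slightly.

```latex
% Sketch of the Pontryagin necessary conditions (standard form).
\begin{aligned}
  & \text{minimize } J = \int_0^T L(x,u)\,dt
    \quad \text{subject to } \dot{x} = f(x,u),\ x(0) = x_0, \\[4pt]
  & H(x,u,\lambda) = L(x,u) + \lambda^{T} f(x,u)
    \qquad \text{(Hamiltonian)}, \\[4pt]
  & \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
    u^{*}(t) = \arg\min_{u} \, H\bigl(x^{*}(t), u, \lambda(t)\bigr)
    \qquad \text{(adjoint equation and minimum condition)}.
\end{aligned}
```

When the admissible inputs are bounded (e.g. |u| ≤ 1) and H is linear in u, the minimum condition forces u to one of its limits almost everywhere, which is the origin of the bang-bang behavior listed as an example above.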
Lecture Materials
References and Further Reading
- Excerpt from Lewis and Syrmos - This excerpt is from Lewis and Syrmos, 1995 and gives a derivation of the necessary conditions for optimality. Other parts of the book can be searched via Google Books and purchased online.
- Notes on Pontryagin's Maximum Principle (courtesy of Doug MacMynowski) - this comes from a book on dynamic programming (DP) and uses a slightly different notation than we used in class.
Frequently Asked Questions
Q: What do you mean by penalizing something, from Q>=0 "penalizes" state error?
According to the form of the quadratic cost function J, there are three quadratic terms: x^T Q x, u^T R u, and the terminal term x(T)^T P1 x(T). If Q is relatively large, the term x^T Q x makes a bigger contribution to the value of J. In order to keep J small, x must then be relatively small. So selecting a large Q keeps x in a small-value region. This is what the "penalizing" means.
So in the optimal control design, the relative sizes of Q, R, and P1 represent how important x, u, and x(T) are in the designer's concerns.
Zhipu Jin, 13 Jan 03
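The penalizing effect described in the answer above can be illustrated with a minimal scalar sketch. For a hypothetical first-order plant xdot = a x + b u with cost integrand q x^2 + r u^2 (no terminal term), the algebraic Riccati equation is scalar and can be solved in closed form; the plant and numbers below are illustrative assumptions, not from the lecture.

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*p - (b**2 / r) * p**2 + q = 0 for the stabilizing root p >= 0,
    and return the optimal state-feedback gain k = b*p/r."""
    p = r * (a + math.sqrt(a**2 + q * b**2 / r)) / b**2
    k = b * p / r
    return k

# Hypothetical unstable plant xdot = x + u (a = b = 1), with r = 1 fixed.
k_small_q = scalar_lqr(1.0, 1.0, 1.0, 1.0)    # q = 1
k_large_q = scalar_lqr(1.0, 1.0, 100.0, 1.0)  # q = 100

# Larger q gives a larger gain and a faster (more negative) closed-loop
# pole a - b*k, i.e. the state is driven to zero more aggressively.
print("pole with q=1:  ", 1.0 - k_small_q)
print("pole with q=100:", 1.0 - k_large_q)
```

Increasing q by a factor of 100 moves the closed-loop pole from roughly -1.4 to roughly -10, at the price of larger control effort, which is exactly the Q-versus-R trade-off the answer describes.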