CDS 110b: Linear Quadratic Optimal Control
Wednesday's lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and give a summary of Pontryagin's maximum principle.
Course Materials
- Notes on optimal control
- Notes on linear quadratic regulators
- MP3 of Wednesday lecture, 7 Feb 2007
- MP3 of Friday lecture, 9 Feb 2007
- dfan_lqr.m - Ducted fan LQR example
- Homework #5 (due 14 Feb @ 5 pm)
References and Further Reading
- Excerpt from LS95 on optimal control - This excerpt is from Lewis and Syrmos, 1995, and gives a derivation of the necessary conditions for optimality. A few pages containing some additional examples have been left out from the middle (you can find similar examples in related books in the library, if you are interested). Other parts of the book can be searched via Google Books and purchased online.
- Notes on Pontryagin's Maximum Principle - these come from a set of lecture notes on optimization and control by Richard Weber at Cambridge University. The notes are based on dynamic programming (DP) and use a slightly different notation than we used in class.
Frequently Asked Questions
Q: In the example on bang-bang control discussed in the lecture, how is the control law for $u$ obtained?
Pontryagin's maximum principle says that $u$ has to be chosen to minimise the Hamiltonian $H$ for given values of $x$ and $\lambda$. In the example, $H$ is linear in $u$ and the input is constrained to $|u| \le 1$. At first glance it seems that the more negative $u$ is, the more $H$ will be minimised, and since the most negative value of $u$ allowed is $-1$, we would take $u = -1$. However, the coefficient multiplying $u$ in $H$ may be of either sign. Therefore the sign of $u$ has to be chosen so that the term involving $u$ is negative, which is how the switching (bang-bang) form of the control law is obtained: $u = -1$ or $u = +1$ depending on the sign of the coefficient of $u$ in $H$.
Shaunak Sen, 12 Jan 06
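To make the sign argument above concrete, here is a sketch of the minimisation for a double integrator with a bounded input; the specific dynamics and cost are assumptions for illustration and may differ in detail from the example used in lecture:

    \dot{x}_1 = x_2, \qquad \dot{x}_2 = u, \qquad |u| \le 1,
    \qquad H(x, u, \lambda) = L(x) + \lambda_1 x_2 + \lambda_2 u .
    % The only u-dependent term in H is \lambda_2 u, so minimising H over
    % |u| <= 1 means making \lambda_2 u as negative as possible:
    u^*(t) = -\operatorname{sgn}\bigl(\lambda_2(t)\bigr)
           = \begin{cases} -1, & \lambda_2(t) > 0, \\ +1, & \lambda_2(t) < 0. \end{cases}

The optimal input therefore switches between its extreme values whenever $\lambda_2$ changes sign, which is why this is called bang-bang control.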
Q: Notation question for you: in the lecture notes from Wednesday, I'm assuming that $T$ is the final time and that a superscript $T$ denotes the transpose operation. Am I correct in my assumption?
Yes, you are correct.
Jeremy Gillula, 07 Jan 05
Q: What do you mean by "penalizing" something, as in "$Q \ge 0$ penalizes state error"?
According to the form of the quadratic cost function, there are three quadratic terms: $x^T Q x$, $u^T R u$, and $x^T(T) P_1 x(T)$. When $Q$ is relatively big, the state $x$ makes a bigger contribution to the value of the cost $J$. In order to keep $J$ small, $x$ must then be relatively small. So selecting a big $Q$ keeps $x$ in a region of small values; this is what "penalizing" means.
So in the optimal control design, the relative sizes of $Q$, $R$, and $P_1$ represent how important $x$, $u$, and $x(T)$ are in the designer's concerns.
Zhipu Jin, 13 Jan 03
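As a small numerical illustration of the answer above, here is a minimal sketch assuming a double-integrator plant (not the ducted fan from dfan_lqr.m) and arbitrarily chosen weights; it requires MATLAB's Control System Toolbox for lqr:

    % Double integrator: xdot = [0 1; 0 0]*x + [0; 1]*u
    A = [0 1; 0 0];
    B = [0; 1];
    R = 1;                    % input weight (penalizes u)
    Qsmall = eye(2);          % mild state weight
    Qlarge = 100*eye(2);      % heavy state weight

    Ksmall = lqr(A, B, Qsmall, R);
    Klarge = lqr(A, B, Qlarge, R);

    % The gains grow with Q: the controller uses more input effort to keep
    % the state small, i.e. the state error is penalized more heavily.
    disp(Ksmall)
    disp(Klarge)

Increasing $R$ instead has the opposite effect: the gains shrink and the closed-loop response becomes slower, since input effort is what is being penalized.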