CDS 110b: Optimal Control
This lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and provide a summary of Pontryagin's maximum principle; a sketch of the resulting necessary conditions is given below.
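For reference, here is a brief sketch of the necessary conditions from the maximum principle, roughly in the notation of Chapter 2 of the course text (the exact notation used in lecture may differ slightly). For the system <math>\dot x = f(x, u)</math> with cost <math>J = \int_0^T L(x, u)\, dt + V(x(T))</math>, define the Hamiltonian <math>H = L + \lambda^T f</math>. Along an optimal trajectory the costate satisfies <math>\dot\lambda = -\left(\partial H / \partial x\right)^T</math> with terminal condition <math>\lambda(T) = \left(\partial V / \partial x\right)^T</math>, and the optimal input minimises the Hamiltonian pointwise, <math>u^*(t) = \arg\min_u H(x^*(t), u, \lambda(t))</math> (equivalently <math>\partial H / \partial u = 0</math> when the input is unconstrained).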
- Lecture notes: optimal control
- Homework 2 (due 22 Jan @ 5 pm): problems 2.3, 2.4a-d, 2.6
References and Further Reading
- R. M. Murray, Optimization-Based Control. Preprint, 2008: Chapter 2 - Optimal Control
- Excerpt from LS95 on optimal control - This excerpt is from Lewis and Syrmos, 1995, and gives a derivation of the necessary conditions for optimality. A few pages containing some additional examples have been left out from the middle (you can find similar examples in comparable books in the library, if you are interested). Other parts of the book can be searched via Google Books and purchased online.
- Notes on Pontryagin's Maximum Principle - These come from a set of lecture notes on optimization and control by Richard Weber at Cambridge University. The notes are based on dynamic programming (DP) and use slightly different notation than we used in class.
Frequently Asked Questions
Q: In problem 2.4(d) of the homework, to what positive value should the parameter b be set?
Use b = 1 for part d when solving for and comparing the two trajectories found symbolically in previous parts.
Julia Braman, 18 Jan 08
Q: In the example on Bang-Bang control discussed in the lecture, how is the control law for <math>u</math> obtained?
Pontryagin's Maximum Principle says that <math>u</math> has to be chosen to minimise the Hamiltonian for given values of <math>x</math> and <math>\lambda</math>. In the example, the Hamiltonian is affine in the input, with <math>u</math>-dependent term <math>\lambda^T B u</math> and the constraint <math>|u| \le 1</math>. At first glance, it seems that the more negative <math>u</math> is, the more the Hamiltonian will be minimised, and since the most negative value of <math>u</math> allowed is <math>-1</math>, we would take <math>u = -1</math>. However, the coefficient <math>\lambda^T B</math> of <math>u</math> may be of either sign. Therefore, the sign of <math>u</math> has to be chosen such that the term <math>\lambda^T B u</math> is negative. That's how we come up with <math>u = -\mathrm{sgn}(\lambda^T B)</math>.
Shaunak Sen, 12 Jan 06
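To see this concretely, here is a small numerical check (a sketch only; the values of B and lambda below are arbitrary illustrations, not the matrices from the lecture example). It evaluates the <math>u</math>-dependent part of the Hamiltonian on a grid of admissible inputs and confirms that the minimiser agrees with <math>-\mathrm{sgn}(\lambda^T B)</math>.

import numpy as np

# Sketch: check that u = -sign(lambda^T B) minimises the u-dependent
# part of the Hamiltonian, lambda^T B u, over the admissible set |u| <= 1.
# B and lam below are arbitrary illustrative values, not from lecture.
rng = np.random.default_rng(0)
B = rng.standard_normal(2)
lam = rng.standard_normal(2)

u_grid = np.linspace(-1.0, 1.0, 201)
H_u = (lam @ B) * u_grid            # only part of H that depends on u
u_star = u_grid[np.argmin(H_u)]

print("lambda^T B        =", lam @ B)
print("grid minimiser    =", u_star)
print("-sign(lambda^T B) =", -np.sign(lam @ B))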
Q: Notation question for you: In the lecture notes from Wednesday, I'm assuming that <math>T</math> is the final time and <math>{}^T</math> (superscript T) is a transpose operation. Am I correct in my assumption?
Yes, you are correct.
Jeremy Gillula, 07 Jan 05