CDS 110b: Linear Quadratic Optimal Control


This Wednesday's lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and provide a summary of Pontryagin's maximum principle.

Course Materials

References and Further Reading


Frequently Asked Questions

Q: In the example on bang-bang control discussed in the lecture, how is the control law for $u$ obtained?

Pontryagin's Maximum Principle says that $u$ has to be chosen to minimise the Hamiltonian $H$ for given values of $x$ and $\lambda$. In the example, the Hamiltonian contains the term $\lambda^T B u$, which is linear in $u$, with the input constrained to $|u| \le 1$. At first glance, it seems that the more negative $u$ is, the more $H$ will be minimised, and since the most negative value of $u$ allowed is $-1$, this suggests $u = -1$. However, the coefficient $\lambda^T B$ of $u$ may be of either sign. Therefore, the sign of $u$ has to be chosen such that the sign of the term $\lambda^T B u$ is negative. That's how we come up with $u = -\mathrm{sgn}(\lambda^T B)$.

Shaunak Sen, 12 Jan 06
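
To see the sign argument numerically, here is a minimal sketch in Python. The double-integrator matrix $B$ and the $|u| \le 1$ bound below are illustrative assumptions rather than the exact system from lecture; the point is only that the minimizing input flips with the sign of the coefficient $\lambda^T B$.

```python
import numpy as np

# Input matrix of an assumed double integrator (illustrative, not the lecture's exact example).
B = np.array([[0.0],
              [1.0]])

def bang_bang_input(lam):
    """Return the u in [-1, 1] that minimizes the Hamiltonian term lam^T B u."""
    coeff = (lam @ B).item()          # coefficient of u in the Hamiltonian
    if coeff == 0.0:
        return 0.0                    # singular case: the Hamiltonian does not depend on u
    return -np.sign(coeff)            # pick the bound that makes lam^T B u as negative as possible

# Costates with opposite signs of lam^T B give opposite bang-bang inputs.
print(bang_bang_input(np.array([0.5, 2.0])))    # coeff > 0  ->  u = -1
print(bang_bang_input(np.array([0.5, -2.0])))   # coeff < 0  ->  u = +1
```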

Q: Notation question for you: In the lecture notes from Wednesday, I'm assuming that $T$ is the final time and that the superscript $T$ (as in $x^T$) is a transpose operation. Am I correct in my assumption?

Yes, you are correct.

Jeremy Gillula, 07 Jan 05

Q: What do you mean by "penalizing" something, as in the statement that $Q \ge 0$ "penalizes" state error?

According to the form of the quadratic cost function $J = \int_0^T \left( x^T Q x + u^T R u \right) dt + x(T)^T P_1 x(T)$, there are three quadratic terms: $x^T Q x$, $u^T R u$, and $x(T)^T P_1 x(T)$. When $Q$ is relatively big, the value of $x^T Q x$ has a bigger contribution to the value of $J$. In order to keep $J$ small, $x$ must be relatively small. So selecting a big $Q$ keeps $x$ in small value regions. This is what "penalizing" means.

So in the optimal control design, the relative values of $Q$, $R$, and $P_1$ represent how important $x$, $u$, and $x(T)$ are in the designer's concerns.

Zhipu Jin, 13 Jan 03
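
One quick way to see this trade-off is to compare the optimal gains for a small and a large $Q$ on a toy system. The sketch below uses an infinite-horizon LQR on an assumed double integrator via scipy, so there is no terminal $P_1$ term; the system and weights are illustrative assumptions, not taken from the lecture notes. Increasing $Q$ relative to $R$ produces a larger gain and faster closed-loop poles, i.e. the state error is penalized more heavily at the price of larger control effort.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed double integrator (illustrative example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0]])                       # control weight held fixed

for q_scale in (1.0, 100.0):
    Q = q_scale * np.eye(2)                 # larger Q "penalizes" the state more
    P = solve_continuous_are(A, B, Q, R)    # steady-state Riccati solution
    K = np.linalg.solve(R, B.T @ P)         # optimal gain, u = -K x
    poles = np.linalg.eigvals(A - B @ K)    # closed-loop eigenvalues
    print(f"q_scale={q_scale:6.1f}  K={np.round(K, 2)}  poles={np.round(poles, 2)}")
```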