CDS 110b: Optimal Control


This lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and summarize Pontryagin's maximum principle.

Lecture Outline

  1. Introduction: two degree-of-freedom design and trajectory generation
  2. Review of optimization: necessary conditions for extrema, with and without constraints
  3. Optimal control: Pontryagin Maximum Principle (a standard statement is summarized after this outline)
  4. Examples: bang-bang control and Caltech ducted fan (if time)
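
For reference, here is a standard statement of the maximum principle in its minimum-cost form; the notation is generic and may not match the lecture notes exactly. For the problem of minimizing J = \int_0^T L(x,u)\,dt + \phi(x(T)) subject to \dot x = f(x,u), x(0) = x_0, define the Hamiltonian

  H(x, \lambda, u) = L(x, u) + \lambda^T f(x, u).

An optimal state/input pair (x^*, u^*) must then satisfy, for some costate \lambda(t),

  \dot x^* = (\partial H / \partial \lambda)^T, \qquad
  \dot \lambda = -(\partial H / \partial x)^T, \qquad
  \lambda(T) = (\partial \phi / \partial x)^T \big|_{x^*(T)},

with the optimal input minimizing the Hamiltonian pointwise in time:

  u^*(t) = \arg\min_{u \in \mathcal{U}} H(x^*(t), \lambda(t), u).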

Lecture Materials

References and Further Reading

Frequently Asked Questions

Q: What do you mean by "penalizing" something, as in "Q >= 0 penalizes the state error"?

According to the form of the quadratic cost function

  J = \int_0^T \left( x^T Q x + u^T R u \right) dt + x^T(T) P_1 x(T),

there are three quadratic terms: x^T Q x, u^T R u, and x^T(T) P_1 x(T). If Q is relatively large, the state x makes a relatively large contribution to J; to keep J small, x must then be kept relatively small. Selecting a large Q therefore keeps x in a small region, and this is what "penalizing" the state error means.

So in an optimal control design, the relative sizes of Q, R, and P_1 express how important the state x, the input u, and the terminal state x(T) are in the designer's concerns.

Zhipu Jin, 13 Jan 03
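
To make this concrete, below is a minimal numerical sketch (not part of the course materials) of how the choice of Q affects the resulting controller. It uses a hypothetical double-integrator plant and the infinite-horizon LQR problem (no terminal term), solving the algebraic Riccati equation with scipy; the plant, weights, and variable names are illustrative assumptions.

  # Minimal sketch: effect of the state weight Q on the LQR gain,
  # for an assumed double-integrator plant xdot = A x + B u.
  import numpy as np
  from scipy.linalg import solve_continuous_are

  A = np.array([[0.0, 1.0],
                [0.0, 0.0]])
  B = np.array([[0.0],
                [1.0]])
  R = np.array([[1.0]])              # input weight (held fixed)

  for q in (1.0, 100.0):             # "small" vs "large" state penalty
      Q = q * np.eye(2)
      P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
      poles = np.linalg.eigvals(A - B @ K)   # closed-loop eigenvalues
      print(f"q = {q:6.1f}   K = {np.round(K, 3)}   poles = {np.round(poles, 3)}")

Running this should show that the larger Q yields a larger gain K and faster closed-loop poles: the controller spends more input effort to drive the state error to zero quickly, which is exactly the penalization described above.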