CDS 110b: Linear Quadratic Regulators

* {{cds110b-wi08 pdfs|L3-1_lqr.pdf|Lecture Presentation}}
* {{cds110b-wi08 pdfs|hw3.pdf|Homework 3}} - due 30 Jan 08

This lecture provides a brief derivation of the linear quadratic regulator (LQR) and describes how to design an LQR-based compensator. The use of integral feedback to eliminate steady state error is also described.
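For a rough sense of how an LQR gain is computed in practice, here is a minimal numerical sketch (not part of the lecture notes; the double-integrator model, weight matrices, and use of SciPy are illustrative assumptions):

<syntaxhighlight lang="python">
# Minimal LQR sketch: compute the state feedback u = -K x minimizing
#   J = int_0^inf (x^T Q x + u^T R u) dt
# for a double integrator xdot = A x + B u.  Illustrative values only.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: states are position and velocity, input is a force
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Q penalizes state error, R penalizes control effort
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Solve the algebraic Riccati equation  A^T P + P A - P B R^-1 B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal gain  K = R^-1 B^T P
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# The closed loop  xdot = (A - B K) x  should be stable
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
</syntaxhighlight>

Increasing the entries of Q relative to R trades control effort for faster regulation of the state, which is the design trade-off discussed in the FAQ below.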

== References and Further Reading ==

* R. M. Murray, ''Optimization-Based Control''. Preprint, 2008: Chapter 2 - Optimal Control
* Lewis and Syrmos, Section 3.4 - this follows the derivation in the notes above. I am not putting in a scan of this chapter since the course text is available, but you are free to have a look via Google Books.
* Friedland, Chapter 9 - the derivation of the LQR controller is done differently, so it gives an alternate approach.

== Frequently Asked Questions ==

Q: What do you mean by "penalizing" something, as in "<math>Q \geq 0</math> penalizes state error"?

According to the form of the quadratic cost function <math>J = \int_0^T \left( x^T Q x + u^T R u \right) dt + x^T(T) P_1 x(T)</math>, there are three quadratic terms: <math>x^T Q x</math>, <math>u^T R u</math>, and <math>x^T(T) P_1 x(T)</math>. If <math>Q</math> is relatively large, the term <math>x^T Q x</math> makes a larger contribution to <math>J</math>. In order to keep <math>J</math> small, <math>x</math> must then be relatively small. So choosing a large <math>Q</math> keeps <math>x</math> in a small region. This is what "penalizing" means.

So in the optimal control design, the relative sizes of <math>Q</math>, <math>R</math>, and <math>P_1</math> reflect how important <math>x</math>, <math>u</math>, and <math>x(T)</math> are in the designer's concerns.

Zhipu Jin, 13 Jan 03
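To make the effect of <math>Q</math> concrete, here is a small added illustration (not part of the original answer; the scalar example and SciPy call are assumptions for demonstration). For the scalar system <math>\dot{x} = u</math> with cost <math>\int_0^\infty (q x^2 + r u^2)\,dt</math>, the optimal gain works out to <math>k = \sqrt{q/r}</math>, so increasing the state weight <math>q</math> gives a larger gain and drives <math>x</math> to zero faster:

<syntaxhighlight lang="python">
# Scalar illustration of how Q "penalizes" the state: for xdot = u with
# cost int (q x^2 + r u^2) dt, the LQR gain is k = sqrt(q/r), so a larger
# q pulls x to zero faster at the price of more control effort.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0]])
B = np.array([[1.0]])
R = np.array([[1.0]])            # fixed input weight r = 1

for q in [1.0, 10.0, 100.0]:     # increasing state penalty
    Q = np.array([[q]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    # Closed loop xdot = -k x decays like exp(-k t); larger q => larger k
    print(f"q = {q:6.1f}  ->  k = {K[0, 0]:.3f}   (sqrt(q/r) = {np.sqrt(q / R[0, 0]):.3f})")
</syntaxhighlight>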