# CDS 110b: Linear Quadratic Regulators

This lecture provides a brief derivation of the linear quadratic regulator (LQR) and describes how to design an LQR-based compensator. The use of integral feedback to eliminate steady-state error is also described.

• R. M. Murray, Optimization-Based Control. Preprint, 2008: Chapter 2 - Optimal Control
• Lewis and Syrmos, Section 3.4 - this follows the derivation in the notes above. I am not putting in a scan of this chapter since the course text is available, but you are free to have a look via Google Books.
• Friedland, Ch 9 - derives the LQR controller differently, giving an alternative approach.

Q: What is meant by "penalizing" something, as in the statement that ${\displaystyle Q_{x}\geq 0}$ "penalizes" state error?
The quadratic cost function ${\displaystyle J}$ contains three quadratic terms: ${\displaystyle x^{T}Q_{x}x}$, ${\displaystyle u^{T}Q_{u}u}$, and ${\displaystyle x(T)^{T}P_{1}x(T)}$. Since ${\displaystyle Q_{x}\geq 0}$, the term ${\displaystyle x^{T}Q_{x}x}$ is always nonnegative, and if ${\displaystyle Q_{x}}$ is relatively large, the state ${\displaystyle x}$ makes a correspondingly large contribution to ${\displaystyle J}$. To keep ${\displaystyle J}$ small, the optimal controller must then keep ${\displaystyle x}$ small. So choosing a large ${\displaystyle Q_{x}}$ keeps ${\displaystyle x}$ in a small region; this is what "penalizing" means.
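For concreteness, the three terms above combine into the standard finite-horizon quadratic cost (in the notation of the course notes; the integral form is the usual one and is assumed here rather than quoted from this page):

${\displaystyle J=\int_{0}^{T}\left(x^{T}Q_{x}x+u^{T}Q_{u}u\right)\,dt+x(T)^{T}P_{1}x(T)}$

The first term accumulates the state penalty over the horizon, the second accumulates the control effort, and the third penalizes the terminal state.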
So in the optimal control design, the relative magnitudes of ${\displaystyle Q_{x}}$, ${\displaystyle Q_{u}}$, and ${\displaystyle P_{1}}$ reflect how heavily the designer weights the state ${\displaystyle x}$, the input ${\displaystyle u}$, and the terminal state ${\displaystyle x(T)}$.
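As an illustrative sketch (not part of the lecture), the effect of increasing ${\displaystyle Q_{x}}$ can be seen numerically. The double-integrator system below is a hypothetical example, and SciPy's algebraic Riccati solver stands in for the finite-horizon derivation by computing the steady-state (infinite-horizon) LQR gain:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical example system: a double integrator, xdot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Qu = np.array([[1.0]])  # input weight Q_u

def lqr_gain(qx_scale):
    """Solve the continuous-time algebraic Riccati equation and
    return the optimal state-feedback gain K, where u = -K x."""
    Qx = qx_scale * np.eye(2)
    P = solve_continuous_are(A, B, Qx, Qu)
    return np.linalg.solve(Qu, B.T @ P)

for qx_scale in (1.0, 100.0):
    K = lqr_gain(qx_scale)
    print(f"Qx = {qx_scale:>5} * I  ->  K = {K.ravel()}")
```

Increasing the scale of ${\displaystyle Q_{x}}$ relative to ${\displaystyle Q_{u}}$ produces a larger feedback gain, i.e. a more aggressive controller that drives the state to zero faster at the cost of larger control inputs.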