CDS 110b: Receding Horizon Control


This set of lectures provides an introduction to receding horizon control and its use in two degree-of-freedom control design.

  • Lecture slides on RHC overview (Mon)
  • Lecture notes on RHC analysis (Wed) - the notation in these notes is slightly different from the text. In lecture, I will use the textbook notation.
  • HW #4 (due 6 Feb 08): Problems 3.1, 3.2, and 3.3. Students working on the course project should do Problems 3.1 and 3.3 only.

References and Further Reading

  • R. M. Murray, Optimization-Based Control. Preprint, 2008: Chapter 3 - Receding Horizon Control
  • Online Control Customization via Optimization-Based Control, R. M. Murray et al. In Software-Enabled Control: Information Technology for Dynamical Systems, T. Samad and G. Balas (eds.), IEEE Press, 2001. This paper describes the CLF-based nonlinear RHC approach and its application to the Caltech ducted fan using NTG.

  • Constrained model predictive control: Stability and optimality, D. Q. Mayne, J. B. Rawlings, C. V. Rao and P. O. M. Scokaert. Automatica, 2000, Vol. 36, No. 6, pp. 789-814. This is one of the most frequently cited survey papers on MPC. It gives a nice overview of its history and explains the most important issues and various approaches.

Frequently Asked Questions

Q: How do you do trajectory optimization using differential flatness?

The basic idea in using flatness for optimal trajectory generation is to rewrite the cost function and constraints in terms of the flat outputs and then parameterize the flat outputs in terms of a set of basis functions:

$$z_j(t) = \sum_{i=1}^{N} \alpha_i \psi_i(t).$$

Here, $\psi_i(t)$ are the basis functions (e.g., $\psi_i(t) = t^{i-1}$) and $\alpha_i$ are constant coefficients.

Once you have parameterized the flat outputs by $\alpha$, you can convert all expressions involving $x$ and $u$ into functions involving $\alpha$. This process is described in more detail in the lecture notes (Section 4).
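As a concrete illustration, here is a minimal sketch (not from the course materials) for a double integrator $\ddot{x} = u$, whose flat output is simply $z = x$. The monomial basis $\psi_i(t) = t^i$ and the particular boundary conditions are illustrative assumptions; with exactly four coefficients the boundary conditions determine $\alpha$ completely, so the optimization reduces to a linear solve.

```python
import numpy as np

# Illustrative sketch: point-to-point trajectory for a double integrator
# x'' = u, whose flat output is z = x.  Parameterize
#   z(t) = sum_i alpha_i * t**i   (monomial basis, psi_i(t) = t**i)
# and solve a linear system for alpha so the boundary conditions hold.

T = 2.0                      # final time (assumed value)
z0, zd0 = 0.0, 0.0           # initial position and velocity
zT, zdT = 1.0, 0.0           # final position and velocity

def basis_row(t, deriv, n=4):
    """Row of [d^deriv/dt^deriv  t**i] for i = 0..n-1."""
    row = np.zeros(n)
    for i in range(n):
        if i >= deriv:
            c = np.prod(np.arange(i, i - deriv, -1))  # falling factorial
            row[i] = c * t ** (i - deriv)
    return row

# Stack boundary conditions z(0), z'(0), z(T), z'(T) into M alpha = b
M = np.vstack([basis_row(0.0, 0), basis_row(0.0, 1),
               basis_row(T, 0), basis_row(T, 1)])
b = np.array([z0, zd0, zT, zdT])
alpha = np.linalg.solve(M, b)

# Recover the input u(t) = z''(t) directly from the flat output
ts = np.linspace(0.0, T, 5)
u = np.array([basis_row(t, 2) @ alpha for t in ts])
print("alpha =", alpha)
print("u(t) samples:", u)
```

With more basis functions than constraints, the leftover degrees of freedom in $\alpha$ would be chosen by minimizing the (rewritten) cost, which is where the optimization actually enters.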

Q: Is the condition given by Jadbabaie and Hauser an example of a CLF or the definition of a CLF?

I was a bit sloppy defining CLFs in lecture. The formal definition is given in the lecture notes (Section 2.2, Defn 1). Briefly, given a system

$$\dot{x} = f(x, u), \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}^m,$$

we say that a (smooth) function $V : \mathbb{R}^n \to \mathbb{R}$ is a control Lyapunov function (CLF) if

  • $V(x) > 0$ for all $x \neq 0$
  • $V(x) = 0$ if and only if $x = 0$
  • the derivative of $V$ along trajectories of the system satisfies
$$\inf_{u} \dot{V} = \inf_{u} \frac{\partial V}{\partial x} f(x, u) < 0 \quad \text{for all } x \neq 0.$$

The condition for stability given in lecture is that there exists a CLF $V$ for the system that in addition satisfies the relationship

$$\min_{u} \bigl( \dot{V} + L(x, u) \bigr) \le 0$$

along the trajectories of the system. Thus we have to have the derivative of $V$ be sufficiently negative definite in order to ensure that the terminal cost provides stability.
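To see how this condition can hold, here is a small numerical sketch under an assumed linear-quadratic setup (not from the lecture): for $\dot{x} = Ax + Bu$ with incremental cost $L = x^T Q x + u^T R u$, the LQR value function $V(x) = x^T P x$, with $P$ from the algebraic Riccati equation, satisfies $\min_u (\dot{V} + L) = 0$, so it meets the terminal-cost condition with equality.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example system: a double integrator with quadratic cost.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = solve_continuous_are(A, B, Q, R)     # Riccati solution: V(x) = x'Px
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    u = -K @ x
    xdot = A @ x + B @ u
    Vdot = 2 * x @ P @ xdot              # d/dt (x'Px) along trajectories
    L = x @ Q @ x + u @ R @ u
    print(f"Vdot + L = {Vdot + L:+.2e}")  # ~0 up to roundoff
```

For nonlinear systems the equality is generally not achievable, which is why the condition is stated as an inequality: the terminal cost just has to decrease at least as fast as the incremental cost accumulates.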

Q: Why do receding horizon trajectories need to go to zero (on slide 4)?

It is common in control problems to assume that the desired trajectory goes to zero as its desired end state. This is implicitly the case whenever you see an integral cost of the form $\int_0^T L(x, u)\, dt$ or a terminal cost $V(x(T))$, both of which are minimized when $x$ is zero. There are two ways to think about this:

  • If we wish to move to a different (equilibrium) point $x_d$, we can always change the state to $z = x - x_d$, and then the new state $z$ has zero as the desired equilibrium point.
  • If we want to track a trajectory $x_d(t)$ (not constant), then we can solve the problem for the error system $e = x - x_d(t)$, obtained by subtracting the desired state.

This is explained in more detail in the lecture notes on LQR control (Section 3).
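A minimal sketch of the first approach, under an assumed double-integrator setup: shift the state by the desired equilibrium and apply the standard origin-seeking controller in the shifted coordinates.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed setup: regulate a double integrator to a nonzero setpoint x_d by
# working in the shifted state z = x - x_d, which puts the desired
# equilibrium at the origin where the standard cost applies.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # LQR gain for the shifted system

x_d = np.array([1.0, 0.0])           # target: position 1, velocity 0
x = np.array([0.0, 0.0])
dt = 0.01
for _ in range(2000):                # forward-Euler simulation
    z = x - x_d                      # shifted state: origin is the target
    u = -K @ z
    x = x + dt * (A @ x + B @ u)

print("final state:", x)             # approaches x_d = [1, 0]
```

Here the shift is exact because $x_d$ is an equilibrium of the unforced dynamics ($A x_d = 0$), so $\dot{z} = Az + Bu$; for a time-varying $x_d(t)$, the error system picks up additional terms as described in the lecture notes.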