CDS 110b: Receding Horizon Control

From Murray Wiki
Revision as of 04:31, 19 January 2006 by Murray (talk | contribs)
WARNING: This page is for a previous year.
See current course homepage to find most recent page available.

This lecture presents an overview of receding horizon control (RHC). In addition to summarizing the available theoretical results, we introduce the concept of differential flatness as a tool for simplifying RHC problems and provide an example of RHC on the Caltech ducted fan.

Lecture Outline

  1. Receding Horizon Control
    • Problem Formulation
    • Stability theorems
  2. Differential Flatness and Trajectory Generation
    • Definitions
    • Properties
    • Examples
  3. Examples: Caltech ducted fan, satellite formation flight, multi-vehicle testbed

Lecture Materials

References and Further Reading

  • Flat systems, equivalence and trajectory generation, Philippe Martin, Richard Murray, Pierre Rouchon, CDS Technical Report, 2003 - this is a very detailed report on differential flatness, including the various conditions that are known for checking flatness. You shouldn't need nearly this level of detail to do the homework set and understand the basic concepts, but it's available if you want to become an expert.

Frequently Asked Questions

Q: How come there is no MP3 recording for today's lecture?

Technology glitch. My MP3 recorder didn't start up correctly and so I don't have any audio record of the lecture. I have found and fixed the problem, but if you didn't attend today's lecture you'll have to rely on the lecture slides, notes, and friends.

Q: How is differential flatness defined?

A system of the form

$\dot x = f(x, u), \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m,$

is said to be differentially flat if there exists an integer $p$ and a (smooth) function of the form

$z = h(x, u, \dot u, \dots, u^{(p)}), \qquad z \in \mathbb{R}^m,$

such that all solutions of the differential equation can be written in terms of $z$ and a finite number of its derivatives with respect to time. In other words, $x$ and $u$ satisfying the dynamics of the system have the form

$x = \alpha(z, \dot z, \dots, z^{(q)}), \qquad u = \beta(z, \dot z, \dots, z^{(q)})$

for some integer $q$ and smooth functions $\alpha$ and $\beta$. The variable $z$ is often called the flat output and if a system is differentially flat then the number of flat outputs is equal to the number of inputs to the system.

Checking a system for flatness is difficult, but there are certain classes of systems for which there are necessary and sufficient conditions. Usually you find the flat outputs by a combination of physical insight and trial and error.
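As a concrete illustration of the definition above, here is a minimal sketch (not from the lecture) using the double integrator $\ddot q = u$, which is differentially flat with flat output $z = q$: the state and input are recovered algebraically from $z$ and its derivatives, with no integration of the dynamics.

```python
import numpy as np

# Sketch: the double integrator q'' = u is differentially flat with
# flat output z = q.  Given any sufficiently smooth curve z(t), the
# state (q, q') and the input u follow algebraically from z, z', z''.

def flat_to_state_and_input(z, dz, ddz):
    """Map the flat output and its derivatives to (state, input)."""
    x = np.array([z, dz])   # state:  q = z,  q' = z'
    u = ddz                 # input:  u = z''
    return x, u

# Example curve z(t) = t**3, so z' = 3 t**2 and z'' = 6 t.
t = 2.0
x, u = flat_to_state_and_input(t**3, 3 * t**2, 6 * t)
print(x, u)   # state [8, 12], input 12
```

Here the maps $\alpha$ and $\beta$ are simply $(z, \dot z)$ and $\ddot z$; for more interesting flat systems (e.g., the kinematic car) these maps involve nonlinear combinations of the derivatives.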


Q: How do you do trajectory optimization using differential flatness?

The basic idea in using flatness for optimal trajectory generation is to rewrite the cost function and constraints in terms of the flat outputs and then parameterize the flat outputs in terms of a set of basis functions:

$z_j(t) = \sum_{i=1}^{N} \alpha_i \psi_i(t).$

Here, $\psi_i(t)$ are the basis functions (e.g., polynomials or B-splines) and $\alpha_i$ are constant coefficients.

Once you have parameterized the flat outputs by $\alpha_i$, you can convert all expressions involving $z$ into functions involving $\alpha$. This process is described in more detail in the lecture notes (Section 4).
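The parameterization above can be sketched numerically. In this illustrative example (boundary conditions, horizon, and basis chosen here for simplicity, not taken from the lecture), the flat output of the double integrator is expanded in a monomial basis $\psi_i(t) = t^i$ and the coefficients $\alpha_i$ are chosen to meet the boundary conditions; with exactly four basis functions the conditions determine $\alpha$ uniquely, while extra basis functions would leave freedom to minimize a cost.

```python
import numpy as np

# Flatness-based trajectory generation for the double integrator
# q'' = u, flat output z = q, on the horizon [0, T].
T = 1.0
N = 4  # basis functions psi_0, ..., psi_3 (monomials t**i)

def basis(t):
    return np.array([t**i for i in range(N)])

def dbasis(t):
    return np.array([i * t**(i - 1) if i > 0 else 0.0 for i in range(N)])

# Boundary conditions: z(0) = 0, z'(0) = 0, z(T) = 1, z'(T) = 0.
A = np.vstack([basis(0.0), dbasis(0.0), basis(T), dbasis(T)])
b = np.array([0.0, 0.0, 1.0, 0.0])
alpha = np.linalg.solve(A, b)   # coefficients of the flat output

z = lambda t: basis(t) @ alpha  # resulting trajectory z(t) = 3t^2 - 2t^3
print(z(0.0), z(0.5), z(T))
```

Because the dynamics are satisfied automatically by construction, the optimization (when one is present) runs over the finite-dimensional coefficient vector $\alpha$ rather than over state and input trajectories.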

Q: Is the condition given by Jadbabaie and Hauser an example of a CLF or the definition of a CLF?

I was a bit sloppy defining CLFs in lecture. The formal definition is given in the lecture notes (Section 2.2, Defn 1). Briefly, given a system

$\dot x = f(x, u),$

we say that a (smooth) function $V(x)$ is a control Lyapunov function (CLF) if

  • $V(x) > 0$ for all $x \neq 0$
  • $V(x) = 0$ if and only if $x = 0$
  • The derivative of $V$ along trajectories of the system satisfies
$\inf_u \dot V = \inf_u \frac{\partial V}{\partial x} f(x, u) < 0$
for all $x \neq 0$.

The condition for stability given in lecture is that there exists a CLF $V$ for the system that in addition satisfies the relationship

$\min_u \big( \dot V(x, u) + L(x, u) \big) \le 0$

along the trajectories of the system, where $L(x, u)$ is the incremental cost. Thus we have to have the derivative of $V$ be sufficiently negative definite in order to ensure that the terminal cost provides stability.
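A minimal numerical illustration of the CLF definition (a toy scalar example chosen here, not the lecture's): for $\dot x = x + u$, the function $V(x) = x^2/2$ is a CLF, since the choice $u = -2x$ gives $\dot V = x(x + u) = -x^2 < 0$ for $x \neq 0$. Simulating the closed loop confirms that $V$ decreases along trajectories.

```python
# Toy CLF check for the scalar system x' = x + u with V(x) = x**2 / 2.
# The feedback u = -2x gives Vdot = x*(x + u) = -x**2 < 0 for x != 0,
# so V should decrease monotonically along the closed-loop trajectory.

f = lambda x, u: x + u          # dynamics
V = lambda x: 0.5 * x**2        # candidate CLF

x, dt = 1.0, 0.01
values = [V(x)]
for _ in range(500):
    u = -2.0 * x                # control achieving Vdot < 0
    x = x + dt * f(x, u)        # Euler step
    values.append(V(x))

# Check monotone decrease of V at every step.
print(all(b < a for a, b in zip(values, values[1:])))  # True
```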

Q: Why do receding horizon trajectories need to go to zero (on slide 4)?

It is common in control problems to assume that the desired trajectory goes to zero as its desired end state. This is implicitly the case whenever you see an integral cost of the form $\int_0^T \big(x^T Q x + u^T R u\big)\,dt$ or a terminal cost $x(T)^T P_1 x(T)$, both of which are minimized when $x$ is zero. There are two ways to think about this:

  • If we wish to move to a different (equilibrium) point $x_d$, we can always change the state to $e = x - x_d$ and then the new state $e$ has zero as the desired equilibrium point.
  • If we want to track a trajectory $x_d(t)$ (not constant), then we can solve the problem for the error system given by subtracting the desired state.
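The first coordinate shift can be sketched in a few lines (the setpoint $x_d$ and gain $k$ below are illustrative choices): to drive the scalar integrator $\dot x = u$ to a nonzero setpoint, regulate the error $e = x - x_d$ to zero with the same feedback one would use to regulate the origin.

```python
# Regulating the scalar integrator x' = u to a nonzero setpoint xd by
# applying origin-stabilizing feedback to the shifted state e = x - xd.

xd, k, dt = 3.0, 1.0, 0.01   # setpoint, gain, step (illustrative)
x = 0.0
for _ in range(2000):
    e = x - xd               # error coordinate: equilibrium at e = 0
    u = -k * e               # stabilizes e = 0, i.e. x = xd
    x = x + dt * u           # Euler step of x' = u

print(round(x, 3))           # converges to the setpoint 3.0
```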

This is explained in more detail in the lecture notes on LQR control (Section 3).