CDS 110b: Kalman Filtering

From Murray Wiki

In this lecture we introduce the optimal estimation problem and describe its solution, the Kalman (Bucy) filter.

Lecture Outline

  1. State Space Computation for Stochastic Response
  2. Optimal Estimation
  3. Kalman Filter

Lecture Materials

References and Further Reading

Frequently Asked Questions

Q: How do you determine the covariance and how does it relate to random processes

The covariance of two random variables \(x\) and \(y\), with means \(\mu_x\) and \(\mu_y\), is given by

\( E\{(x - \mu_x)(y - \mu_y)\} = \int_{-\infty}^\infty \int_{-\infty}^\infty (x - \mu_x)(y - \mu_y)\, p(x, y)\, dx\, dy \)

For the case when \(x = y\), the covariance is called the variance, \(\sigma^2\).
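As a quick numerical check of these definitions, the sketch below estimates the covariance of two correlated random variables from samples, and verifies that the covariance of a variable with itself is its variance. The specific correlation structure (\(y = 0.5x + \text{noise}\)) is an assumption chosen just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated scalar random variables, approximated by samples.
# y = 0.5 x + noise, so the true covariance E{(x - mu_x)(y - mu_y)} is 0.5.
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)

# Sample estimate of the covariance integral.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# When x = y, the covariance reduces to the variance sigma^2 (here 1).
var_x = np.mean((x - x.mean()) ** 2)
```

With enough samples, `cov_xy` approaches 0.5 and `var_x` approaches 1.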

For a random process, \(x(t)\), with zero mean, we define the covariance as

\( P(t) = E\{x(t) x^T(t)\}. \)

If \(x\) is a vector of length \(n\), then the covariance matrix is an \(n \times n\) matrix with entries

\( E\{x_i(t) x_j(t)\} = \int_{-\infty}^\infty \int_{-\infty}^\infty x_i x_j p(x_i, x_j; t, t) dx_i dx_j \)

where \(p(x_i, x_j; t, t)\) is the joint probability density function of \(x_i\) and \(x_j\).

Intuitively, the covariance of a vector random process \(x(t)\) describes how elements of the process vary together. If the covariance between two elements is zero, then the two elements are uncorrelated (though not necessarily independent).
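To make the matrix definition concrete, the sketch below estimates the covariance matrix \(P = E\{x x^T\}\) of a zero-mean vector random variable by averaging over realizations. The mixing matrix \(A\) is an assumption made up for illustration; it gives \(x = A w\) with \(w\) unit-variance white noise, so the true covariance is \(A A^T\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean vector "process" x in R^2 at a fixed time t, approximated by
# N independent realizations. A is an arbitrary mixing matrix (assumed
# for illustration), so the true covariance is P = A A^T.
N = 200_000
A = np.array([[1.0, 0.0],
              [0.8, 0.6]])
w = rng.normal(size=(2, N))   # unit-variance, uncorrelated samples
x = A @ w

# P = E{x x^T}, estimated by the sample average over realizations.
P = (x @ x.T) / N
P_true = A @ A.T
```

The off-diagonal entries of `P` show how the two elements vary together; they vanish only when the elements are uncorrelated.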

Q: You asked what the estimator for the ducted fan would show (compared to eigenvalue placement). What should we be looking at and how would we be making those guesses?

This was not such a great question because you didn't have enough information to really make an informed guess. The main surprising feature of the result is that the estimator converges much more slowly than with eigenvalue placement.
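To see what estimator convergence looks like, here is a minimal discrete-time Kalman filter sketch. All of the matrices (a double-integrator model with a position measurement, and the noise covariances) are assumptions chosen for illustration; this is not the ducted fan model from the course. The gain \(L = P C^T (C P C^T + R)^{-1}\) is recomputed at each step as the covariance \(P\) evolves.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative system (assumed, not the ducted fan): discretized double
# integrator with a noisy position measurement.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # state dynamics
C = np.array([[1.0, 0.0]])      # measure position only
Q = 0.01 * np.eye(2)            # process noise covariance (assumed)
R = np.array([[0.1]])           # measurement noise covariance (assumed)

x = np.array([1.0, 0.0])        # true state
xhat = np.zeros(2)              # estimate, initialized with error
P = np.eye(2)                   # estimate error covariance

for _ in range(200):
    # Simulate the true system with process and measurement noise.
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)

    # Prediction step.
    xhat = A @ xhat
    P = A @ P @ A.T + Q

    # Correction step with the Kalman gain L = P C^T (C P C^T + R)^{-1}.
    L = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    xhat = xhat + L @ (y - C @ xhat)
    P = (np.eye(2) - L @ C) @ P

err = np.linalg.norm(x - xhat)
```

Plotting `x - xhat` over the run shows the estimation error decaying toward a noise floor set by \(Q\) and \(R\); with eigenvalue placement one would instead pick the observer poles directly, typically getting faster (but noisier) convergence.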