# CDS 110b: Random Processes


## Latest revision as of 01:49, 30 January 2006

WARNING: This page is for a previous year. See the current course homepage to find the most recent version.


This lecture presents an introduction to random processes.

## Lecture Outline

- Quick review of random variables
- Random processes
  - Definition
  - Properties
  - Example

## Lecture Materials

- Lecture Notes on Stochastic Systems
- Reading: Friedland, Chapter 10
- HW 4 - due 1 Feb

## References and Further Reading

- Hoel, Port and Stone, Introduction to Probability Theory - this is a good reference for basic definitions of random variables
- Apostol II, Chapter 14 - another reference for basic definitions in probability and random variables

## Frequently Asked Questions

**Q: Can you explain the jump from pdfs to correlations in more detail?**

The probability density function (pdf), $p(x; t)$, tells us how the value of a random process is distributed at a particular *time*:

$$P(a \leq x(t) \leq b) = \int_a^b p(x; t)\, dx.$$

You can interpret this by thinking of $x(t)$ as a separate random variable for each time $t$.
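As a quick sanity check on this formula, a short sketch in Python: assuming (hypothetically) that at some fixed time $t$ the value $x(t)$ is distributed as a standard Gaussian, a Monte Carlo estimate of $P(a \leq x(t) \leq b)$ from samples should match the integral of the pdf over $[a, b]$.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical example: at a fixed time t, suppose x(t) ~ N(0, 1).
a, b = -1.0, 1.0

# Monte Carlo estimate of P(a <= x(t) <= b) from draws of x(t).
samples = rng.standard_normal(200_000)
p_empirical = np.mean((samples >= a) & (samples <= b))

# Exact value of the integral of the Gaussian pdf over [a, b].
p_exact = 0.5 * (erf(b / sqrt(2)) - erf(a / sqrt(2)))

print(p_empirical, p_exact)  # both ≈ 0.6827
```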

The *correlation* for a random process tells us how the value of the process at one time, $t_1$, is related to its value at a different time, $t_2$. This relationship is probabilistic, so it is also described in terms of a distribution. In particular, we use the *joint probability density function*, $p(x_1, x_2; t_1, t_2)$, to characterize it:

$$P(a_1 \leq x(t_1) \leq b_1,\; a_2 \leq x(t_2) \leq b_2) = \int_{a_1}^{b_1} \int_{a_2}^{b_2} p(x_1, x_2; t_1, t_2)\, dx_1\, dx_2.$$

Given any random process, $p(x_1, x_2; t_1, t_2)$ describes (as a density) how the value of the random variable at time $t_1$ is related (or "correlated") with the value at time $t_2$. We can thus describe a random process through its joint probability density function.
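To see what the joint density captures that the marginals do not, here is a sketch under an assumed model: take $(x(t_1), x(t_2))$ to be jointly Gaussian with correlation $r = 0.8$ (a hypothetical choice, not any particular process from the notes). When the two values are dependent, the joint probability of landing in a pair of intervals differs from the product of the marginal probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: (x(t1), x(t2)) jointly Gaussian, zero mean,
# unit variances, correlation r = 0.8.
r = 0.8
n = 200_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
x1 = z1
x2 = r * z1 + np.sqrt(1 - r**2) * z2  # Corr(x1, x2) = r by construction

# Compare P(0 <= x(t1) <= 1, 0 <= x(t2) <= 1) with the product of marginals.
in1 = (x1 >= 0) & (x1 <= 1)
in2 = (x2 >= 0) & (x2 <= 1)
p_joint = np.mean(in1 & in2)
p_product = np.mean(in1) * np.mean(in2)

print(p_joint, p_product)  # joint probability clearly exceeds the product
```

If $x(t_1)$ and $x(t_2)$ were independent, the joint density would factor and the two numbers would agree; the gap between them is exactly the dependence that the correlation summarizes.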

In practice, we don't usually describe random processes in terms of their pdfs and joint pdfs. It is usually easier to describe them in terms of their statistics (mean, variance, etc.). In particular, we almost never describe the correlation in terms of joint pdfs, but instead use the *correlation function*:

$$\rho(t, \tau) = E\{x(t)\, x(\tau)\} = \int_{-\infty}^\infty \int_{-\infty}^\infty x_1 x_2\, p(x_1, x_2; t, \tau)\, dx_1\, dx_2.$$

The utility of this particular function is seen primarily through its application: if we know the correlation for one random process and we "filter" that process through a linear system, we can compute the correlation for the corresponding output process (we'll see this in the next lecture).
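The expectation $E\{x(t)\,x(\tau)\}$ can be estimated by averaging over realizations of the process. A sketch for one process where the answer is known in closed form (a standard textbook example, assumed here rather than taken from these notes): the random-phase sinusoid $x(t) = \sin(\omega t + \phi)$ with $\phi$ uniform on $[0, 2\pi)$ has $\rho(t, \tau) = \tfrac{1}{2}\cos(\omega(t - \tau))$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example: x(t) = sin(w*t + phi), phi uniform on [0, 2*pi).
# Closed form: rho(t, tau) = E{x(t) x(tau)} = 0.5 * cos(w * (t - tau)).
w = 2.0
t, tau = 0.3, 1.1
n = 500_000
phi = rng.uniform(0, 2 * np.pi, n)  # one random phase per realization

xt = np.sin(w * t + phi)
xtau = np.sin(w * tau + phi)

rho_mc = np.mean(xt * xtau)             # average over realizations
rho_exact = 0.5 * np.cos(w * (t - tau))

print(rho_mc, rho_exact)  # the two estimates agree closely
```

Note that $\rho$ here depends only on the difference $t - \tau$, which makes this particular process stationary in the wide sense.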