Almost sure convergence

Revision as of 07:37, 16 October 2007

This article is developing and not approved.

Almost sure convergence is one of the four main modes of stochastic convergence. It may be viewed as a notion of convergence for random variables that is similar to, but not the same as, the notion of pointwise convergence for real functions.




Definition

In this section, a formal definition of almost sure convergence is given for complex vector-valued random variables, but a more general definition can also be given for random variables that take values in more abstract sets. To this end, let <math>(\Omega,\mathcal{F},P)</math> be a probability space (in particular, <math>(\Omega,\mathcal{F})</math> is a measurable space). A (<math>\mathbb{C}^n</math>-valued) random variable is defined to be any measurable function <math>X:(\Omega,\mathcal{F})\rightarrow (\mathbb{C}^n,\mathcal{B}(\mathbb{C}^n))</math>, where <math>\mathcal{B}(\mathbb{C}^n)</math> is the Borel <math>\sigma</math>-algebra of <math>\mathbb{C}^n</math>. A formal definition of almost sure convergence can be stated as follows:

A sequence <math>X_1,X_2,\ldots,X_n,\ldots</math> of random variables is said to converge almost surely to a random variable <math>Y</math> if <math>\mathop{\lim}_{n \rightarrow \infty}X_n(\omega)=Y(\omega)</math> for all <math>\omega \in \Lambda</math>, where <math>\Lambda \subset \Omega</math> is some measurable set satisfying <math>P(\Lambda)=1</math>. An equivalent definition is that the sequence <math>X_1,X_2,\ldots,X_n,\ldots</math> converges almost surely to <math>Y</math> if <math>\mathop{\lim}_{n \rightarrow \infty}X_n(\omega)=Y(\omega)</math> for all <math>\omega \in \Omega \setminus \Lambda'</math>, where <math>\Lambda'</math> is some measurable set with <math>P(\Lambda')=0</math>. This convergence is often expressed as:

<math>\mathop{\lim}_{n \rightarrow \infty} X_n = Y\,\,P\text{-a.s.}</math> or <math>\mathop{\lim}_{n \rightarrow \infty} X_n = Y\,\,\text{a.s.}</math>
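As a concrete illustration (an added example, not part of the original article), take <math>\Omega = [0,1]</math> with the Lebesgue (uniform) probability measure and define <math>X_n(\omega) = \omega^n</math>. The sequence converges almost surely to the zero random variable:

```latex
% Illustrative worked example: X_n(\omega) = \omega^n on \Omega = [0,1],
% with P the Lebesgue (uniform) measure.
%
% For every \omega \in [0,1):  \lim_{n \to \infty} \omega^n = 0,
% but at \omega = 1:           \lim_{n \to \infty} 1^n = 1 \neq 0.
%
% The exceptional set is \Lambda' = \{1\}, and P(\Lambda') = 0, so
%     \lim_{n \to \infty} X_n = 0 \quad P\text{-a.s.}
```

Note that the convergence fails on a nonempty set, yet it is almost sure because that set has probability zero.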

Important cases of almost sure convergence

If we flip a fair coin <math>n</math> times and record the fraction of times it comes up heads, the result will almost surely approach 50% as <math>n \rightarrow \infty</math>.

This is an example of the strong law of large numbers.
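The coin-flip example above can be simulated numerically. The following is a minimal sketch (an added illustration, not part of the original article; the function name `running_heads_fraction` is ours) assuming a fair coin:

```python
import random

def running_heads_fraction(n_flips, seed=0):
    """Flip a fair coin n_flips times and return the fraction of heads.

    A fixed seed makes the simulation reproducible; the strong law of
    large numbers says this fraction converges to 0.5 almost surely
    as n_flips grows.
    """
    rng = random.Random(seed)
    heads = 0
    for _ in range(n_flips):
        heads += rng.random() < 0.5  # heads with probability 1/2
    return heads / n_flips

# Longer runs of flips tend to give fractions closer to 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, running_heads_fraction(n))
```

This illustrates, but of course does not prove, the almost sure convergence asserted by the strong law of large numbers: any single simulated path corresponds to one <math>\omega</math>.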


