Probability distribution

Random variables have [[probability]] distributions which represent the expected results of an experiment repeated many times. As a simple example, consider a coin toss experiment. While we don't know the result of any individual toss, we can expect the results to average out to heads half the time and tails half the time (assuming a fair coin).
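To make the averaging concrete, here is a minimal Python sketch (the function name and parameters are ours, chosen purely for illustration) that simulates repeated tosses of a fair coin and prints the observed fraction of heads:

 import random
 
 def fraction_of_heads(n_tosses, p_heads=0.5):
     """Simulate n_tosses coin flips and return the observed fraction of heads."""
     heads = sum(1 for _ in range(n_tosses) if random.random() < p_heads)
     return heads / n_tosses
 
 # The observed frequency settles near the underlying probability 0.5
 # as the number of tosses grows (the law of large numbers).
 for n in (10, 1000, 100000):
     print(n, fraction_of_heads(n))

For small numbers of tosses the observed fraction can be far from 1/2; the guarantee is only about the long-run average.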

There are two main classes of probability distributions: discrete and continuous. Discrete distributions describe variables that take on discrete values only (typically the non-negative integers), while continuous distributions describe variables that can take on arbitrary values in a continuum (typically the real numbers).

The following are among the most important discrete probability distributions:

Bernoulli - Each experiment yields 1 with probability p and 0 with probability 1-p. For example, when tossing a fair coin you can assign the value 1 to heads and 0 to tails. Over many tosses you would expect heads and tails to come up equally often, so for a fair coin p = 50% for heads and 1-p = 50% for tails.

An experiment where the outcome follows the Bernoulli distribution is called a Bernoulli trial.
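In the usual notation, writing p for the probability of the outcome 1, the Bernoulli probability mass function is

<math>P(X = 1) = p, \qquad P(X = 0) = 1 - p.</math>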

Binomial - Each experiment consists of a fixed number n of identical, independent Bernoulli trials, for instance tossing a fair coin n times, and summing the outcomes.
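Writing n for the number of trials and p for the success probability of each trial, the probability of observing exactly k successes is

<math>P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}, \qquad k = 0, 1, \ldots, n.</math>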

Uniform - Each experiment has a certain finite number of possible outcomes, each with the same probability. Throwing a fair die, for instance, has six possible outcomes, each with the same probability. The Bernoulli distribution with a fair coin is another example.
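With n equally likely outcomes, each outcome has probability

<math>P(X = k) = \frac{1}{n},</math>

so each face of a fair die has probability 1/6.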

Poisson - Suppose we have to wait for events to happen, and the expected remaining waiting time is independent of how long we have already waited. Then the number of events per unit time will be a Poisson distributed variable.
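Writing <math>\lambda</math> for the expected number of events per unit time, the probability of seeing exactly k events in a unit interval is

<math>P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots</math>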

Geometric - The number of independent Bernoulli trials needed to obtain the first success.
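Under the common convention that counts the trials up to and including the first success, and writing p for the success probability, the geometric probability mass function is

<math>P(X = k) = (1 - p)^{k - 1} p, \qquad k = 1, 2, \ldots</math>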

Negative Binomial - The number of independent Bernoulli trials needed to obtain a fixed number r of successes; the geometric distribution is the special case r = 1.


The following are several important continuous probability distributions:

Uniform continuous - All values in some finite interval [a, b] are equally likely, in the sense that subintervals of equal length carry equal probability.
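Writing [a, b] for the interval, the density is constant:

<math>f(x) = \frac{1}{b - a}, \qquad a \le x \le b.</math>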

Exponential - Given a sequence of events in which the remaining waiting time until the next event is independent of how long we have already waited, the time between consecutive events follows the exponential distribution.
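Writing <math>\lambda</math> for the event rate (the mean waiting time is then <math>1/\lambda</math>), the density is

<math>f(t) = \lambda e^{-\lambda t}, \qquad t \ge 0,</math>

and the independence from past waiting is exactly the memoryless property <math>P(T > s + t \mid T > s) = P(T > t)</math>.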

Gaussian (or normal) - The familiar bell-shaped distribution; by the central limit theorem, the sum of many independent random variables with finite variance is approximately Gaussian.
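With mean <math>\mu</math> and standard deviation <math>\sigma</math>, the Gaussian density is

<math>f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x - \mu)^2 / (2 \sigma^2)}.</math>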

Gamma - A generalization of the exponential distribution; for an integer shape parameter k it describes the waiting time until the k-th event of a Poisson process.

Rayleigh - The distribution of the length of a two-dimensional vector whose components are independent, zero-mean Gaussian variables with equal variance.

Cauchy - A heavy-tailed distribution for which the mean and variance are undefined; the ratio of two independent zero-mean Gaussian variables is Cauchy distributed.

Laplacian - A symmetric distribution that falls off exponentially on both sides of its center; also called the double exponential distribution.