Probability distribution


Random variables have probability distributions, which represent the results we expect when the underlying experiment is repeated many times. As a simple example, consider a coin toss experiment. While we cannot predict the result of any individual toss, we can expect the results to average out to heads half the time and tails half the time (assuming a fair coin).
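As a rough illustration of this averaging, here is a minimal simulation sketch in plain Python (the sample size of 100,000 and the seed are arbitrary choices made for this example):

import random

random.seed(1)  # arbitrary seed, used only to make the run reproducible

# Simulate many tosses of a fair coin: 1 represents heads, 0 tails.
tosses = [random.randint(0, 1) for _ in range(100_000)]

# The observed fraction of heads settles near the probability 0.5
# that the distribution of a fair coin assigns to heads.
print(sum(tosses) / len(tosses))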

There are two main classes of probability distributions: discrete and continuous. Discrete distributions describe variables that take on discrete values only (typically the non-negative integers), while continuous distributions describe variables that can take on arbitrary values in a continuum (typically the real numbers).
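In standard notation (the symbols below are illustrative and are not used elsewhere in this article), a discrete random variable X is described by a probability mass function p_X that assigns a probability to each possible value, while a continuous random variable is described by a probability density function f_X that must be integrated over an interval to yield a probability:

P(X = k) = p_X(k), \qquad \sum_k p_X(k) = 1 \qquad \text{(discrete case)}

P(a \le X \le b) = \int_a^b f_X(x)\,dx, \qquad \int_{-\infty}^{\infty} f_X(x)\,dx = 1 \qquad \text{(continuous case)}

In the continuous case any single value has probability zero; only intervals carry positive probability.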

In more advanced studies, one also comes across hybrid distributions.


The following are among the most important discrete probability distributions (a short simulation sketch in plain Python follows the list):

Bernoulli distribution - Each experiment results in either a 1 ("success") with probability p or a 0 ("failure") with probability 1-p. An example would be tossing a coin. If the coin is fair, your probability for "success" will be exactly 50%.

An experiment where the outcome follows the Bernoulli distribution is called a Bernoulli trial.

Binomial distribution - Each experiment consists of a fixed number n of identical Bernoulli trials, for instance tossing a coin n times and counting the number of successes.

Uniform distribution - Each experiment has a finite number of possible outcomes, each with the same probability. Throwing a fair die, for instance, has six possible outcomes, each with the same probability. The Bernoulli distribution with p=0.5 is another example.

Poisson distribution - Suppose events occur at random over time in such a way that the expected remaining waiting time for the next event is independent of how long we have already waited. Then the number of events per unit time is a Poisson-distributed variable.

Geometric distribution - The number of independent Bernoulli trials needed to obtain the first success.

Negative Binomial distribution - The number of independent Bernoulli trials needed to obtain a fixed number r of successes; the geometric distribution is the special case r = 1.
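The following sketch (plain Python, standard library only; the parameter values p = 0.3, n = 10, lambda = 4 and r = 5 are arbitrary choices, and the function names are only illustrative) draws a large number of samples from each of the discrete distributions above and compares the sample mean with the textbook mean:

import random

random.seed(42)                   # arbitrary seed for a reproducible run
p, n, lam, r = 0.3, 10, 4.0, 5    # success probability, trials, event rate, successes wanted
N = 100_000                       # number of simulated experiments

def bernoulli():
    # 1 ("success") with probability p, 0 ("failure") with probability 1 - p
    return 1 if random.random() < p else 0

def binomial():
    # number of successes in n identical Bernoulli trials
    return sum(bernoulli() for _ in range(n))

def die():
    # discrete uniform distribution on {1, ..., 6}: a fair die
    return random.randint(1, 6)

def poisson():
    # number of events in one unit of time, using the fact that the
    # waiting times between events are exponential with rate lam
    t, count = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return count
        count += 1

def geometric():
    # number of Bernoulli trials needed to obtain the first success
    trials = 1
    while bernoulli() == 0:
        trials += 1
    return trials

def negative_binomial():
    # number of Bernoulli trials needed to obtain r successes
    trials, successes = 0, 0
    while successes < r:
        trials += 1
        successes += bernoulli()
    return trials

experiments = [("Bernoulli", bernoulli, p),
               ("Binomial", binomial, n * p),
               ("Uniform (die)", die, 3.5),
               ("Poisson", poisson, lam),
               ("Geometric", geometric, 1 / p),
               ("Negative binomial", negative_binomial, r / p)]

for name, draw, mean in experiments:
    sample_mean = sum(draw() for _ in range(N)) / N
    print(f"{name:18s} sample mean {sample_mean:6.3f}  (theoretical mean {mean:.3f})")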


The following are several important continuous probability distributions (again, a short simulation sketch follows the list):

Gaussian distribution - Also known as the normal distribution. Its density is the familiar bell-shaped curve, and by the central limit theorem it arises as the approximate distribution of sums of many independent random variables.

Uniform continuous distribution - Every value in a fixed interval is equally likely, in the sense that the density is constant on the interval and zero outside it.

Exponential distribution - Given a sequence of events in which the remaining waiting time until the next event is independent of how long we have already waited, the time between consecutive events follows the exponential distribution.

Gamma distribution - A generalization of the exponential distribution; when its shape parameter is a whole number k, it describes the waiting time until the k-th event in the setting described for the exponential distribution.

Rayleigh distribution - The distribution of the length of a two-dimensional vector whose coordinates are independent, zero-mean Gaussian variables with equal variance.

Cauchy distribution - A heavy-tailed distribution, arising for instance as the ratio of two independent zero-mean Gaussian variables; its mean and variance are undefined.

Laplacian distribution - Also known as the double exponential distribution; its density falls off exponentially on both sides of a central peak.
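As with the discrete list, here is a minimal simulation sketch (plain Python, standard library only; all parameter values and function names are arbitrary choices for illustration) that draws samples from the continuous distributions above. The comment on each function records the construction used, which also serves as an informal definition:

import math
import random

random.seed(42)   # arbitrary seed for a reproducible run
N = 100_000       # arbitrary sample size

def gaussian():
    # normal distribution with mean 0 and standard deviation 1
    return random.gauss(0.0, 1.0)

def uniform():
    # continuous uniform distribution on the interval [0, 1]
    return random.uniform(0.0, 1.0)

def exponential():
    # waiting time between events that arrive at rate 2 per unit time
    return random.expovariate(2.0)

def gamma():
    # gamma distribution with shape 3 and scale 1: the waiting time
    # until the 3rd event when events arrive at rate 1 per unit time
    return random.gammavariate(3.0, 1.0)

def rayleigh():
    # length of a 2-D vector whose coordinates are independent
    # standard Gaussian variables
    return math.hypot(gaussian(), gaussian())

def cauchy():
    # ratio of two independent standard Gaussian variables; the Cauchy
    # distribution has no mean, so its sample average never settles down
    return gaussian() / gaussian()

def laplace():
    # difference of two independent exponential waiting times
    return random.expovariate(1.0) - random.expovariate(1.0)

for name, draw in [("Gaussian", gaussian), ("Uniform", uniform),
                   ("Exponential", exponential), ("Gamma", gamma),
                   ("Rayleigh", rayleigh), ("Cauchy", cauchy),
                   ("Laplacian", laplace)]:
    sample_mean = sum(draw() for _ in range(N)) / N
    print(f"{name:12s} sample mean {sample_mean:8.3f}")

The erratic behaviour of the Cauchy sample mean from run to run, in contrast with the stable averages of the other distributions, is the practical face of its missing mean.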