Entropy of a probability distribution


The entropy of a probability distribution is a number that describes the degree of uncertainty or disorder the distribution represents.


Examples

Assume we have a set of two mutually exclusive propositions (or equivalently, a random experiment with two possible outcomes), and assume that both possibilities are equally likely.

Then our advance uncertainty about the eventual outcome is rather small: we know in advance that it will be one of exactly two known alternatives.

Now assume instead that we have a set of a million alternatives, all of them equally likely, rather than two.

It seems clear that our uncertainty about the eventual outcome is now much greater.
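
Anticipating the formal definition in the next section, for n equally likely alternatives each probability is 1/n and the entropy works out to H = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{n} = \log_2 n, which gives \log_2 2 = 1 bit for two alternatives and \log_2 10^6 \approx 19.93 bits for a million.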

Formal definitions

  1. Given a discrete probability distribution function f, the entropy H of the distribution (measured in bits) is given by H = -\sum_{i : f(x_i) \ne 0} f(x_i) \log_2 f(x_i)
  2. Given a probability density function f of a continuous distribution, the entropy H of the distribution (again measured in bits) is given by H = -\int_{x : f(x) \ne 0} f(x) \log_2 f(x) \, dx

Note that some authors prefer units other than the bit to measure entropy; the formulas then use a different base for the logarithm (for instance, the natural logarithm gives the entropy in nats). Also, the symbol S is sometimes used rather than H.
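
A minimal Python sketch of the discrete formula above (the function name entropy_bits is just illustrative) reproduces the two examples from the earlier section: 1 bit for two equally likely outcomes and about 19.93 bits for a million.

  import math

  def entropy_bits(probs):
      # Shannon entropy, in bits, of a discrete distribution given as a list of probabilities.
      # Terms with probability zero are skipped, matching the condition f(x_i) != 0 above.
      return -sum(p * math.log2(p) for p in probs if p > 0)

  print(entropy_bits([0.5, 0.5]))       # 1.0  (two equally likely outcomes)

  n = 10**6
  print(entropy_bits([1.0 / n] * n))    # about 19.93  (a million equally likely outcomes)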

See also

References

External links
