Applied statistics
Applied statistics provide both a familiar source of information and a notorious source of error and misinformation. Popular errors commonly arise from misplaced confidence in intuitive interpretations, and some serious errors have arisen from misuse by mathematicians and other professionals. Deliberate misinterpretation of statistics by politicians and marketing professionals is so much a commonplace that genuine statistical findings are often treated with suspicion. The use of statistics is nevertheless unavoidable, and its misinterpretation can usually be avoided given a grasp of a few readily understood concepts.
(terms shown in italics are defined in the glossary on the related articles subpage).
Overview: the basics
Statistics are observations that are recorded in numerical form. It is essential to their successful handling to accept that statistics are not facts, and therefore incontrovertible, but observations about facts, and therefore fallible. The reliability of the information that they provide depends not only upon their successful interpretation, but also upon the accuracy with which the facts are observed and the extent to which they truly represent the subject matter of that information. An appreciation of the means by which statistics are collected is thus an essential part of the understanding of statistics, and is at least as important as a familiarity with the tools that are used in its interpretation.
The basic laws of chance from which much of statistics theory has been derived are little more than a formalisation of intuitive concepts, and the use of the resulting algorithms for the solution of many everyday statistical problems should require only a grasp of basic mathematical principles. Failures of interpretation by professional users suggest, however, that "probability blindness" is an inherent characteristic of the human brain that prevents the effective employment of intuition for that purpose.
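To illustrate how far those basic laws go, consider a classic everyday problem: the chance of throwing at least one six in four throws of a die. The sketch below (the dice problem and the Python code are illustrative additions, not part of the original article) needs nothing more than the multiplication and complement rules.

```python
# An illustrative sketch (not from the original article): the multiplication
# and complement rules applied to a classic everyday problem of chance.

# Probability of failing to throw a six in one throw
p_no_six = 5 / 6

# Multiplication rule: independent events multiply,
# so four throws with no six have probability (5/6)^4
p_no_six_in_four = p_no_six ** 4

# Complement rule: P(at least one six) = 1 - P(no six at all)
p_at_least_one_six = 1 - p_no_six_in_four

print(f"P(at least one six in four throws) = {p_at_least_one_six:.3f}")
# about 0.518 - slightly better than an even chance, a result that
# unaided intuition often fails to deliver
```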
Success in the use of more advanced statistics theory depends not so much upon mathematical ability as upon well-considered discrimination in the application of its theorems.
The collection of statistics
The methodology adopted for the collection of observations has a profound influence upon the problem of extracting useful information from the resulting statistics. That problem is at its easiest when the collecting authority can minimise disturbing influences by conducting a "controlled experiment"[1]. A range of more complex methodologies (and associated software packages), referred to as "the design of experiments"[2], is available for use when the collecting authority has various lesser degrees of control. The object of the design in each case is to facilitate the testing of a hypothesis by helping to exclude the influence of factors that the hypothesis does not take into account. At the furthest extreme from the controlled experiment, no such help can be provided through the physical elimination of extraneous influences - and, if they are to be eliminated, it must be done after they have been identified by a purely analytical technique termed the "analysis of variance"[3]. For example, the rôle of the authorities that collect economic statistics is necessarily passive, and the testing of economic hypotheses involves the use of a version of the analysis of variance termed "econometrics"[4] (sometimes confused with economic modelling, which is a purely deterministic technique).
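As a minimal sketch of what an analysis of variance involves (the data and treatment names below are hypothetical, invented purely for illustration), a one-way test can be run in a few lines with SciPy:

```python
# A minimal sketch of a one-way analysis of variance (ANOVA) with SciPy.
# The yield figures under three fertiliser treatments are hypothetical.
from scipy.stats import f_oneway

treatment_a = [21.0, 23.5, 22.1, 24.0, 22.8]
treatment_b = [25.2, 26.1, 24.8, 25.9, 26.5]
treatment_c = [22.0, 21.5, 23.2, 22.7, 21.9]

# The F statistic compares the variation between groups with the variation
# within groups; a small p-value suggests the treatment means really differ.
f_stat, p_value = f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

This is, of course, only the mechanical step; deciding which extraneous influences to test for remains a matter of judgement.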
The taking of samples[5] reduces the cost of collecting observations and increases the opportunities to generate false information. One source of error arises from the fact that every time a sample is taken there will be a different result. That source of error is readily quantified as the sample's standard error, or as the confidence interval within which the mean observation may be expected to lie[6]. That source of error cannot be eliminated, but it can be reduced to an acceptable level by increasing the size of the sample. The other source of error arises from the likelihood that the characteristics of the sample differ from those of the "population" that it is intended to represent. That source of error does not diminish with sample size and cannot be estimated by a mathematical formula. Careful attention to what is known about the composition of the "population", and the reflection of that composition in the sample, is the only available precaution. The composition of the respondents to an opinion poll, for example, is normally chosen to reflect as far as possible the composition of the intended "population" as regards sex, age, income bracket etc. The remaining difference is referred to as the sample bias, and undetected bias has sometimes been a major source of misinformation.
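The first of those two sources of error can be made concrete with a short sketch (the sample values below are hypothetical, chosen purely for illustration):

```python
# A minimal sketch of a standard error and the confidence interval derived
# from it, using only the Python standard library. Sample data hypothetical.
import math
import statistics

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)        # sample standard deviation (n - 1 divisor)
standard_error = sd / math.sqrt(n)   # falls as the sample size grows

# An approximate 95% confidence interval using the normal value 1.96
# (for so small a sample a t value would be slightly wider and more exact)
low = mean - 1.96 * standard_error
high = mean + 1.96 * standard_error
print(f"mean = {mean:.2f}, standard error = {standard_error:.3f}")
print(f"approximate 95% confidence interval: ({low:.2f}, {high:.2f})")
```

No such formula exists for the second source of error: sample bias can only be guarded against by care in the design of the sample.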
The use by statisticians of the term "population" refers, not to people in general, but to the category of things or people about which information is sought. A precise definition of the target population is an essential starting point in a statistical investigation, and also a possible source of misinformation. Difficulty can arise when, as often happens, the definition has to be arbitrary. If the intended population were the output of the country's farmers, for example, it might be necessary to draw an arbitrary dividing line between farmers and owners of smallholdings such as market gardens. Any major change over time in the relative output of farm products by the included and excluded categories might then lead to misleading conclusions. Technological change, such as the change from typewriters to word processors, has sometimes given rise to serious difficulties in the construction of the price indexes used in the correction of GDP for inflation[7]. Since there is no objective solution to those problems, it is inevitable that national statistics embody an element of judgement exercised by the professional statisticians in the statistics authorities.
National statistics also embody some further arbitrary or subjective adjustments that are intended to increase their usefulness, but constitute another possible source of error. The early provisional release of results involves the arbitrary imputation of figures in the place of late or invalid responses to enquiries, and substantial amendments to the published figures sometimes then continue for some months after their initial release. Decisions are also taken from time to time to exclude or include transitory changes such as whether the transfer abroad for repair of a multi-million dollar airliner is to be recorded as an export and its return as an import. Judgmental adjustments are also made to resolve conflicts between duplicate measures of national output. National statistics authorities employ research teams for the purpose of maintaining and improving the reliability of their output in those and other respects.
Politicians in the major democracies have seldom had any influence upon the collection and publication of national statistics, and most countries have sought to allay suspicions to the contrary by delegating those functions to public bodies that are free from possible government influence.
Statistical inference
Although statistics is sometimes thought of as a branch of mathematics, some of its findings can be successfully interpreted by verbal inference, and there are others that only require the use of a few simply-expressed rules (such as those set out in paragraph 1 of the tutorials subpage). However, there is evidence to suggest that most people confidently prefer an intuitive approach, unaware of the "probability blindness"[8][9] that is characteristic of the human brain. Educated professionals seem not to be immune from overconfidence in that respect, and there have been several examples involving the medical profession. For example, the following question was put to the staff and students of the Harvard Medical School: if a test of a disease that has a prevalence rate of 1 in 1000 has a false positive rate of 5%, what is the chance that a person who has been given a positive result actually has the disease? 45 per cent gave the intuitive answer of 95%, when the true answer is 2%[10] (see paragraph 2 of the tutorials subpage). No harm was done by those mistakes, but similar overconfidence by an eminent expert cost the English mother, Sally Clark, her liberty (see paragraph 3 of the tutorials subpage).
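The Harvard question yields readily to Bayes' theorem, as the following sketch shows (the calculation assumes, as the classic form of the puzzle does, that the test never misses a genuine case, i.e. a sensitivity of 100%; that assumption is not stated in the text above):

```python
# The Harvard Medical School question worked through with Bayes' theorem.
# Figures from the text: prevalence 1 in 1000, false positive rate 5%.
# Assumption (standard in this puzzle, not stated in the article): the test
# detects every genuine case, i.e. sensitivity = 100%.

prevalence = 1 / 1000        # P(disease)
false_positive_rate = 0.05   # P(positive | no disease)
sensitivity = 1.0            # P(positive | disease) - assumed

# Total probability of a positive result, over patients with and without
# the disease
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")
# prints 2.0% - the true answer, far from the intuitive 95%
```

The intuitive error consists in ignoring the prevalence: among 1000 people tested, roughly 50 healthy people receive false positives for every one genuine case.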
The advanced mathematical analysis that has enabled more complex statistical problems to be tackled has been the work of geniuses such as Bernoulli, Laplace and Pascal, but the skills required for the effective use of statistics are different from those required for the understanding of the mathematical derivation of their theorems. Awareness of the tools of inference that are available[11] has to be combined with an appreciation of the extent to which they can safely be applied to a particular problem - if indeed they can be so applied, bearing in mind the financial disasters that have resulted from the mistaken reliance upon statistics in situations containing deterministic risks[12]. The user who plans to employ those tools for the analysis of data must also be prepared to spend a good deal of time acquiring a grasp of the theorems of statistics theory, and mastering the intricacies of the free statistical software that is available for that purpose. Managers who supervise such work, and the users of its results, may seek to be excused from such expenditure of effort, but cannot escape responsibility for acquiring an understanding of statistical concepts that is at least sufficient for an awareness of the limitations of such analysis. But, as the statistician M J Moroney has advised, there can never be any question of making a decision solely on the basis of a statistical test: an engineer doing a statistical test must remain an engineer, an economist must remain an economist, a pharmacist a pharmacist[13].
A useful contribution of statistics theory to the interpretation of results obtained from a sample is its quantification of the intuitive concept of "significance" in a way that enables an objective answer to be given to the question of how likely it is that what might appear to be information is really only a matter of chance (although the way that question is usually put by statisticians is "whether the result is significant at the 5 per cent level"). If - and only if - it can be established by other methods that the sample used was not biased, then one of a variety of statistical tests can be used to answer that question[14]. When established, the conclusion is best reported in jargon-free English, using a phrase such as "this result could arise by chance once in twenty trials".
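As a minimal sketch of such a test (the trial counts are invented for illustration), a two-sided binomial test of 36 successes in 50 trials against a null hypothesis of an even chance can be run with SciPy:

```python
# A minimal sketch of a test of significance: a two-sided binomial test of
# 36 successes in 50 trials against a null hypothesis of an even chance.
# The counts are hypothetical, chosen purely for illustration.
from scipy.stats import binomtest

result = binomtest(k=36, n=50, p=0.5)
print(f"p-value = {result.pvalue:.4f}")

# "Significant at the 5 per cent level" means the p-value falls below 0.05:
# a result at least this extreme would arise by chance less often than
# once in twenty trials.
if result.pvalue < 0.05:
    print("significant at the 5 per cent level")
```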
Accuracy and reliability
Applications
Surveys
Quality control
Econometrics
Forecasting
Risk management
Notes and references
- ↑ In a controlled experiment, a "control group", that is in all relevant respects similar to the experimental group, receives a "placebo", while the experimental group receives the treatment that is on trial
- ↑ Valerie Easton and John McCall: The Design of Experiments and ANOVA, STEPS, 1997
- ↑ Anova Manova
- ↑ Econometrics, 2005
- ↑ Valerie Easton and John McCall: Sampling, STEPS, 1997
- ↑ Robin Levine-Wissing and David Thiel: Confidence Intervals, AP Statistics Tutorial
- ↑ See the article on Gross domestic product
- ↑ Daniel Kahneman and Amos Tversky: "Prospect Theory", Econometrica, Vol. 47, No. 2, 1979
- ↑ Massimo Piattelli-Palmarini: Inevitable Illusions: How Mistakes of Reason Rule Our Minds, Chapter 4, "Probability Illusions", Wiley, 1994
- ↑ Michael Eysenck and Mark Keane: Cognitive Psychology, page 483 [1]
- ↑ Electronic Statistics Textbook, StatSoft, Inc. (2007)
- ↑ Among the factors held to have contributed to the Crash of 2008 was the use of judgement-free statistical risk assessments in a situation containing deterministic risks, such as the bursting of a real estate bubble [2]
- ↑ M J Moroney: Facts from Figures, page 218, Penguin, 1951
- ↑ For example, the procedure of the tutorial Tests of Significance and its following chapters, in Stat Trek Statistics Tutorials [3]