Talk:Public opinion poll
- Never mind; I figured out how to do it last night. Shamira Gelbman 17:38, 17 May 2009 (UTC)
Old coverage bias section
Moving this here for now:
Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of the Literary Digest in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without. Alternatively, in some places many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful to make unsolicited calls to phones where the owner may be charged simply for taking a call), these individuals will never be included in the polling sample. If the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Several studies of mobile phone users by the Pew Research Center in the U.S. concluded that the absence of mobile users was not unduly skewing results, at least not yet. Shamira Gelbman 04:27, 15 May 2009 (UTC)
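For what it's worth, one of the simpler weighting techniques alluded to above is post-stratification: respondents in underrepresented groups are weighted up so the sample's composition matches known population shares. A minimal sketch, with entirely hypothetical group shares and responses:

```python
# Post-stratification sketch: reweight respondents so the sample's
# phone-access mix matches assumed population shares.
# All figures here are hypothetical, for illustration only.

population_share = {"landline": 0.60, "mobile_only": 0.40}  # assumed census figures
sample = [
    {"group": "landline", "supports": True},
    {"group": "landline", "supports": True},
    {"group": "landline", "supports": False},
    {"group": "mobile_only", "supports": False},
]

# Observed share of each group in the sample
counts = {}
for r in sample:
    counts[r["group"]] = counts.get(r["group"], 0) + 1
sample_share = {g: c / len(sample) for g, c in counts.items()}

# Weight = population share / sample share for the respondent's group,
# so overrepresented groups are weighted down and vice versa
for r in sample:
    r["weight"] = population_share[r["group"]] / sample_share[r["group"]]

weighted_support = sum(r["weight"] for r in sample if r["supports"]) / sum(
    r["weight"] for r in sample
)
print(round(weighted_support, 3))
```

Here the mobile-only group is half as common in the sample as in the population, so its respondents count 1.6 times each, pulling the weighted estimate below the raw 50% support.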
Moving this here for now (maybe to eventually be moved to the Straw poll article, but more likely not worth it since it's verbatim from Wikipedia): The first known example of an opinion poll was a local straw vote conducted by the newspaper The Harrisburg Pennsylvanian in 1824; it showed Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the presidency. Such straw votes, unweighted and unscientific, gradually became more popular; but they remained local, usually city-wide phenomena. Shamira Gelbman 17:39, 17 May 2009 (UTC)
Moving this here for now:
Thus, comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys. This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey. One way in which pollsters attempt to minimize this effect is to ask the same set of questions over time, in order to track changes in opinion. Another common technique is to rotate the order in which questions are asked. Many pollsters also use split sampling: two different versions of a question are prepared, with each version presented to half the respondents.
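The split-sample assignment described above amounts to a random half-half split of the respondent pool. A minimal sketch (the question wordings and respondent IDs are hypothetical):

```python
import random

# Split-sample sketch: each respondent is randomly assigned one of two
# wordings of the same question, so wording effects can be compared.
# Both question texts below are hypothetical examples.

version_a = "Do you favor increased spending on public services?"
version_b = "Do you favor increased government spending?"

def assign_versions(respondent_ids, seed=0):
    """Randomly assign half the respondents to each question version."""
    rng = random.Random(seed)  # seeded for reproducibility
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    assignment = {rid: version_a for rid in ids[:half]}
    assignment.update({rid: version_b for rid in ids[half:]})
    return assignment

assignments = assign_versions(range(10))
```

Comparing the response distributions between the two halves then isolates the effect of wording from real differences of opinion.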
The most effective controls, used by attitude researchers, are:
- asking enough questions to cover all aspects of an issue and to control for effects due to the form of the question (such as positive or negative wording), with the adequacy of the number established quantitatively using psychometric measures such as reliability coefficients, and
- analyzing the results with psychometric techniques that synthesize the answers into a few reliable scores and detect ineffective questions.
These controls are not widely used in the polling industry.
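One widely used reliability coefficient of the kind mentioned above is Cronbach's alpha, which measures how consistently a set of related questions taps the same underlying attitude. A minimal sketch over hypothetical ratings (rows are respondents, columns are related questions):

```python
# Sketch of one common reliability coefficient, Cronbach's alpha.
# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)

def cronbach_alpha(scores):
    k = len(scores[0])  # number of items (questions)

    def variance(xs):   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    totals = [sum(row) for row in scores]
    return k / (k - 1) * (1 - sum(item_vars) / variance(totals))

responses = [  # hypothetical 1-5 agreement ratings from four respondents
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
]
print(round(cronbach_alpha(responses), 2))
```

Values near 1 indicate the questions move together (respondents who agree with one tend to agree with the others), which is the quantitative check on "asking enough questions" described above.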
Shamira Gelbman 15:02, 27 June 2009 (UTC)