The Dark Side of the Polls
Polling is a time-honored cornerstone of our political system, and candidates and officeholders alike value these tools to help discern public opinion and shape strategy. However, opinion polls can tell an inaccurate story if they are not conducted properly.
A poll asks a sample of people for their thoughts on a particular issue, such as who they plan to vote for in an upcoming election. When executed properly, a poll analyzes a random selection of participants under strict rules about sample size, margin of error, and so on, with the goal of producing an accurate prediction. Such polls are crucial because politicians rely heavily on them to make strategic campaign decisions.
For evidence of the need for strict protocol in polling, look no further than The Literary Digest’s infamous prediction for the 1936 Presidential Election. The magazine ran one of the largest opinion polls of all time, sampling 2.4 million people on who they planned to vote for. The survey predicted that Republican nominee Alfred Landon would win with 57% of the vote. Thanks to its enormous sample, the poll had one of the smallest margins of error ever recorded, yet it was still badly wrong: President Franklin D. Roosevelt won reelection with about 60% of the vote. The problem was not the sample’s size but its composition; the Digest drew respondents largely from telephone directories and automobile registrations, which skewed toward wealthier voters during the Depression. Fortunately, mistakes like these have taught pollsters valuable lessons.
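To see why sample size alone could not save the Digest, consider the standard margin-of-error formula for a proportion. This is a minimal sketch, assuming a simple random sample and a 95% confidence level; the function name is ours, not from any polling source:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# The Literary Digest sampled 2.4 million people.
print(f"Digest: {margin_of_error(2_400_000):.4%}")  # about 0.06%

# A typical modern poll samples around 1,000 people.
print(f"Modern: {margin_of_error(1_000):.2%}")      # about 3.1%

# Yet the Digest put Landon at 57% while Roosevelt won with about 60%,
# an error of many points, hundreds of times the stated margin of error.
# Sampling bias, unlike random noise, does not shrink as n grows.
```

The takeaway is that the formula only measures random sampling noise; a biased selection method produces error the formula cannot see.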
Polls that make inaccurate predictions are often branded as having skewed demographics in their samples (too many young people, say, or too few low-income voters), but these complaints are usually not the real problem. While some polls do draw weak samples, the more common failure is that studies up-weight or down-weight responses toward electoral targets so aggressively that the adjustment itself distorts their findings. But what does that mean?
Let’s suppose a pollster takes a sample of 100 voters, 15 of whom are white women. If census data shows that 30% of voters are white women, the pollster might double the weight of those 15 participants in the final calculations. This allows the poll sample to better represent the actual electorate.
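The arithmetic behind that adjustment can be sketched in a few lines. All of the support figures below are hypothetical, invented only to show how the weight changes the headline number:

```python
# Toy illustration of the weighting step described above.
sample_size = 100
group_count = 15          # white women in the sample
census_share = 0.30       # white women's share of the electorate

weight = census_share / (group_count / sample_size)
print(weight)             # 2.0: each of the 15 responses counts double

# Hypothetical support for Candidate A inside and outside the group:
support_in_group = 0.60   # 60% of the 15 back Candidate A
support_elsewhere = 0.40  # 40% of the other 85 do

unweighted = (group_count * support_in_group
              + (sample_size - group_count) * support_elsewhere) / sample_size
weighted = (census_share * support_in_group
            + (1 - census_share) * support_elsewhere)
print(f"Unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 43.0%, 46.0%
```

A three-point swing from a single adjustment shows why a weight applied to the wrong target can move a poll as much as any sampling flaw.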
If pollsters want to weight their results to match the entire voting population, then ideally they alter the results to match the latest census data. They first distribute results geographically, giving more weight to responses from populous states and cities, and then adjust them to match the demographic distributions of age, race, and sex. Even so, there will always be some difference between a poll and the census. Only when that gap grows large can you start to make definitive claims that the survey is off.
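Applied across a whole demographic variable, this matching step amounts to computing one weight per group as the ratio of its census share to its sample share. A minimal sketch of that idea for age groups, with all shares hypothetical:

```python
# Post-stratification weights: (census share) / (sample share) per group.
# Both distributions below are made-up numbers for illustration.
census = {"18-29": 0.16, "30-44": 0.24, "45-64": 0.35, "65+": 0.25}
sample = {"18-29": 0.10, "30-44": 0.20, "45-64": 0.40, "65+": 0.30}

weights = {group: census[group] / sample[group] for group in census}
for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# Underrepresented 18-29 respondents are up-weighted (1.60);
# overrepresented 65+ respondents are down-weighted (0.83).
```

The same ratio logic extends to race, sex, and geography, though weighting several variables at once requires more elaborate methods than this single-variable sketch.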
The most prominent concern with weighting is adjusting for party identification: the shares of Republicans, Democrats, and independents. Party identification is strongly correlated with vote choice, but pollsters rarely have a reliable measure of each party's share of the whole population. Without one, weighting sample results is a guessing game rather than sound methodology. Some pollsters borrow figures from a previous election's exit poll, but the number of people who consider themselves members of a party changes constantly.
Another problem with weighting results is placing too much or too little weight on a single characteristic. Different characteristics matter more in different elections, and there is no consensus on which factors should be weighted most heavily.
Polls are a powerful tool, but like any tool they can fail, and past mishaps do not mean we should abandon polling altogether. Every poll contains error, whether from statistical noise or from factors that are harder to quantify, such as nonresponse bias. The goal is to reduce that error enough to produce accurate predictions. As technology advances and our understanding of probability theory matures, the ability to foretell the outcome of an election seems destined to become ever more reliable.