Why Political Polls Get Misread — and Misused
Opinion polls are among the most cited and most misunderstood features of British political journalism. A poll showing one party ahead by several points will be breathlessly reported as a sign of seismic shifts, while the methodology and margin of error are buried or omitted entirely. Learning to read polls critically is an essential skill for any informed citizen.
What a Poll Actually Measures
A political opinion poll is a snapshot in time, not a prediction. It captures the expressed views of a sample of respondents on a specific question, on a specific day, using a specific method. Any headline figure comes with caveats that are often stripped out in media reporting.
The key questions to ask about any poll are:
- Who was surveyed? Online panel polls and telephone polls can produce different results for structural reasons.
- How large was the sample? Standard UK polls typically survey 1,000–2,000 people. Larger samples reduce the margin of error, but with diminishing returns: the error shrinks with the square root of the sample size, so quadrupling the sample only halves it.
- How was the sample weighted? Pollsters weight responses to reflect the broader population — decisions about how to weight are consequential and contested.
- When was it conducted? A poll taken before a major news event may be outdated by the time it is published.
- Who commissioned it? Polls commissioned by political parties or campaign groups should be treated with extra scrutiny.
Understanding Margins of Error
Every poll has a margin of error, typically ±2–3 percentage points on each party's share for a standard UK sample. If a poll shows Party A on 42% and Party B on 39%, each figure could plausibly be around three points higher or lower, and because the error applies to both figures, the uncertainty on the gap between them is larger still (roughly double, for a lead between two parties in the same poll). Statistically, the two parties could be tied, and headlines that treat such a "lead" as definitive are misleading.
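The arithmetic behind this is straightforward. A minimal sketch, using the textbook formula for a simple random sample (real polls use weighted panels, so their true error is somewhat larger than this idealised figure):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures from the text: Party A on 42%, Party B on 39%, sample of 1,000.
n = 1000
moe_a = margin_of_error(0.42, n)
moe_b = margin_of_error(0.39, n)
print(f"Party A: 42% ± {moe_a * 100:.1f} points")  # ± 3.1 points
print(f"Party B: 39% ± {moe_b * 100:.1f} points")  # ± 3.0 points
# A's plausible range (38.9–45.1) overlaps B's (36.0–42.0),
# so a 3-point "lead" is not statistically decisive.
```

Note that the error is widest when a share is near 50% and narrows for smaller parties, which is why a fringe party's figure is slightly more precise than the leaders'.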
Voting Intention vs. Seat Projections
National voting intention polls measure the share of votes a party might receive across the whole country. Because of First-Past-the-Post, however, vote share does not translate directly to seats. A party can win a large vote share with relatively few seats (as smaller parties frequently discover) or win a large majority of seats with a modest vote share advantage. Seat projections based on polling require additional modelling and carry their own uncertainties.
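One of the simplest such models is uniform national swing: apply the national change in each party's vote share to every constituency's previous result and see who comes out on top. A minimal sketch with invented constituency figures (real projections use far more sophisticated methods, such as regional swings or MRP):

```python
# Hypothetical previous vote shares by constituency (invented data).
previous = {
    "Seat 1": {"A": 45.0, "B": 40.0, "C": 15.0},
    "Seat 2": {"A": 38.0, "B": 42.0, "C": 20.0},
    "Seat 3": {"A": 50.0, "B": 30.0, "C": 20.0},
}
# Hypothetical national change in vote share since the last election.
swing = {"A": -4.0, "B": 3.0, "C": 1.0}

projected_winners = {}
for seat, shares in previous.items():
    # Apply the same national swing uniformly to every constituency.
    adjusted = {party: share + swing[party] for party, share in shares.items()}
    projected_winners[seat] = max(adjusted, key=adjusted.get)

print(projected_winners)  # Seat 1 flips from A to B despite A's 5-point cushion
```

Even this toy version shows why vote share and seats diverge: a modest national swing flips marginal seats while safe seats absorb it, so the seat totals can move much more sharply than the national percentages.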
The Herding and House Effects Problem
Pollsters are aware of each other's findings and, consciously or not, may adjust their methodology to avoid being a significant outlier — a phenomenon known as herding. This can mean that the full range of uncertainty is understated when multiple polls cluster together. Separately, different polling firms have consistent tendencies to show particular parties slightly higher or lower — known as house effects — reflecting differences in methodology and weighting choices.
Polling Failures and What We Learned
British polling has had notable failures — including the 1992 general election, the 2015 general election, and the 2016 Brexit referendum. Each prompted industry reviews and methodology changes. The lesson is not that polls are useless, but that they should be treated as one indicator among many, ideally tracked over time through polling averages rather than any single survey.
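Tracking a polling average is simple in practice. A minimal sketch, with invented poll figures, averaging the most recent surveys rather than reacting to any single one:

```python
from datetime import date

# Hypothetical recent polls: (fieldwork end date, Party A share). Invented figures.
polls = [
    (date(2024, 5, 1), 41.0),
    (date(2024, 5, 4), 39.0),
    (date(2024, 5, 8), 43.0),
    (date(2024, 5, 11), 40.0),
    (date(2024, 5, 14), 42.0),
]

def polling_average(polls, window=5):
    """Unweighted average of the most recent `window` polls by fieldwork date."""
    recent = sorted(polls)[-window:]
    return sum(share for _, share in recent) / len(recent)

print(f"Party A polling average: {polling_average(polls):.1f}%")
# Individual polls range from 39% to 43%; the average of 41.0% smooths that noise.
```

Published averages typically go further, weighting polls by recency, sample size, and each firm's house effects, but even a plain average avoids the trap of treating one outlier poll as news.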
A Practical Framework
When you encounter a political poll, apply this checklist:
- Check the sample size and fieldwork dates.
- Look up the polling firm's house effects and track record.
- Compare with other recent polls rather than treating it in isolation.
- Remember it measures stated intention, not guaranteed behaviour on polling day.
- Be especially sceptical of polls conducted for partisan clients.
Polls, used carefully, are a valuable window into public opinion. Used carelessly, they distort political debate and can even influence the very outcomes they purport to measure.