History shows us that the polls narrow during election campaigns. The commentariat’s simplistic analysis has run thus: because the polls have shown a roughly 20-point gap for some time, that is where they will end up, whatever happens during the General Election campaign. This is most likely not true, and bit by bit some media are beginning to question this lazy logic. So where will the polls be on polling day?
Targets
The cold hard numbers tell us that Labour must be 12.5 points ahead in the polls to achieve a one-seat majority. There seems little doubt that the Tories will lose the General Election; the big issue is where Labour will end up when the polls close at 2200 hrs on 4 July. Will they have a lead in excess of that 12.5%? If not, and historical precedent shows this might be difficult, then we could be in hung Parliament territory. Of course all this is calculated assuming a ‘uniform national swing’ across all constituencies, which (a) is a slightly misleading ‘average’ that never actually happens, and (b) is highly contested right now for a whole load of reasons.
More hard facts: for Labour to win a majority they need a 12.7% swing. That would be the largest swing in British political history, equivalent to all of Blair’s 10.2% from 1997 plus half of Thatcher’s 5.5% from 1983 combined. By that measure Keir Starmer would be the political equivalent of Jesus. And maybe he is. That biblical ‘Revelation’ will come in the early hours of 5 July.
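For anyone who wants to see the arithmetic, here is a minimal sketch of the conventional two-party (‘Butler’) swing calculation, which is simply half the change in the gap between the two parties. The vote shares in it are round, purely illustrative figures, not official notional results.

```python
# Conventional two-party ("Butler") swing: half the change in the gap between the parties.
# All vote shares below are round, illustrative figures, not official notional results.

def swing_con_to_lab(lab_old, con_old, lab_new, con_new):
    """Swing from Con to Lab = (Lab's gain + Con's loss) / 2."""
    return ((lab_new - lab_old) + (con_old - con_new)) / 2

# Illustrative 2019-style starting point: Tories roughly 12 points ahead.
lab_2019, con_2019 = 32.0, 44.0
# Hypothetical polling-day shares giving Labour the ~12.5-point lead cited above.
lab_day, con_day = 44.5, 32.0

print(swing_con_to_lab(lab_2019, con_2019, lab_day, con_day))  # -> 12.25 (% swing, Con to Lab)
```

On those made-up numbers the swing works out at a little over 12%, which shows why even a hefty poll lead only just clears the bar set in 1997.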
Why do polls move?
First, most voters don’t pay deep attention to politics until they are standing in a polling booth in a village hall, actually having to put an X in a box on a voting slip. So views expressed when it doesn’t matter too much come into sharper focus by the end of an election campaign. Those who want to get rid of the sitting government tend to be consistently clear in their view and very happy to tell pollsters so, whereas those who may well be inclined to vote for the sitting government tend to be more circumspect. Hence the polls showing Tory voters on strike for the last 18 months or so.
Second, political parties publish manifestos telling us what they plan to do. Some things we voters like and some we don’t, so some of us change our views during elections.
Third, there are the gaffes that can directly affect campaigns – the 1992 Sheffield rally, Gordon Brown’s 2010 ‘bigot-gate’ moment, 2015’s ‘Ed Stone’, Theresa May’s 2017 implosion, etc. These key events shape the narrative, which in turn shapes voter opinion.
Magic numbers
Then there are the vagaries of polling itself. Be under no illusion, polling is a dark art not an exact science. In essence it works like this:
Our mythical polling company forms a ‘voter panel’ by asking a whole load of random strangers, who tend to be very digitally active, to take part in online polls for a small fee each time they participate. (Long gone are the days of random-digit-dialling telephone polls, which were the gold standard for many years but, because of their cost, now rarely happen.) So we come to ‘polling problem No 1’: our panel is formed from a self-selecting sample of the bizarre people who actually want to do this. (Question: do any of our dear readers actually know anyone who has agreed to be on a polling company’s panel? No, neither do the REC team!)
Then each time our mythical polling company wants to run a poll (YouGov does this daily), they send their questions out to the panel, aiming to achieve a balanced sample of around 1000 people. (1000 seems to be the magic number that means people won’t criticise it; in reality, for the UK’s population size, a sample of 500 will actually suffice.) To achieve this balanced 1000 sample, a few thousand people need to respond. And that’s ‘polling problem No 2’: a smaller self-selecting sample of a larger self-selecting sample actually answers the polling questions. These people are now the super-weirdos of the original self-selecting sample.
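As an aside on that magic number: the textbook margin-of-error sums below (a sketch that assumes a simple random sample, which an online panel emphatically is not) show why 500 and 1000 respondents are not wildly different in theoretical precision.

```python
import math

# Textbook margin of error for a proportion, assuming a simple random sample.
# Online panels are not simple random samples, which is rather the point of this piece.

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(n, round(100 * margin_of_error(n), 1))
# 500  -> ~4.4 percentage points either way
# 1000 -> ~3.1
# 2000 -> ~2.2
```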
But that balanced 1000 sample needs to be… well… ‘balanced’… to reflect the make-up of the UK population: male/female, different age cohorts, geographic spread, religious belief, past voting history etc. And that’s where the mathematical jiggery-pokery begins. To build the balanced model of the 1000, our mythical pollsters start ‘weighting’ their sample: ‘we have a 60/40 imbalance towards men, so let’s dial down the men’s numbers and dial up the women’s in our sample. And we have way too many under-25s and way too few over-65s, so let’s fiddle the numbers a bit more to correct for that.’ And so on for religion, ethnicity, geographic spread etc, until we have a 1000-person sample that accurately reflects the UK population. Then what about all those people who say ‘don’t know’, many of whom will go on to vote? How pollsters adjust for the ‘don’t knows’ explains much of the variation we’ve seen in the polls.
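To make the ‘dial down the men, dial up the women’ idea concrete, here is a toy sketch of weighting on a single variable. Real pollsters weight on many variables at once, usually by iterative ‘raking’; every figure below is invented for illustration.

```python
# Toy demographic weighting on a single variable (gender), using the 60/40 example above.
# Every number here is invented for illustration; real polls weight on many variables at once.

sample = {"men": 600, "women": 400}        # who actually answered the poll
target = {"men": 0.49, "women": 0.51}      # assumed population shares

n = sum(sample.values())
weights = {g: target[g] / (sample[g] / n) for g in sample}
# men end up dialled down (weight ~0.82), women dialled up (weight ~1.27)

raw_lab_share = {"men": 0.40, "women": 0.48}   # made-up raw voting intention by group
weighted_lab = sum(sample[g] * weights[g] * raw_lab_share[g] for g in sample) / n

print(round(100 * weighted_lab, 1))  # ~44.1% weighted, versus ~43.2% unweighted
```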
As with any model, the assumptions inevitably skew the results, and so it is with polling samples. Which is why right now we actually have different pollsters telling us that Labour are anywhere from 15 to 30 points ahead of the Tories, with a whole load of other pollsters in between. They can’t all be right!
And that 15-point range is simply enormous. So commentators tend to use the ‘poll of polls’ (i.e. the average).
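For what it’s worth, the ‘poll of polls’ that gets quoted is usually nothing fancier than the simple average sketched below; the pollster names and leads are invented, spanning the 15-30 point range just mentioned.

```python
# A minimal 'poll of polls': the simple mean of each pollster's latest published Labour lead.
# Pollster names and leads are invented, spanning the 15-30 point range discussed above.

latest_leads = {"Pollster A": 15, "Pollster B": 19, "Pollster C": 22,
                "Pollster D": 26, "Pollster E": 30}

poll_of_polls = sum(latest_leads.values()) / len(latest_leads)
print(poll_of_polls)  # -> 22.4, the kind of mid-range figure commentators quote
```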
And that’s ‘polling problem No 3’ right there: the algorithms are never right and continually fiddled with. Which leads us straight to ‘polling problem No 4’: we end up with an average of a manipulated algorithm of a self-selecting sample of a self-selecting sample.
So once again, polling is a dark art not an exact science.
Herding
But then we come to ‘polling problem No 5’: herding. No pollster wants to be the outlier on election day and thus potentially stratospherically wrong in the British Polling Council’s review after the election. Being the least accurate pollster at the most recent election is not good for business. So they all constantly fiddle even more with their samples, particularly during election campaigns: ‘pollster A is a bit different to us, what have they done to their algorithm that gives them that result? We’d better adjust ours quick. But we’re getting heat that we are over-sampling stay-at-home mums in the south west, so we’d better adjust our sample to take account of that.’ And on and on it goes. Thus what actually happens during every election is that all the pollsters progressively ‘herd’ towards the average as the campaign rumbles on. Which means on judgement day after the election they can at best claim ‘we were better than many competitors’, or at worst say ‘we were as accurate (or perhaps inaccurate!) as everyone else’.
Polls versus previous elections
And here’s the really tricky ‘funky maths versus reality’ point: the polls have been consistently telling us for 18 months that Labour are 20 points ahead. But in the last two sets of real elections – the local elections of May 2023 and May 2024, where actual voters put their crosses in boxes on voting slips in village halls across the UK – Labour was only 7-9 points ahead. Oops!
And bear in mind that those most motivated to turn out in by-elections and local elections, where an anti-government protest vote is more prevalent and even quite cool, tend to deliver results that inflate the opposition parties’ votes and understate those of the sitting government of the day, hence the big by-election and local election swings that never get replicated in General Elections. Double oops!
So truthfully, with a very significant Parliamentary boundary review having taken place since the last General Election, meaning this election is being fought in many constituencies on totally new, untested boundaries, who knows how things will pan out on 4 July!