In this special episode of the Governance Podcast, we’re partnering with Andrew Blick of the KCL Centre for British Politics and Government to discuss all things public opinion with Roger Mortimore, Professor at King’s College London and Director of Political Analysis at Ipsos MORI. As a leading social scientist behind the UK general election exit poll, Professor Mortimore takes us through the origins, mechanics and surprising realities of predicting election outcomes.
Subscribe on iTunes and Spotify
Roger Mortimore is Ipsos MORI’s Director of Political Analysis, and has worked in the MORI/Ipsos MORI political team since 1993. Since 2012 he has also been Professor of Public Opinion and Political Analysis in the department of political economy at King’s College London.
Roger researches political and social attitudes, especially but not exclusively related to voting and elections; and he is responsible for the Political Monitor Aggregate, a data set consisting of more than half a million interviews stretching back to 1996. He is also the best point of contact for exploring any of Ipsos MORI’s historical archive of survey data, covering records of almost every survey which MORI and Ipsos MORI have conducted, on a wide range of subjects, since MORI was founded in 1969.
01:20: What is an exit poll?
5:51: You said that more money is spent on the one exit poll than is spent on polling through the whole campaign, which shows that the people paying for it obviously place a high premium on this but who are the customers? Who is paying for this?
7:28: In the end there is only one exit poll, or one publicly available exit poll that we know for certain exists.
8:12: In the context of the UK and what we call the ‘first past the post’ electoral system, what particular challenges does that system present as opposed to a proportional system?
10:20: What is success in the context of an exit poll?
14:12: I also suspect, for instance, that in 1997, whether you were 10 or 20 seats out, when Labour were going to win a huge majority and that was pretty widely expected, doesn’t really matter that much. It’s in an era where, for the time being, results have been very tight and winning a workable majority is much more challenging. Suddenly you’re expected to produce this pinpoint accuracy.
15:46: If you have unlimited time, money, etc, what might be done differently?
19:53: General elections are obviously to a large extent about parties, so I want to ask about how this figures into what you’re doing. If there are one or more parties that have not contested a general election before and they are now running a significant number of candidates, how do you deal with that?
28:47: So you must get to learn a lot about the geography and profile of the United Kingdom for this job.
30:25: There are historic examples of electoral pacts between parties. The most famous one is probably the 1918 election, where Lloyd George and the Liberals who followed him into his government, splitting from the Asquith Liberals, had an arrangement with the Conservatives that in predetermined seats they would not run candidates against each other. Were this to come up again in a future general election, how might an exit poll try and model that?
32:54: Again a similar question going back to the electoral system, we have a phenomenon of tactical voting… how do you account for it?
36:06: It’d be interesting to talk about how you came to be in this post. What was your path to who you are now?
37:40: When did exit polling start?
42:21: What actually happens on the ground on election day?
47:54: A word you mentioned a lot is ‘computer.’ I suppose in 1970 I suspect there was a computer of some kind involved, but even in the time you’ve been doing it there must have been some significant changes in the technology. Has it made it easier or has it just increased people’s expectations?
49:21: Can you recommend a good book on exit polling for our listeners?
Read the Full Transcript
Andrew Blick: Welcome to the Governance Podcast at the Centre for the Study of Governance and Society at King’s College London. My name is Andrew Blick, and I’m a reader in politics and contemporary history at the Department of Political Economy. I’m also director of the Centre for British Politics and Government. We’re very pleased to welcome my colleague, Professor Roger Mortimore, to the podcast today. Roger is Professor of Public Opinion and Political Analysis here at King’s, and he’s also Director of Political Analysis at Ipsos MORI, the well known research company. Thanks so much for joining us today, Roger. The subject of today’s podcast is exit polling, and I understand that in the past you have had a leading role in conducting it?
Roger Mortimore: That’s right. I’ve been one of the many members of the team that produces the exit poll at every general election for the broadcasters, which is shown at 10 o’clock at night on election day.
Andrew Blick: Right, so the one that we see go out at the start of the election night coverage.
Roger Mortimore: That’s exactly it. Yeah.
Andrew Blick: Like on BBC One, ITV, Sky News, …
Roger Mortimore: Yes, all at the same time, all using the same poll.
Andrew Blick: Okay, good. So to start off with a simple question, what is an exit poll?
Roger Mortimore: It’s a term that you sometimes hear used wrongly, but correctly and strictly, an exit poll is a poll that talks to voters immediately after they vote, as they’re exiting the polling stations, and asks them how they voted. And that basically is the only real kind of exit poll. You do see other kinds of polls that are done on the day, asking people over the telephone how they voted, and it’s a similar thing, but that doesn’t count as an exit poll.
Andrew Blick: So, you’ve partly answered this, but what is the difference between an exit poll and a more regular opinion poll of the sort we see in the press on a weekly basis?
Roger Mortimore: The most important difference is that the regular opinion polls are done before people vote: how do you expect to vote, how will you vote? And of course, some people will change their minds after they’ve been asked that. An exit poll is after the vote; we’re actually asking them not their opinion, but the actual fact: what have you just done?
Andrew Blick: And beyond that, is there a technical difference or methodological difference?
Roger Mortimore: There are lots of methodological differences. The most important one, I guess, is that simply for practical reasons, opinion polls before elections are usually arranged to be efficient. It’s not quite cutting corners, but we do things a little bit more cheaply than we otherwise might, or you couldn’t do that many. The one exit poll that we do for the broadcasters, probably more money is spent on that than on all the other opinion polls in the election put together. It’s a huge, expensive exercise. And within that budget, we can use the most complicated and expensive methodologies. We use random sampling for the key part of the selection of voters that we talk to, and we’ll talk about that in a moment. And because we’re talking to people who have just voted, we know that they voted, and that’s the biggest problem that regular opinion polls have before the election. When we’re asking people how they’re going to vote, we’ve also got to decide whether they’re going to vote or not, because obviously a lot of the people we talk to won’t vote in the end. And that is probably the biggest problem that pre-election polls have in making their results match the outcome: that they’re including people who don’t vote.
Andrew Blick: So there are some significant qualitative differences, some of the same methodologies, but there are some questions…
Roger Mortimore: They’re the same methodology in the sense that we’re asking various questions, writing down the answers and adding them up. Beyond that, they are very different. For a regular opinion poll, you’re phoning somebody up, or you’re getting somebody to fill in a form on the internet. And we’ve got to select who we talk to beforehand; we go through quite elaborate procedures to try to pick a representative sample of the British public.
For the exit poll, we’re not doing that. We are very careful in deciding where we go to do the poll. And obviously the limit to that is that we can only afford to put so many interviewing teams out in the field on election day. We go to as many as we can manage. And then we are systematically interviewing all the voters, or a random sample of the voters, but enough of them to be pretty sure that we’ve got a very accurate picture of how people voted at those polling stations. So the gap between the measurement and the outcome, the prediction on the overall measure of voting intention, is filled in a completely different way.
Andrew Blick: Okay, and you’ve already said that more money is spent on the one exit poll than is spent on polling through the whole campaign, which shows that the people paying for it obviously place a very high premium on this, but who are the customers?
Roger Mortimore: It’s paid for by the broadcasters: the BBC, ITV and Sky News. They jointly commission it, and clearly it’s a big part of their election night programmes. They commission it from a team of academics who design the poll and do the analysis, working with polling companies who do the practical work of putting interviewers on the ground and conducting the interviews. So the polling company I work for, Ipsos MORI, has been doing this since 1997. In recent elections we have generally worked with another company called GfK, which used to be called NOP. And then there is the academic team: John Curtice, who is employed by the BBC, and Colin Rallings and Michael Thrasher, who work for ITV and Sky News. Between them and with their colleagues, they design exactly how we’re going to do the poll and where we’re going to do the interviews. And then when we get the data back on the day, they analyse it and work out what it means in terms of actually predicting the result of the election.
Andrew Blick: So lots of different outlets, and in the end there is only one exit poll, or only one publicly available exit poll that we know for certain exists.
Roger Mortimore: That’s right. And it is such a huge exercise that it’s very unlikely that anybody else is going to be doing their own extra private exit polls of the same sort. There used to be more than one: the BBC and ITV used to do separate ones until 2001. But they came to the conclusion that it was better to combine their resources and do one bigger and better poll than two separate ones.
Andrew Blick: Okay. Now, exit polls, I suspect, are not unique to the UK, but in the context of the UK and what we call the first-past-the-post electoral system, what particular challenges does that system present as opposed to a more proportional system?
Roger Mortimore: If we were exit polling in a system with proportional representation, essentially we’d be measuring something different. All we’d need to measure would be how many votes each side got. That’s not the aim here. The complications of first-past-the-post are really the whole essence of what we’re trying to do, because the broadcasters want to know how many seats each party is going to win. So we need to measure votes, but we also need to work out how that is going to translate into how many seats each party is going to win, and the whole design of the exit poll is built around making that translation, giving us the information we need to jump from one to the other.
Andrew Blick: And I’ve noticed that the presentation of polling between elections is all about the percentages, and they don’t say that much about seats. Whereas on the night of the election, all you really hear about is the seats, and there’s not much discussion of the percentages, because that could be an unwelcome distraction from the matter of the seats.
Roger Mortimore: That’s right. Yeah. I mean, it would be perfectly possible to design the exit poll so that it was predicting vote shares as well as seats; that used to be done, but it then becomes a distraction, it’s not what they want to talk about. They want to talk about: has the biggest party won a majority or not? Which is the biggest party? And if you get a position where seats and votes don’t match up very well, you just get confusion. Now, the regular opinion polls don’t have that kind of information. Within what you can afford to do with an ordinary opinion poll for a newspaper, you can measure the votes relatively easily. But you need huge amounts of extra information to turn that accurately into seats, which is exactly why you need this much bigger, more complicated exercise for the exit poll.
Andrew Blick: What is success in the context of an exit poll, in your view?
Roger Mortimore: Well, it’s a very difficult question, because what would be regarded as success in terms of our work as political scientists or as market researchers is quite simply to make a measurement that is within the margins of error of the poll. But that’s often not enough for journalists and critics of polls. They want to criticise it even when we have done the impossible. Now, in fact, the exit poll in recent years has been extraordinarily accurate, extraordinarily successful, really more than we could reasonably expect. We’ve been lucky.
In three of the last four general elections, we’ve got the number of seats for the biggest party within four seats. Now, the margin of error is much, much bigger than that. I mean, you can’t calculate it exactly, but it’s in the region of plus or minus 16 seats. So when you’ve done your poll, all you can confidently state, all you would be allowed to say if this was an academic paper we were writing, is that the result will be within 16 seats of this, and 16 seats is a lot. It’s really too big a margin to fit the level of precision people want in order to understand what’s happened in the election. It’s the difference between a party winning a relatively comfortable majority in parliament and being well short and having to look for coalition partners. And that’s in fact exactly what happened in the one recent poll where we were a little bit less accurate than that, in 2015. The poll said the Conservatives would get 316 seats. In fact, they got 331. Now, that’s not wrong; that is within the margin of error. But as far as critics of polls are concerned, it looks wrong, because 316, what the poll said, is a hung parliament, and 331 is a majority. Polls cannot be supernaturally accurate; they are polls, surveys. The statistics dictate that there is a margin of error just through sampling error. But people would like it to be more accurate than that. So we can come up with a poll that we are satisfied with, which is as good as it could reasonably have been, and that’s not necessarily going to satisfy everybody.
Now, clearly, we’d like to satisfy everybody. If we are lucky, and we carry on being lucky and getting ridiculously close, getting the exact number of seats as we did in 2010, and again in 2005, that’s great and clearly a success. But it’s difficult when the expectations are unrealistic, and because this is a very complex subject that people don’t understand, it is often difficult to get across to them why it isn’t realistic to expect that an exit poll can always predict within five seats.
Andrew Blick: And I also suspect that in, for instance, 1997, whether you were 10 or 20 seats out, when Labour were going to win a huge majority and that was pretty widely expected, doesn’t really matter that much. It’s in an era where, for the time being, and I’m not predicting what’s going to happen in the future, results have been very tight and winning a workable majority is much more challenging, that suddenly you’re expected to produce this pinpoint accuracy. And I can understand why people want that. But then there are these scientific criteria we’ve talked about, which are difficult to get across.
Roger Mortimore: That’s absolutely right. And the same applies to the conventional polls as well as to the exit polls. When the election isn’t close, it doesn’t matter if you’re a little bit out, and nobody notices. And people say, hey, the polls are great, and maybe develop an unreasonable level of expectation and trust in what the polls can do, because they’re simply not noticing how inaccurate the polls are. And yes, as it happens, the exit polls in 1997 were a little bit further out. Of the two polls, one overestimated Tony Blair’s majority and one underestimated it. But it didn’t matter, because it would have been impossible not to notice that Tony Blair was going to get a majority. With politics at the moment, with results likely to be close, it matters far more if the result is a little bit different. And that’s not within our control.
Andrew Blick: What are, and you’ve touched on some of this, the major challenges in conducting a poll? And if you had unlimited resources in time, money, etc., what might be done differently?
Roger Mortimore: Well, with unlimited resources, we’d make the poll much bigger. The more people you interview, and the more places you go to do interviews, the better your information is going to be. And this does happen in some countries. In one country, which I’d better not mention, where we did exit polls relatively recently, exit polls were a new thing. It was massively important to make absolutely sure that the exit poll worked perfectly and created confidence in the whole polling industry, and so on. And they sent out interviewing teams to tens of thousands of polling stations. Now, we can’t afford to do that. With our budget, which is big, we can poll between 100 and 150 polling stations across the country.
And that’s the biggest challenge: that number of polling stations will give us a total sample size of somewhere between 20,000 and 30,000, which sounds big. But it’s not really, because it’s not the total number that matters. A far bigger determinant of whether they give us a good prediction or not is whether we’re at the right polling stations. There’s a massive difference between polling stations: there are lots of polling stations where almost everybody votes Conservative, a lot where almost everybody votes Labour, and a lot which are split 50/50. And if we don’t get the right mix of those, then the poll isn’t going to be right. And the big challenge we have in this country, which applies nowhere else in the world, is that we can’t check beforehand and make sure that our sample is representative, because voting numbers are not published at polling station level. It is part of a law that goes back to the 1880s, an act against corruption and bribery.
Andrew Blick: We now have pre-budget statements instead of corruption and bribery with elections.
Roger Mortimore: But with those pre-budget statements, even if the candidate knows which village he is expecting to vote for him as a result, he can’t check up afterwards that they have done so. So what that means is we’ve got to try and pick a relatively small number of polling stations without any direct knowledge of how they voted last time. And if you’re just doing that at random, the margin of error there is huge: essentially it’s a sample of 140, not a sample of 25,000. And as you’ll know, conventional opinion poll samples are about 1,000, which gives you a margin of error of plus or minus three points. Even that’s too big, for plus or minus three points on a party’s share is the difference between Labour being the biggest party and the Conservatives getting a majority.
The margin of error in the exit poll, if you just take polling stations at random and you only take 140 of them, will be bigger than that, which obviously is totally useless. So the challenge is to find some way of getting around that, of being sure that the information we’re getting is useful and representative and is telling us how the voters are voting, without having to go to tens of thousands of different polling stations, which we can’t afford to do.
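[Editor’s note: the sampling arithmetic behind the figures Roger quotes can be sketched roughly as follows. This is a textbook simple-random-sample approximation, not the exit poll’s actual clustered methodology; the point is that with only 140 polling stations, the effective sample is the stations, not the tens of thousands of voters.]

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    The worst case is p = 0.5, which is the convention behind the familiar
    'plus or minus three points' figure for a poll of about 1,000 people.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A conventional opinion poll of ~1,000 voters:
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 percentage points

# Treating the 140 polling stations themselves as the effective sample
# (with clustered data, the station count dominates the error):
print(round(margin_of_error(140) * 100, 1))   # ~8.3 percentage points
```

Quadrupling the sample only halves the margin, which is why simply interviewing more voters at the same 140 stations would not help much.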
Andrew Blick: General elections are obviously, to a large extent, about parties, and so I’ve got some questions I’d like to ask about how this figures into what you’re doing. If there are one or more parties, for instance, that have not contested a general election before, and they are now running a significant number of candidates, how do you deal with that? What challenges might that pose?
Roger Mortimore: It does pose a challenge, but it’s a relatively minor one. The first and foremost task is to get an accurate measurement of how the national vote is going to go. And that’s true whatever the parties are; if you’ve got a new party, as long as you know how much their vote has gone up, then you’ve got the information you need. But let me explain how we deal with that challenge of not being able to poll at very many polling stations, because that’s the essence of it.
Now, the first key thing that we rely on is that although votes vary a lot across the country (there are lots of Labour areas, lots of Conservative areas and lots of mixed areas and so on), the change from election to election varies far less. That’s what we call national uniform swing. And so the poll is based around measuring change. If we can accurately measure how much the vote has changed, it doesn’t matter so much whether we’re in very Conservative areas or very Labour areas, because the change is going to be similar.
How do we measure the change, given that we don’t know what happened last time? Well, this is the clever bit, and this is also why there can only be one exit poll, because we’re using something that a new exit poll can’t have: our information from last time. As far as possible, the 140 polling stations where we go to do our poll are going to be the same as the 140 polling stations where we polled at the last election. So although we don’t know what the official results in those polling districts were, we know what our poll said in those polling districts. And so in each of those 140 different polling districts, we’ve got a direct measure of change from the last election to this election. Add them all together and you’ve got a measurement of what’s going on across the country.
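[Editor’s note: the change measurement described here can be sketched in a few lines. The station names and vote shares below are invented purely for illustration; the real poll works from thousands of interviews per station.]

```python
def station_swings(prev, curr):
    """Per-station change in vote share (percentage points) between two
    exit polls taken at the same polling stations at successive elections."""
    swings = {}
    for station, prev_shares in prev.items():
        curr_shares = curr[station]
        swings[station] = {
            party: curr_shares[party] - prev_shares.get(party, 0.0)
            for party in curr_shares
        }
    return swings

# Hypothetical shares at two matched stations, last election vs this one:
prev = {"A": {"Con": 45.0, "Lab": 40.0}, "B": {"Con": 30.0, "Lab": 55.0}}
curr = {"A": {"Con": 42.0, "Lab": 44.0}, "B": {"Con": 28.0, "Lab": 58.0}}

swings = station_swings(prev, curr)
# Averaging the per-station swings gives an estimate of national change:
avg_con = sum(s["Con"] for s in swings.values()) / len(swings)
print(avg_con)  # -2.5
```

Because both measurements come from the same polling districts, the official results for those districts are never needed; only the change matters.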
Then comes the second bit: you look for patterns in that. What sorts of constituencies are behaving differently? Is Wales behaving differently from the rest of the country? Are constituencies which have voted Labour in the past but are pro-Brexit voting differently? Are constituencies with lots of students voting differently? You run the numbers through the computer and you come up with basically a list of the key differences which the data is telling you are big enough to matter, statistically significant differences. And you can then build up basically an equation, what we call a model, to tell you how every constituency is going to vote. So take your constituency, Wellingborough. We know what the result in Wellingborough was last time. We know what we think the national change is: Conservatives up so many percent, Labour down so many percent. And we know which of the other differences in our list apply to Wellingborough: okay, so it’s not in Wales, maybe it’s slightly more pro-Brexit than average, and so on. We add all of those in, and out of that formula comes a prediction: here’s the predicted result in Wellingborough.
From that we can then say, allowing for the margin of error and so on, that the probability of the Conservatives winning in Wellingborough is X percent, and of Labour winning is Y percent. We add up all of those probabilities across the country, and that gives us our prediction.
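[Editor’s note: the final step, turning per-constituency win probabilities into a seat forecast, can be sketched like this. The three constituencies and their probabilities are hypothetical; in the real poll the probabilities come from the fitted model and its error margins.]

```python
def expected_seats(constituency_probs):
    """Sum each party's win probability across constituencies to get its
    expected seat total, as in the exit poll's final aggregation step."""
    totals = {}
    for probs in constituency_probs:
        for party, p in probs.items():
            totals[party] = totals.get(party, 0.0) + p
    return totals

# Three hypothetical constituencies with win probabilities per party:
probs = [
    {"Con": 0.9, "Lab": 0.1},
    {"Con": 0.4, "Lab": 0.6},
    {"Con": 0.2, "Lab": 0.8},
]
print({p: round(s, 2) for p, s in expected_seats(probs).items()})
# {'Con': 1.5, 'Lab': 1.5}
```

Summing probabilities rather than simply calling each seat for its favourite is what lets a batch of marginal seats contribute fractionally to each party’s total.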
So we’ve built in all the different factors that we can detect in the data on the day. And that means that, in theory, most of the time, so long as we know what to look for, we have all we need if the measurement is right. Now, we’ve still got to do the difficult but straightforward bit: getting the interviews on the ground, finding out how our polling station in Wellingborough is voting. But that’s a relatively straightforward part of the job.
But if there’s something obvious that we haven’t thought of and haven’t built into the model, then we’ve got a problem. So if it turns out that constituencies with lots of farmers are behaving completely differently from everywhere else, and we haven’t thought of that and haven’t put it into the input that’s gone into the computer, then maybe we’re going to miss it, and maybe we’re going to be that far out on those constituencies. And if it’s something we have thought of, then we’ve still got to make sure that we’ve got enough information to deal with it. So for example, in 2015, it was obvious before the election that UKIP was going to be a big player for the first time. We couldn’t have measured how well UKIP was going to do if we weren’t polling in enough constituencies where UKIP was going to do well, because we could guess they weren’t going to do equally well across the whole country; they were going to be strong in some places and weak in others. And we needed enough constituencies where we expected them to be strong, and enough where we expected them to be weak, to make sure that we could build all that information into the prediction.
So the challenge when you get changes in the circumstances, new parties coming in, new issues coming up and so on, is that maybe we need to change where we’re doing the poll, in this case to bring in more… Another big change has happened over the last few years: suddenly, Scotland has become far more significant in the result. Up to 2010, we knew that Labour was going to win almost every seat in Scotland, so you didn’t need much polling to find out what was going on there. Now, suddenly, Scotland is a lot more complicated, and it is making a much bigger contribution to the overall result. So we need to make sure we do enough interviews in Scotland to get a reliable measure of what’s happened there.
So that means that one of the things we have to do, and one of the things that complicates it, is that we can’t just go back to the same 140 or whatever polling stations every time; we do have to fine-tune that selection a little. You also get the complication that every now and then polling district boundaries change, so if you went back to the same polling station, you simply wouldn’t be polling the same people. Again, at that point, you’ve got to tear it up and go somewhere new. And it happens that that part of the job is my job: when we’ve got to pick new polling stations, somebody’s got to look through all the available data and pick the best ones. So given that we’ve decided we want to poll in a constituency for the first time, which is the most representative polling district?
Andrew Blick: Do you ever go there physically and have a look?
Roger Mortimore: I don’t usually go. We try to do it first of all from local election results. Although we don’t have any local breakdown of general election results, we can see how the wards voted in local elections. So we can see which is a reasonably balanced ward, as opposed to being strongly Labour, strongly Conservative or strongly SNP or whatever. And then, within that ward, we can look at the census data for the polling districts and try to find the most typical polling district.
Andrew Blick: So you must get to learn a lot about the geography and profile of the United Kingdom for this job.
Roger Mortimore: Exactly. You have to have quite a good feel for what sort of place, as described in the statistics, is a good place to do the exit poll.
Andrew Blick: And do you do Northern Ireland, for instance? You’d have to look at…
Roger Mortimore: Oddly enough, we don’t do Northern Ireland. Northern Ireland, of course, has a completely different party system. Labour, the Conservatives and the Liberal Democrats don’t win seats in Northern Ireland, so doing the exit poll there wouldn’t contribute at all to knowing what’s happening to Labour and the Conservatives. Now, it would be useful, and particularly at the moment it’s becoming more useful, to know how the seats are going to fall out between the different Northern Ireland parties. But to do that, we’d need a full-scale separate exit poll in Northern Ireland, with all the numbers related to the divisions between the different groups. Suppose we set aside five polling stations in Northern Ireland as part of the poll: from five polling stations we couldn’t possibly get a reliable measurement of how the DUP is doing. It would just be a waste of money. So really, all the exit poll can say about Northern Ireland is that its 18 seats will be taken by Northern Ireland parties.
Andrew Blick: That’s very interesting. Now, some of this may have been dealt with in what you said before, but there are historic examples of electoral pacts between parties. The most famous one is probably the 1918 coupon election, where Lloyd George and the Liberals who followed him into his government, splitting from the Asquith Liberals, had an arrangement with the Conservatives that in predetermined seats they would not run candidates against each other. Now, you weren’t doing this in 1918. But were this to come up again in a future general election, how might an exit poll try and handle that?
Roger Mortimore: It wouldn’t really be very different from what we’re doing already. It would all be part of the measurement of what happens to the votes of each of the parties in different circumstances. But of course, we would need to ensure that we’re building those circumstances in. So if there are seats where party A and party B have got an electoral pact, we must make sure we’re polling in some of those constituencies, so we can see whether the voters do what they’re expected to do or not.
Andrew Blick: Yeah, so were you doing it in 1918, you’d need a seat where there was a Lloyd George Liberal and no Conservative, and one with an Asquith Liberal and maybe an Independent Conservative, and try to get a range of those kinds of seats?
Roger Mortimore: That’s exactly right, yes. And that points up, to some extent, one of the real problems that any exit poll is going to have. If you have weird local circumstances, it’s very hard to take full account of them. If they are really weird and really local, if they’re only happening in one or two constituencies, you can’t measure them with a national poll; they’re just too small, they go below the resolution of the poll.
Andrew Blick: And they won’t necessarily make much difference to the headlines. Although I know in some general elections, including recent ones, every seat can count, but…
Roger Mortimore: That’s right. If it is just one or two, it’s not going to matter much, and we can shrug our shoulders; it would be hard to blame us for that. If it’s 50, and every one of those 50 is separate and different, we’ve got a real problem.
Andrew Blick: Again, a similar question, and it partly comes back to the point about the electoral system. We have a phenomenon of tactical voting. I guess your answer is going to be that you try and model this in, and with tactical voting I suppose it’s hard to know whether people are actually voting tactically or not. But we’re aware of this concept, and insofar as it is real, it happens in every election in the UK to some extent, I presume. So that’s a permanent thing you have to account for.
Roger Mortimore: Tactical voting is not something we try to account for separately. All we’re interested in is knowing how people are going to vote, not why. I’ve no doubt there is a huge amount of tactical voting going on and being incorporated into the exit poll. But if, for example, a lot of the Liberal Democrat vote at a given election is really Labour supporters voting for them to stop the Tories, well, all we need to measure is how many Lib Dem votes there are.
Andrew Blick: Sometimes particular issues can be more divisive than others and can perhaps make people more likely to vote differently, or to vote rather than not vote, than they might otherwise do were that issue not around. Again, I suspect this all works into your model anyway. But is that something you try to take account of?
Roger Mortimore: Yeah, very much so; this is exactly the sort of thing I was mentioning in terms of making sure that we’re polling in the right places. The classic example of this will be Brexit and the 2017 election.
And so it was necessary to make sure that we were polling in enough Labour seats that were leave and enough Labour seats that were remain, enough Conservative seats that were leave and enough Conservative seats that were remain, so that we had a solid body of evidence of what the voters did in each circumstance. And that’s fine and easy when it’s a clear issue like that. Because we had the referendum, we knew how each area had voted in the referendum; we had a very clear idea of how strong the leave and remain sides were in each part of the country. So it was very easy to build into the design of the poll. If you get something that suddenly comes up, where it’s harder to pin down where it’s going to matter, then potentially that’s much trickier. I can’t think of an example that has come up in recent years, but maybe it will come up in the future, and that is where it’s much more difficult.
Andrew Blick: Well, I think you’ve covered a huge amount of ground there and really explained things, certainly for me, and I think those who are listening to this podcast will look upon the polling that’s announced on the next general election night, whenever that may be, in quite a different way. It’d be interesting to talk a bit about how you actually came to be in this post, and what the path was that took you to where you are now, studying individual constituencies and polling stations across the country.
Roger Mortimore: In a sense, it’s a complete fluke. I mean, it’s clearly very much the sort of thing that I want to do, and other people seem to think I’m reasonably good at it. I started off doing my doctorate at Oxford on the British electoral system: the way in which the Boundary Commission draws the boundaries and the political impact of that.
Andrew Blick: So who was your supervisor?
Roger Mortimore: My supervisor was David Butler; I was very lucky. The great David Butler. And before I even got to that stage, my undergraduate tutor had been Iain McLean, who was also very much an enthusiast for elections and helped push me in this direction and get me enthusiastic about it. So I came out of university, having done my doctorate, with this whole geographical aspect of elections being one of the things that I had studied and knew about. And I then went to work for MORI. Now at that point, MORI wasn’t doing any exit polling and had never done an exit poll. But within the next few years, they did their first one.
Andrew Blick: When did exit polling start? Do you know?
Roger Mortimore: The first UK exit poll was in 1970. That was just in one constituency, for the BBC. And then that worked, and they rolled the whole thing out nationally, both BBC and ITV, in 1974.
Andrew Blick: And that’s a very interesting election for them to have started with, in that we went through a cycle: at first, in the late 60s, it looked like Wilson’s Labour government was dead in the water, then it looked like they had clawed their way back, or that was the popular wisdom, that they were actually going to win the general election, and then actually they lost it. So was that poll accurate?
Roger Mortimore: The poll was very, very accurate and played a big part I think in establishing the idea.
Andrew Blick: So it’s partly, again, this luck: it happened to be a good time to have been right, and being right in that election would earn you a lot of credibility.
Roger Mortimore: Exactly. And that’s the way a lot of social science works: being right at the right time.
Andrew Blick: And it could be for the wrong reasons.
Roger Mortimore: For the wrong reasons, or through luck, or, you know, a correct but imprecise conclusion happening to be bang on the mark. That convinces people who don’t really understand what it is anyway.
Andrew Blick: Being wrong for the right reasons rarely wins you any support.
Roger Mortimore: Exactly. And it’s a problem with all of what we do in academia really.
Andrew Blick: There’s a disconnect between our methodology, going about things the right way, the time that takes, and the realistic expectations of what we can deliver, and on the other side what the punters out there want, or what the people who are intermediaries between us and the punters think the punters want, which is quick, decisive, simple answers to questions.
Roger Mortimore: I’m sure it is what the punters want. But sometimes that’s impossible.
Andrew Blick: And then, you would say, you came in and took this on?
Roger Mortimore: Yes, I mean, MORI made the decision to start doing exit polls essentially because the company that had previously been doing the ITV exit poll wasn’t going to carry on doing it. And so there was a tender for the contract, and we went for it and got it.
Andrew Blick: Was this Bob?
Roger Mortimore: It was Bob Worcester who was head of the company then. Whether it was he who made that decision or not, I’m not sure; it may have been whoever was head of political research at the time, and I’m sure Bob was fully behind it. It certainly wasn’t me; I was very junior. But once we decided to do it, clearly I was one of the natural people to bring in on this.
Andrew Blick: And you’ve been doing them ever since?
Roger Mortimore: And we’ve been doing them ever since. Exactly.
Andrew Blick: And that’s a long day, the actual election day itself?
Roger Mortimore: It is and it isn’t.
Andrew Blick: Clearly you’ve done all the work anyway, at that point.
Roger Mortimore: That’s right. My part of the job on election day is simply sitting in the office, hoping nothing goes wrong, and being there: if something does catastrophically go wrong, I’d be the person who has to deal with it, or help deal with it. But of course, across the country, we’ve got 140 teams of interviewers arriving before the polling stations open at seven o’clock, working away doing the polls, interviewing a dozen people an hour until 10 o’clock. It’s a very long day for them.
Andrew Blick: A huge logistical management job?
Roger Mortimore: And then we’ve got telephone centres full of our telephonists collecting the results as they’re phoned in from the polling stations around the country and turning them into computer files. And then, locked up somewhere in a darkened corridor in the BBC, is John Curtice and his team, who have got this data and have to turn it into a prediction in a very short period of time. In total secrecy, and nobody knows what’s going on, because it is quite literally illegal to leak the results of the exit poll before 10 o’clock. So it’s a long day for all of them. My part of it really is a long six weeks beforehand of doing all the work, so that everybody who is working on the day is ready to do the job.
Andrew Blick: So what actually happens on the ground, on the day?
Roger Mortimore: At each of the polling stations around the country, we will have a team of interviewers, usually three in each team, although they may work in shifts during the day and bring in extra people and so on.
They will arrive before the doors open at the polling station at seven in the morning, and they will spend all day selecting a random sample of voters to talk to. We’ll have told them beforehand, because we know how many people are expected to vote at that polling station and have taken that into account, to talk to, say, one in every five voters. So one of them will have a little mechanical clicker to count the voters, and every time they reach a multiple of five, they will stop the voter as they come out and ask them to do the poll. And doing the poll is very simple and straightforward: they’re given a replica ballot paper and asked to fill it in again, the same way they’ve just done, fold it up without showing anybody, and put it into a ballot box.
So it’s very quick. It replicates as far as possible, the action they’ve just done. And we find because it’s quick and simple, it is possible to persuade most people to do it. And we get a response rate of well over 80%, which is absolutely phenomenal for this sort of survey, and that’s vital in terms of making the data reliable. We are really getting a pretty solid sample of all voters at the polling station to take part.
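The one-in-five selection Roger describes is systematic sampling, and can be sketched in a few lines of Python (a minimal illustration only; the interval and the voter counts are hypothetical, not Ipsos MORI figures):

```python
def select_voters(voter_stream, interval=5):
    """Systematic sampling: the interviewer's clicker counts every voter
    leaving the polling station, and each time the count reaches a
    multiple of `interval`, that voter is asked to take the exit poll."""
    selected = []
    for count, voter in enumerate(voter_stream, start=1):
        if count % interval == 0:
            selected.append(voter)
    return selected

# 1,000 voters at a one-in-five interval gives a sample of 200.
sample = select_voters(range(1, 1001), interval=5)
print(len(sample), sample[:3])  # → 200 [5, 10, 15]
```

Counting with a fixed interval rather than choosing voters at the interviewer's discretion is what keeps the sample random with respect to how people voted.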
If a voter says no, we replace them with the next similar voter: the next person of the same gender and approximately the same age and class, if there are obvious characteristics that are similar. But we keep an eye on that, and we check the difference between the original selections and the substituted selections.
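That substitution rule can be sketched as follows (a simplified illustration; the field names and the exact matching criteria are assumptions, not Ipsos MORI's actual procedure):

```python
def find_substitute(refuser, following_voters):
    """If a selected voter refuses, take the next voter with obviously
    similar characteristics (here: same gender and age band); failing
    that, simply take the next person in the queue."""
    for voter in following_voters:
        if (voter["gender"] == refuser["gender"]
                and voter["age_band"] == refuser["age_band"]):
            return voter
    return following_voters[0] if following_voters else None

refuser = {"gender": "F", "age_band": "35-54"}
queue = [{"gender": "M", "age_band": "18-34"},
         {"gender": "F", "age_band": "35-54"}]
print(find_substitute(refuser, queue))  # → {'gender': 'F', 'age_band': '35-54'}
```

Tracking original versus substituted selections separately, as Roger mentions, is what lets the pollsters check whether refusals are introducing a bias.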
At several points during the day, the interviewers will stop, open the ballot box, count all the ballot papers and then phone their numbers back to the telephone bank, which is recording everything. So through the day, at various intervals, we’ve got the data coming in from all the polling stations and being fed to the analysis team, who will add it up, build it into their model and work out how the prediction is shaping up.
And then at around nine o’clock, about an hour before the polls finally close, we have the cut-off for the main part of the poll, the main data prediction. At that point, all the polling stations will phone in their numbers again, and that’s the final data that the analysis team will use to produce the number that is put into a sealed envelope and handed to David Dimbleby, Jonathan Dimbleby or whoever’s presenting that night. They will see the numbers about five minutes before they go on air, so it’s exciting for them as well. But that’s basically the process. It’s very straightforward in a sense, but of course it’s also difficult and lengthy. That all sounds nice, but if it’s pouring with rain, and if the voters don’t want to stop, and if you’ve got local officials who are being unhelpful and so on, it can be a very stressful time as well. The interviewers do an incredibly good job, and the data has always arrived on time.
Andrew Blick: And who are these people and how do you find them at what could be relatively short notice in circumstances like a snap general election?
Roger Mortimore: Well, they’re all part of the regular interviewing panels for Ipsos MORI, or whoever it is doing the poll, so they’re people who are working on all our other market research at other times. We tend to use the same interviewers on the exit poll again and again; they like doing it. Apparently it’s a really popular and ego-boosting thing to do, to be working on the election exit poll. I think they love it, and, unusually, they can go home and see the direct impact of what they’ve been doing all day on the TV screens. But it only works, of course, because we do have these panels of interviewers who are doing face-to-face interviews for market research all the time.
Andrew Blick: And it might be shampoo, it might be supermarkets, or it might be who you’re going to vote for. Or in this instance, who did you vote for?
Roger Mortimore: That’s right, yeah. And one of the facts of life is that the amount of business being done in face-to-face interviewing is steadily contracting, so it’s getting harder and harder to find enough interviewers for jobs like the exit poll.
Andrew Blick: A word you have mentioned a few times is computer. I suppose in 1970 there was a computer of some kind involved, but even in the time you’ve been doing it, there must have been some significant changes in the technology. Has it made things easier, has it just increased people’s expectations, or just changed the way you do it?
Roger Mortimore: I think it’s probably a definite benefit. I mean, in the time that I’ve been involved, there hasn’t been, I think, much of a real change at the statistical end of it. It’s a pretty sophisticated computer model, but we’ve been able to do that since the 1990s. It would have been, I suspect, a lot simpler in the 1970s. And in fact, when I first did it for MORI with ITV, we were doing things on a less complicated basis than anything the BBC were doing. The BBC had the computer model we use now; we were doing much the same thing in principle, but basically dealing with spreadsheets.
Andrew Blick: Okay, all that’s fascinating. To round things off, can you recommend a good book on polling, exit polling or anything else for our listeners?
Roger Mortimore: I don’t think there is a good book on modern, up-to-date British polling.
Andrew Blick: We need to write it, don’t we, Roger?!
Roger Mortimore: I think I’ll probably be stuck with that at some point in my life. There are at least three very good books, but they were all written in the 1990s. Bob Worcester and Nick Moon both wrote very good books on the theory of polling, the history of polling, and how polling is, or was, done in the 1990s. And remember, this was a point at which we barely knew the meaning of the word internet, so things have moved on a lot since Bob and Nick wrote their books.
And in exactly the same period, for the academic market, David Broughton wrote an excellent undergraduate textbook on opinion polls. All of those I’d recommend; they’re very good reads. They just haven’t really been brought up to date.
On the exit poll itself, the best thing by far is a single article written by John Curtice and David Firth, which explains from start to finish how the whole exit poll is designed and how the data is dealt with.
Andrew Blick: Right, so anyone listening to this podcast today who finds this interesting and wants a single statement of the procedures from A to Z should go to that article. Roger, that’s been great. To all our listeners: thanks for joining us on this episode of the Governance Podcast with Roger Mortimore and me, Andrew Blick. To learn more about our upcoming podcasts and events at the Centre for the Study of Governance and Society, follow us on Facebook, Twitter and Instagram at @CSGSKCL. In the meantime, we look forward to seeing you again soon on the Governance Podcast.