All posts by Joe Greenwood-Hau

About Joe Greenwood-Hau

I am a Lecturer in Politics in the School of Social and Political Science at the University of Edinburgh, where my teaching focuses on Introduction to Political Data Analysis and I am wrapping up the Capital, Privilege and Political Participation in Britain and Beyond project. Previously, I was a British Academy Postdoctoral Fellow in the School of Government & Public Policy at the University of Strathclyde, a Teaching Fellow in the Department of Government at LSE, a Data Analyst at YouGov, and a Guest Lecturer in the Department of Government at the University of Essex, where I completed my PhD.

Two MPs’ Views on Public Political Engagement

This post is a complement to one that I wrote to report the launch of the Hansard Society 2015 Audit of Political Engagement. There were two MPs at the event to respond to the findings of the Audit and, after making many and varied points and sharing some (often very funny) anecdotes and quips, they took nicely distinct positions. I’m not going to name them or their parties because I’m more interested in the ideas that they presented than in the individuals themselves (engaging though they were).

The first MP’s response to the Audit can be pretty much summarised as ‘the public are a confusing and contradictory lot.’ I find this sort of sentiment frustrating because it suggests judgement of the public by a politician, and because it seems to expect the public to coordinate their opinions so that they are more easily interpretable, rather than expecting politicians to think about the possible reasons for superficially contradictory opinions. Further, I don’t think that the public are necessarily any more confusing and contradictory than any other big group of people (be they MPs, party activists, or another group) who are confronted with complex topics on which they are expected to have an opinion. I have written on this more extensively in relation to vote choice elsewhere, but it seems clear to me that there could be many factors affecting the positions that people take on politics at any given time. More importantly, the examples cited by the MP were not, to my mind, necessarily contradictory.

To take a specific case, it was observed that 75% of respondents to the Audit think that referendums should be used for important decisions, but only 59% report that they are certain to vote in the EU referendum. This was taken as a contradiction but it isn’t. It’s perfectly possible to think that referendums in general are a good thing but to accept that circumstances might stop you voting in a particular referendum. Further, it’s perfectly reasonable for members of the public to indicate their uncertainty on a complex topic like, say, the EU by stating that they aren’t certain to vote on that topic. If someone doesn’t feel sure that they understand an issue, should they be expected to vote on it just because they think voting is a worthwhile thing? I don’t think so and, in fact, I think that the logical corollary of the right to vote is the right not to vote, for instance if you are uncertain.

Similarly, the MP suggested that the public is ‘schizophrenic’ about Prime Minister’s Questions because it is the event for which most people want tickets and yet polls consistently show a majority who disapprove of the conduct and manner of debate there. Of course, it’s not difficult to see that this isn’t a contradiction; a majority of the public can disapprove of something but that leaves a (potentially sizeable) minority who approve of it. In other words, it’s perfectly possible that all those people who want tickets to Prime Minister’s Questions are members of the public who approve of how it currently works. Alternatively, because it’s probably the most prominent example of parliamentary debate in the UK, it’s also possible that people know about it and thus apply to see it when they want to see Parliament in action. There may be other explanations but the key point here is that it’s not necessary to think that the public are confusing and contradictory. Indeed, as I noted when I wrote about the range of motives for voting for candidates, when politicians dismiss political behaviour that doesn’t make sense to them it demonstrates a lack of willingness to think about why the public might be behaving that way.

Much more positively, the second MP who responded to the Audit showed a greater willingness to accept the idea that it’s easy for politicians to become disconnected from members of the public. This was a nice counterbalance to the first MP’s view in the sense that it acknowledges that MPs have some responsibility to engage with, and understand, the public. Indeed, the second MP gave a fascinating and vivid example of how the actions of politicians can dampen public engagement. Citing the hypothetical example of a leisure centre being built, they noted how a local MP would be sure to get their picture taken at the opening and claim credit for getting it built. In fact, such projects are usually the result of a whole host of actors coming together. Members of the public might make comments to councillors, who could then start raising the issue with the officers at the council, who may then note that the area is affected by problems relating to lack of exercise, resulting in a proposal for a leisure centre. The local MP will probably only get involved in the process towards the end, and even then is only likely to intervene by offering support for an existing plan. Thus, for the MP to take credit for the leisure centre is inaccurate and, crucially, removes agency from all the other people, including local residents, who contributed to the process.

The second MP also went on to say that the public wants MPs to get on with doing things for the country rather than be seen to do things for their own benefit. Indeed, it was noted that when people say ‘you’re all the same’ about politicians it may well be an observation that they’re not seen to care about ‘normal people’. This was a polite rebuttal to the first MP’s observation that there is a contradiction between the public wanting MPs to work together but also to have distinct positions. This may be a difficult balance to achieve but I think it’s a lot easier to move towards it if you make the effort to understand and interpret what members of the public say rather than dismissing their opinions. Thus, again, I had a lot of sympathy for the second MP’s position, and was pleased to see such obvious efforts to understand some of the reasons for the opinions that members of the public express. Crucially, whilst I disagreed with the first MP, it was great to have two such distinct positions expressed in the same space so that they brought each other into contrast. Thus, I thought the Hansard Society did a good job of complementing their presentation of the 2015 Audit of Political Engagement with some lively, if indirect, political debate.

Report on the Launch of the 2015 Audit of Political Engagement

Introduction:

This morning I was at the launch of the Hansard Society’s 2015 Audit of Political Engagement, which is, as always, a laudable and valuable piece of work. For those of you who don’t want, or don’t have time, to read the whole report (and couldn’t make it to the launch) I thought I’d summarise what was said. I’ll stick to the structure that they used, but I’ve also written a separate post (with a little more opinion) on what was said by the two MPs who were invited to pass comment at the launch.

I always await the Audit with a mixture of excitement and trepidation, the former because it’s a fascinating piece of work, the latter in case it answers all the questions I’m focussing on in my own research. This year was no exception and, fortunately, they rewarded my excitement and proved my trepidation misplaced (though, from a less self-interested perspective, it would be brilliant to have the Hansard Society looking into the structural and perceptual influences on political engagement). The launch event was at Parliament, which is fitting given that the focus is on engagement with that institution, though I worry that it makes it less accessible to the public at large. Still, it was open to those who wanted to attend, and consisted of a succinct summary of some key points emerging from the data.[1] These can be grouped under the headings of the election effect, perceptions of Parliament, and the EU referendum.

The Election Effect:

There appears to have been a post-election bounce in political engagement, with some areas showing much higher levels than in 2014. More people reported certainty to vote (up by 10 points to 59%), interest in politics (up by 8 points to 57%), knowledge of politics (up by 8 points to 55%), satisfaction with politics (up by 7 points to 33%), and a sense of efficacy (up by 3 points to 35%). As you can see, the latter two areas have much lower levels of engagement than the others, which has also consistently been the case in the past. In addition, I noticed a trend that wasn’t commented on: there were distinct peaks in many of these areas in both 2010 and 2015 (i.e. general election years) with an apparent decline in between.

The above trends in engagement hold across age groups although, despite the positive movement, young people were still the least likely to report certainty to vote (39% compared to 59% overall). At the same time, the Audit recorded the highest level of party support (41% being either very or fairly strong party supporters) since the beginning of the series in 2004. I’m intrigued by whether the increased engagement amongst younger people and the increased party support could both be due, in part, to the ‘Corbyn effect’, which has been widely reported to have engaged younger people.

Interestingly however, the above uptick in engagement was counterbalanced by a decline in the sense of influence at the national level reported by respondents (13% feel influential compared to 17% in 2014). As is commonly the case, the reported sense of influence was also lower at national level than at local level (with 25% feeling influential at local level), whilst also being lower than the reported desire to be involved at both local and national level (46% and 41%, respectively, wish to be involved at those levels). Thus, more people wish to get involved in politics than think they can influence it, perhaps because it doesn’t necessarily make sense to get involved with a system that you can’t influence, even if you’d like to. Of course, this is an abiding problem of political engagement; people need to get involved to influence politics but they won’t feel influential unless they get involved (and perhaps not even then).

Perceptions of Parliament:

Net reported knowledge of Parliament is now positive for the first time since the Audit began (i.e. more people report being knowledgeable than report not being knowledgeable, by a whopping 5 points), though it would be interesting to see some measures testing knowledge (which should relate to both local and national contexts, and practical and abstract knowledge) alongside the question on self-perceived knowledge. There were also increases (again, between 2014 and 2015) in the number of respondents agreeing that Parliament ‘holds government to account’ (up by 7 points to 42%), ‘encourages public involvement in politics’ (up by 3 points to 28%), ‘is essential to democracy’ (up by 12 points to 73%), and ‘debates and makes decisions that matter to me’ (up by 10 points to 58%). This very positive-looking slew of findings, it was pointed out, could be another result of the election effect.
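For clarity, the ‘net’ figure is just the gap between the two proportions. The split below is purely hypothetical, chosen only to be consistent with the reported +5 (the actual proportions are in the full report):

```latex
% 'Net' = share reporting knowledge minus share reporting none.
% The 52/47 split is illustrative, not taken from the Audit itself.
\[
\text{net knowledge} = \%\,\text{knowledgeable} - \%\,\text{not knowledgeable}
\approx 52\% - 47\% = +5 \text{ points}
\]
```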

Satisfaction with Parliament also increased (by 5 points when compared to the 2013 Audit, when the question was last asked) but still stands at only 32%, which is lower than in the first Audit in 2004. Satisfaction with MPs continues to be higher in relation to local MPs (35% satisfied) than in relation to MPs in general (29% satisfied), though the gap is closing due to a big (6-point) bump in satisfaction with MPs in general (perhaps surprisingly). Despite the closing gap, this remains a good example of the paradox of distance, in which people rate their local services (e.g. schools or hospitals) and the people they have encountered (e.g. immigrants they know or their local MP) more favourably than they do those services (e.g. education or health) or groups (e.g. immigrants or MPs) in general. This could be logical because it is reasonable to assume that the national picture or a wide group of people will include more variation than the specific service or person you’ve encountered and thus may not be as good overall. Also, it could be explained on the grounds that people are likely to be more favourable towards what they know and have experienced than they are towards distant or abstract concepts. Of course, a more pessimistic interpretation could be that people are disposed to be negative towards (or prejudiced against) some services and groups generally despite encountering examples of them being good individually.

Moving on, the Audit suggested that undertaking political acts continues to be a minority pursuit. Indeed, even in terms of willingness to undertake an act in the future (rather than reporting having done so in the past), only contacting an MP or Peer had more than half (52%) saying they would do it. Willingness to create or sign an e-petition came in second (with 35%, closely followed by paper petitions with 34% willing to create or sign one), and these two areas constitute by far the most used, or potentially used, routes to engage with politics. Importantly, almost all of the areas of political activity had increased in terms of both reported acts and willingness to act in future, which could well be another result of the general election. Lastly in this section, there was a statistically significant increase (the only time this was reported) in the belief that Prime Minister’s Questions deals with the important issues facing the country, and in agreement that it is grounds for pride in Parliament, though both are still very low (45% and 17% agreement respectively, despite each being up by 5 points compared to 2014).

The EU Referendum:

As a kind of footnote to the presentation of the results it was reported that there are high levels of interest and intent to vote in the EU referendum (63% interested, 59% certain to vote), coupled with low levels of satisfaction with, and knowledge of, the EU (21% satisfied, and a net knowledge score of -24%). It was thus suggested that there may be too much heat and not enough light in the debate around the referendum. This suggestion appeared to be contradicted by one of the MPs on the panel, who argued that people need to feel less like the referendum is a debate over technicalities between bureaucrats and more like it matters to day-to-day life, though I don’t think those two things are mutually exclusive. It’s possible to outline technical information about the referendum and relate it to the meaningful ways in which the outcome could impact on people’s lives. As with politics in general, the aim should be to strike a balance between being passionate and being informed, which can be a tough one to get right.

Conclusion:

All of the above was fascinating but I felt the launch was lacking in terms of considering who is engaged with politics. Are some groups more interested than others? Do some groups report undertaking more political acts than others? These are the questions relating to political engagement that underpin my research, along with questions of why any such differences between groups exist. Fortunately, the Hansard Society had a ready-made response in the form of the following summary paragraph (in the Audit and on the website) relating to inequalities in engagement:

‘Generally, the most politically engaged in the Audit series tend to be male, older, white, higher educated, affluent, home-owning citizens. The social class gap in electoral participation continues to rise: there is now a 37 percentage point difference between the certainty to vote levels of those in social classes AB and DE, an increase of six points in 12 months. However, the gap between the social classes tends to be much smaller in relation to questions about satisfaction with politics and institutions. Younger people (aged 18-24) are also more likely to be satisfied with the politics and institutions of our political system, and have a greater sense of their own potential to influence it than are other more generally engaged groups. This is also true of BME adults, although they are much less likely to say they have actually undertaken some form of political action than white adults in the last year.’

I find the first two sentences in the above the most striking, and I will certainly be reading the report more closely with them in mind. I also hope that when I finish my research (ideally sooner rather than later) I will be able to shed at least a sliver of light on why those discrepancies exist.


[1] In terms of methodology, the Audit is a time series study in its thirteenth year, and can be seen as an annual health check on the state of political engagement in the United Kingdom. The survey that all of the results are based on was fielded last December, by Ipsos MORI, to a representative sample of 1,231 British adults across Great Britain (i.e. excluding Northern Ireland). The Audit should not be used as the basis for predictions; rather, it is a snapshot at a particular moment in time. It presents a complex and contradictory picture, which is unsurprising given people’s lukewarm attitudes towards Parliament (and politics).
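For a rough sense of the sampling noise around figures from a sample of this size, here is a back-of-the-envelope 95% margin of error, assuming simple random sampling (which quota-based fieldwork like the Audit’s only approximates):

```latex
% Worst-case (p = 0.5) 95% margin of error for a proportion from n = 1,231.
\[
\mathrm{MoE} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
= 1.96\sqrt{\frac{0.5 \times 0.5}{1231}} \approx 0.028
\]
```

That is roughly ±3 points on any single percentage, which is one reason year-on-year movements of a few points should be read cautiously, and perhaps why statistical significance was explicitly flagged only once in the presentation.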

On a Relative’s Benefits Tribunal

A few Fridays ago (fortunately not on Easter Friday) I got up early and caught the train to Cambridge, where I needed to attend the County Court. This was so that I could be present whilst a close relative went through a tribunal to appeal the withdrawal of her Disability Living Allowance and the Personal Independence Payment (PIP) element of the benefits that replaced it. I am writing this post to give an insight into the effect of the current (and last) government’s policies on disability benefits both on a recipient from whom they have been (partially) withdrawn and, much less importantly, on someone who’s not a recipient (i.e. me).

To give some context, my relative has been diagnosed with Borderline Personality Disorder (BPD) and has been signed off work by her GP for the last seven years. For those of you who wish to know more about the condition, you can read about it here, here, or here. My experience of having a relative with BPD is that she is fine most of the time but experiences episodes of acute depression or anxiety and self-harm. These can be more predictable, for instance when they are associated with a time of year, or less predictable, for instance when they are triggered by a negative experience, but there is always the possibility that such an episode is just around the corner. The likelihood of such an episode is falling as my relative gets better at dealing with the patterns and triggers that affect her, as those around her improve their understanding of BPD, and also as the health service gets better at supporting people with such conditions. Still, the risk of an episode is ever-present, and is heightened when the government’s squeeze on spending threatens provision of things like the Complex Cases Service, which provides excellent support to those with personality disorders. The threat of closing such services is a threat to remove a safety net from those who demonstrably need it. It is worth noting that the government’s approach to limiting spending not only means that benefits are withdrawn from recipients but also that support services are closed or reduced at the same time, thus doubly impacting on users.

The last time my relative had a severe mental health episode was on the day that she received notification that her PIP was to be withdrawn by the Department for Work and Pensions (DWP, still headed at the time by Iain Duncan Smith). It is clear that receipt of the letter from the DWP played an important part in triggering the episode. That was in October 2015, and my relative has had periods of acute anxiety and depression since then due to the fact that she has had her income notably reduced and has spent the entire period with an approaching tribunal hanging over her. It is testament to her resilience that she has borne the brunt of preparing for the tribunal and sought appropriate advice and support from the Citizens’ Advice Bureau (CAB, one of the first services to be cut as a result of the last (and now the current) government’s austerity agenda), from Complex Cases, and from a friend who works for Unite the Union.

The particularly unpleasant twist in the above is that her very capacity to appeal the withdrawal of her PIP might be seen by some as evidence that she could get a job and cease receipt of benefits. Such a view is based on a fundamental misunderstanding of my relative’s condition, and this is a problem with the government’s assessment system that is being implemented by ATOS. By that assessment system’s reckoning my relative can make a cup of tea, cook for herself, and clean herself, so she should not be entitled to PIP. It doesn’t matter that she has periodic and, at times, severe mental health episodes (which can be triggered, for instance, by the stress associated with a full-time job); the fact that she is physically able most of the time means that she doesn’t qualify. My relative’s capacity to live her life is mostly due to her own abilities but it is also, in part, to do with the support available to her from family, from friends, and from the state. All of those things are needed and they complement each other; the state can’t replace family or friends but neither can family and friends provide the financial support and mental health services that the state offers. The withdrawal of any one of those sources of support creates a more precarious situation for my relative.

When I arrived at the court on that Friday a few weeks ago my relative seemed fine (she can be good at hiding inner turmoil from those around her). As the time passed, however, it became increasingly apparent that the stress of being assessed (having already been assessed once by ATOS) was hard to bear. She was impatient for the tribunal to start and desperate for it to be over and to know the result. After the hour of the tribunal itself, during which I waited outside and my relative was supported by a lawyer from Citizens’ Advice Bureau, she emerged in tears. This was not because the tribunal panel itself had been horrible (indeed they approached the situation with admirable humanity) but because my relative had to prove that she deserved to receive the financial support that enables her to live her life. In some way, even despite the decency of the panel members, my relative was on trial, being asked to prove that she had a condition that justifies the support she receives. Never mind that her relatives and friends, her GP, and the staff who support her at Complex Cases had no doubts that she needed PIP. Never mind what the people who know her best think, a series of tick-boxes on an ATOS assessment form meant that my relative had to bear the burden of proving her right to receive support from the state.

The good news is that, after a short but tense period of deliberation (during which we waited outside), the tribunal panel ruled in favour of my relative, awarding her the standard payment from the DWP (including back-payment). This affirmed my broad faith in the British justice system,[1] and I’m thankful to the panel members (who I will probably never meet) and the lawyer from CAB who supported my relative. Still, she had to live through five months with her PIP withdrawn and a tribunal approaching. Thus, this government’s policies on disability benefits (even before the recent budget, resignation of Iain Duncan Smith, and subsequent capitulation on the part of the government) have had a direct negative impact on my relative and an indirect negative impact on me. I don’t for a moment resent or regret offering support to my relative, but I do oppose a government policy that places stresses and demands on those in receipt of benefits and also their families and friends. I had to balance attendance at the tribunal alongside my research, my teaching, and completing job applications, which was a psychologically exhausting experience. Whilst I don’t think this is the most important impact of the government’s benefits policies I do think that we should assess policies on all of the impacts that they have, and we can’t ignore the possibility that placing stress on recipients of benefits has a ripple effect that impacts on those around them as well.

I have tried to avoid hyperbole and generalisation in what I’ve written here, and to provide insight into this particular case. However, I think we can safely say that the experience of my relative is not unique. Indeed, I also have a friend who has had to go through a tribunal to prove his chronic health condition warrants PIP payments (which, of course, it does, as his tribunal ruled). Further, I imagine that there are many who do not necessarily have the knowledge or skills to challenge the withdrawal of their benefits (my relative has resolved to use her experience to help such people). So, the cases that I know are not necessarily examples of the people who are most in need of help and support, and this puts me in mind of a conversation that I had with a friend before the 2010 general election. He was arguing that it would make little difference which party was elected because the country would largely continue to run regardless (e.g. bins would be collected, schools would stay open, and trains would keep running). I pointed out that it was very unlikely to be privileged people like us (we’re both educated, financially secure, white, heterosexual men with no disabilities or chronic health conditions) who would be significantly affected by a change in government. Rather, it is the less privileged who are most vulnerable to changes in government policy, and it seems clear to me that this has been the case since 2010. Indeed, I now have personal experience of the negative impact that the austerity agenda can have on someone who receives support from the state.


[1] On the basis of its capacity to make evidence-based decisions that challenge the unjust consequences of government policy, and thus to provide recourse for those who might not otherwise have it. In that light, let’s not even get started on cuts to legal aid.

The Cathie Marsh Lecture on the Polling Miss

Back in November last year I attended the annual Cathie Marsh Memorial Lecture at the Royal Statistical Society, which was excellent (as it has been when I’ve attended before). The focus of the lecture was on polling failure and the future of survey research, and it was delivered by Professor Patrick Sturgis, who is chairing the inquiry into the performance of the polls preceding the general election. Given that the polling inquiry is due to release its results this week, I thought this would be an opportune moment to set down my record, and interpretation, of the points made by Prof. Sturgis back in November. To be clear, the following is a mix of his and my thoughts, so if you’re interested in seeing Prof. Sturgis’ own words then you can watch the full lecture here.

The lecture began, rightly, with some kind words remembering Cathie Marsh, before engaging in a little definition. To wit, it is possible to differentiate between polls and surveys on the grounds of snootiness, quality, and purpose. Taking the latter two, more defensible, grounds, it has been argued that surveys are higher quality than polls (based, as they usually are, on random (or at least closer to random) samples) and that their purpose is broadly investigatory (i.e. academic) rather than political or democratic. Crucially, the point was made that this distinction is now less rigid than it was in the past. Still, even if the distinction between the two is less rigid, the fact that surveys and polls are arguably distinct on the basis of quality didn’t seem to bode well for the latter. Like any good academic, though, Professor Sturgis was quick to introduce a note of complexity.

It’s not as simple, he argued, as saying ‘the pollsters got it wrong’. Indeed, they did a good job in predicting the UKIP vote, the SNP surge, and the Liberal Democrat collapse, so it was just, alas, on the ‘main event’ that they went skew-whiff. Whilst the latter point may seem the most salient, Prof. Sturgis went on to remind the audience that without polls there may be a growth in even less accurate speculation about the outcomes of elections. There is certainly a healthy dash of truth in his statement that we couldn’t do better on that front by relying on Twitter, Facebook, and equivalent sources.[1] This, of course, does not mean that we should settle for polling as it is (and, in my experience, the polling companies have far from rested on their laurels since May), especially in light of the historical trend that was outlined wherein polls have increasingly underestimated the Conservative share of general election votes whilst at the same time overestimating the Labour share. This may mean, as remarked, that we are now using something that measures pounds to measure ounces (if you’ll forgive the imperial units).

With the magnitude of the problem established (it’s not great but still better than it could be), Prof. Sturgis turned to possible explanations for the polling miss, all of which have been circulating since the day after the general election:

  1. Late swing. In other words, a load of people might have changed their minds just before they voted (and largely moved to the Conservatives) thus rendering the polls, which were conducted at the latest a day before, wide of the mark.[2]
  2. Sampling and weighting. As Prof. Sturgis pithily put it, polls are ‘modelling exercises based on recruited samples’. So, maybe the polling companies have recruited the wrong people to the panels of respondents that they survey, or perhaps they had out-of-date or incorrect assumptions underpinning the weights that they apply to their samples to correct for unrepresentative recruitment (there’s a rough sketch of how such weighting works after this list).
  3. Turnout misreporting. Perhaps a load of people who said they were sure they’d vote and that they would do so for Labour ended up not being able to make it to polling stations. At the same time, perhaps more of the people who said they’d vote Conservative managed to actually do so in practice.
  4. Don’t knows or refusals. If the people who said they didn’t know who they’d vote for, or who refused to say, broke to the Conservatives more than Labour then it could explain the disparity between the polls and the election result.
  5. Question wording. If the questions that are asked do not prompt a similar decision-making process to the one that people go through before they actually cast their vote then they may give a different answer.[3]
  6. Voter registration and postal voting. It may be that issues with registering to vote disproportionately affected voters for one party (i.e. Labour), or that those who held postal votes were not accurately taken into account. As Prof. Sturgis pointed out, this is unlikely to be the case since there were relatively small numbers in both groups.
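To make the sampling-and-weighting explanation concrete, here is a minimal sketch of cell-based weighting in Python. Every number, group, and variable name is invented for illustration; real pollsters weight on many more variables, with considerably more sophisticated schemes.

```python
# A sketch of cell-based post-stratification weighting, the kind of adjustment
# referred to in explanation 2. All figures are invented for illustration:
# three age groups, their shares in a recruited sample versus the population,
# and hypothetical vote intentions for one party within each group.

sample_shares = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}
population_shares = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}
intentions = {"18-34": 0.40, "35-54": 0.33, "55+": 0.28}

# Each respondent's weight is their group's population share divided by its
# sample share, so under-recruited groups count for more in the estimate.
weights = {g: population_shares[g] / sample_shares[g] for g in sample_shares}

unweighted = sum(intentions[g] * sample_shares[g] for g in sample_shares)
weighted = sum(intentions[g] * sample_shares[g] * weights[g] for g in sample_shares)

print(f"Unweighted estimate: {unweighted:.3f}")
print(f"Weighted estimate:   {weighted:.3f}")
```

The catch, which is what the ‘modelling exercises’ framing points to, is that weighting can only correct for the variables you weight on, and only if the target shares are right: if the young people a panel manages to recruit are politically atypical of young people in general, the weights fix the demographic totals but not the bias.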

We’ll come back to which of the above explanations is most convincing but, before doing so, it was noted that the polling companies’ results were surprisingly similar given their methodological differences. This may have suggested uncoordinated herding by the companies, whereby they looked at each other’s results and adjusted their methods to replicate those of their competitors (based on fear of being too far from the pack). This is obviously important (and related to the point about polling as a ‘modelling exercise’ above) but it’s an issue that needs to be considered separately from the original cause(s) of the disparity from the election result.

Since you’re reading this I guess you’re aware of at least some of the implications of all the above, but we were helpfully reminded of them nonetheless. Namely, such a high-profile polling miss is likely to reduce public interest in polls and surveys on the basis that trust in them (along with trust in the media and politicians) has been dented. This could have the knock-on effect of further reducing response rates, making it even harder for pollsters and survey researchers to gain accurate results in future. This is something that the polling companies appear acutely aware of; I wouldn’t go so far as to call this an existential threat to them but it’s obviously had serious reputational repercussions and could continue to make their business harder for some time.

Despite the above, Prof. Sturgis went to some effort to moderate concerns, suggesting (unexpectedly) that the polling miss will actually have a relatively minimal impact. First, it’s rather difficult to estimate election results, in part because respondents are best at answering questions about their recent behaviour rather than about what they will do in the future. Thus, it shouldn’t be too much of a surprise that the polls get it wrong at times, which links to the previous point about measuring ounces with an instrument for pounds. Second, returning to the opening distinction between polls and surveys, it is likely that the damage will impact more on the former than the latter, because surveys with samples that were recruited through other means, such as the exit poll (which could also ask about recent behaviour rather than future behaviour) and the British Election Study face-to-face survey, did a better job of approximating the outcome. It is important that Prof. Sturgis referenced this distinction again at this point in the lecture, as will be seen below. Third, a number of different research designs (phone and online, varying in proximity to randomness) failed to predict the result, so no particular company is implicated, meaning the consequences will be spread between them. Fourth, the rise of opt-in panels (which are low cost, have a rapid turnaround, allow for ever-increasing functionality, and can accommodate client involvement in survey design) seems inexorable, so the polling miss is unlikely to stop it.

The last of the preceding points (which links to his restatement of the distinction between polls and surveys) is key, because Prof. Sturgis went on to note the increasingly difficult time that those who conduct random sample surveys have. Response rates are falling (even more so with random digit dialling phone surveys than face-to-face), so it takes more time and effort, and therefore money, to achieve the same sample size. Thus, in certain key senses random sample survey research is increasingly suffering by comparison to opt-in panels. This is a paradox in the sense that it is also random sample surveys, as noted above, that did a better job of predicting the outcome of the general election. And thus, we return to which of the possible explanations for the polling miss seems most likely to account for it. The focus of much of the latter part of the lecture on the difference (in quality) between random sample (survey) research and opt-in panel (polling) research suggests that sampling and weighting are likely to be the main culprits (though other explanations may well have a part to play), and this is a position that is supported by work that has been done by both the British Election Study team and the British Social Attitudes survey team (both of which use random samples). It is also supported by Prof. Sturgis’ comment that there is not a great deal of value in those who adopt a random sample approach chasing non-response: this is an unnecessary additional cost (for an already expensive method of gathering data) and random sampling is already better than non-probability, opt-in panel-based sampling. Thus, Prof. Sturgis concluded, reports of the death of random sample surveys are exaggerated.

So, what do we, or at least I, take from this? Well, if sampling and weighting were the main problem with the general election polls, which seems perfectly plausible, then the repeated distinction between surveys (based on random samples) and polls (based on samples drawn from opt-in panels) becomes particularly salient. This is especially so for those working with survey research in academia (especially quantitatively orientated social science), because survey methodology is a whole sub-field of academia in its own right, and because it reflects an ongoing debate about whether opt-in panel samples (usually online) are good enough to base robust academic research conclusions on.[4] The polling miss, and Prof. Sturgis’ lecture, seem to suggest that the latest round in that ongoing debate favours the sceptics’ point of view. In other words, it may now be harder for those who conduct research based on opt-in panel samples (such as myself) to convince academics to trust our results.

And what about beyond academia? I was recently asked why all this fuss about polling really matters. My answer was that some in the media may feel that they were led up the garden path by polling companies and were therefore implicated in ‘misleading’ the public, who may now be less trusting of both polling companies and the media. Crucially, there is also the argument that the media’s focus on the ‘horse race’ supplied by the polls took attention away from the policy positions and political issues that should have been reported on more, which may have influenced the outcome of the election (and that would be pretty important if it could be proved to be true).[5] This is especially problematic because the race that took so much attention turned out to have a much clearer winner than had been anticipated. So, the polling miss is important because it has implications for public trust in polls, and in the media that report them, which means that it has implications for how, and whether, the media report them in future. This means that it may also have implications for future election campaigns and perhaps even results. As I have said, the polling companies (and media) seem to be taking these implications very seriously, as demonstrated by their full cooperation with the inquiry. The release of that inquiry’s report will make the precise nature of the aforementioned implications clearer, so I’ll certainly be paying attention to it.


[1] If anyone who’s critical of polls or survey research ever tries to make a point about what people think based on what they’ve seen on social media then I implore you to call out this contradiction.

[2] Notably absent from this list is the idea of ‘shy Tories’, or people who don’t want to admit to polling companies that they vote Conservative. This was a big part of the explanation for the polling miss at the 1992 general election but seems much less likely to be part of the problem this time round.

[3] There’s absolutely tons of research on the impact of question wording (down to minute levels of detail), and this informed the approach of those who conducted the Exit Poll, which asked respondents to repeat the voting process with a replica ballot paper and ballot box, rather than just answering a survey question. This, along with their targeting of a representative sample of polling stations, may have contributed to the high level of accuracy that the Exit Poll achieved.

[4] If you’re interested in looking into this debate you can start with the following two articles that represent the two sides:

Neil Malhotra and Jon A. Krosnick, ‘The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 286-323 [presenting evidence that non-probability samples drawn from internet panels may produce less accurate results].

David Sanders, Harold D. Clarke, Marianne C. Stewart, and Paul Whiteley, ‘Does Mode Matter for Modeling Political Choice? Evidence From the 2005 British Election Study’, Political Analysis, Vol. 15, No. 3 (Summer, 2007), pp. 257-285 [presenting evidence that non-probability samples drawn from internet panels may produce results that are not (statistically) significantly different from random face-to-face samples in terms of the relationships between variables].

[5] I’ll go out on a limb and state that I don’t think this will be proven any time soon; it’s remarkably difficult to prove the impact of particular factors on election outcomes, and this would take quite a lot of (quite expensive) academic research to provide robust evidence (if that’s even possible now that the event has passed), and with no guarantee of a clear conclusion.