How Ipsos got ANC support completely wrong in 2016


Inside Politics FEATURE: In 2016 eNCA teamed up with market research company Ipsos to produce a weekly voter tracking poll. It got the ANC’s support levels horribly wrong. But, through a clever piece of last-minute spin, it managed to muddy the waters and avoid ever accounting for the mess that was the eNCA/Ipsos weekly polls. It is a familiar pattern Ipsos seems to indulge in every election. Here is how it all went down.


By: Gareth van Onselen


1 April 2019

Introduction

It is election season, and that means a raft of political polls. One of the mainstays of the South African market research universe is “research giant” Ipsos. Since July 2018, it has already released three such polls, which have included a range of numbers – particularly with regard to DA support – that are deeply problematic, for a variety of reasons.

The first set of these was released in July 2018, when Ipsos put the DA on 28% in the Western Cape (on the provincial ballot) and on 15% in Gauteng (provincial ballot). Nationally, it put the DA on 13%. In January 2019, it had the DA on 14% nationally and, in March, on 18% nationally. Those numbers were a nonsense.

The methodological problems with the Ipsos polls are many and various. In July, Ipsos did not distinguish between registered and unregistered voters (and used a sample that included 15-year-olds – Ipsos stated that it had “assumed” that those aged 15 years and older would have two birthdays before the 2019 elections, thereby making them eligible to register to vote). It had no way of allocating undecided voters, and so had huge numbers of undecided voters in a number of provinces. And it did not generate any turnout scenarios, which are obviously a helpful way of gauging a more realistic picture of the electoral landscape.
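For context, allocating undecided voters is a standard step in poll analysis. The sketch below, in Python, shows only the simplest conventional approach – distributing the undecided share in proportion to decided support – with invented figures; it is an illustration of the general idea, not Ipsos’s actual method.

```python
def allocate_undecided(decided: dict[str, float], undecided: float) -> dict[str, float]:
    """Distribute the undecided share across parties in proportion
    to their decided support - the simplest conventional allocation."""
    total = sum(decided.values())
    return {party: share + undecided * (share / total)
            for party, share in decided.items()}

# Invented figures, purely for illustration:
print(allocate_undecided({"ANC": 40.0, "DA": 25.0, "EFF": 10.0}, 25.0))
# -> {'ANC': 53.3..., 'DA': 33.3..., 'EFF': 13.3...}
```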

Ipsos has since addressed some of these problems. In January, it did distinguish between registered and unregistered voters. In March it distinguished between registered and unregistered voters, allocated undecideds and provided some turnout scenarios. And so, as its methodology improves, its DA numbers seem to be getting more reasonable with time. It helps no end, of course, that as the election approaches undecided voters typically make up their minds, making polling closer to an election inherently more accurate. Nevertheless, Ipsos always seems to poll the DA too low – something about its particular methodology means it always has the official opposition below its final percentage.

Thus, the Ipsos pattern in 2019 is typical of Ipsos polling generally, as exemplified by its “Pulse of the People Study”, published every six months and the backbone of its political market research. The further one gets from Election Day, the more unreliable Ipsos numbers get, particularly with regard to the DA. But Ipsos has a clever way of circumventing this problem: it publishes a poll the week before Election Day. And, primarily because undecided voters have made up their minds, it tends to be generally accurate. Ipsos then promotes that particular poll as evidence of its accuracy, and simply explains away or ignores any earlier errors as the product of an uncertain and fluctuating marketplace.

It’s not true, of course. Ipsos is simply wrong. As the 2019 election has not yet taken place (and we cannot yet gauge how far off the mark Ipsos was), let us use the last election – in 2016 – to demonstrate how Ipsos gets its percentages wrong, and then evades responsibility for them, in the manner set out above. 2016 is a helpful illustration because, on that occasion, it was the ANC’s support levels that Ipsos failed to poll properly – the result of a very particular methodology it employed that year, for a set of weekly polls in conjunction with eNCA. Yet it dealt with the problem in exactly the same way it always does – by pretending there was no problem. Here is how.

The 2016 eNCA/Ipsos polls

In the run-up to the 2016 local government elections, Ipsos teamed up with eNCA to produce a weekly tracking poll over eight weeks to map party support in three contested metros: Johannesburg, Tshwane and Nelson Mandela Bay. Each week, along with a range of other indicators, eNCA/Ipsos would track support for the African National Congress (ANC), Democratic Alliance (DA) and the Economic Freedom Fighters (EFF) in each metro.

Each poll was published and broadcast on a Thursday night, from 9 June 2016 to 28 July 2016 – the final poll appearing just five days before the elections on 3 August 2016. Here is a summary of what eNCA/Ipsos found and broadcast:

[Table: eNCA/Ipsos 2016 Weekly Polls]

With regard to support for the ANC, the results were incredibly poor and significantly out of line with the final IEC percentages for the PR ballot (the ballot on which people vote for metropolitan governments) – in all three cases by more than double the margin of error and, for Tshwane and Johannesburg, by more than four times the margin of error.

In summary, in its final poll on 28 July, five days before the election (see the arithmetic sketch after this list):

• eNCA/Ipsos put the ANC on 32% in Johannesburg. Five days later, the ANC ended up with 44.92% on the PR ballot, a difference of 12.92 percentage points. Ipsos claimed its margin of error for Johannesburg was “between 1.2% and 2.8%”. Even at 2.8%, it was out by more than four times that.
• eNCA/Ipsos put the ANC on 26% in Tshwane. Five days later, the ANC ended up with 41.48% on the PR ballot, a difference of 15.48 percentage points. Ipsos claimed its margin of error for Tshwane was “between 1.6% and 3.7%”. Even at 3.7%, it was out by more than four times that.
• eNCA/Ipsos put the ANC on 30% in Nelson Mandela Bay. Five days later, the ANC ended up with 41.5% on the PR ballot, a difference of 11.5 percentage points. Ipsos claimed its margin of error for Nelson Mandela Bay was “between 2.5% and 5.7%”. Even at 5.7%, it was out by more than double that.
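The arithmetic behind those multiples is easy to verify. A minimal sketch in Python, using only the figures quoted above:

```python
# Final eNCA/Ipsos poll (28 July 2016) vs the IEC PR-ballot result,
# measured against the upper bound of Ipsos's own quoted margin of error.
polls = {
    "Johannesburg":       (32.0, 44.92, 2.8),
    "Tshwane":            (26.0, 41.48, 3.7),
    "Nelson Mandela Bay": (30.0, 41.50, 5.7),
}

for metro, (polled, actual, moe) in polls.items():
    miss = actual - polled
    print(f"{metro}: out by {miss:.2f} points, {miss / moe:.1f}x the margin of error")
# Johannesburg: out by 12.92 points, 4.6x the margin of error
# Tshwane: out by 15.48 points, 4.2x the margin of error
# Nelson Mandela Bay: out by 11.50 points, 2.0x the margin of error
```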

Saved by spin and misdirection

It is worth re-stating that these numbers were broadcast on national television and widely advertised on the eNCA website (although many of those stories have since been removed). They were seen by millions of voters in the lead-up to the 3 August election.

The ANC’s percentages represent a profound polling failure. Had the media better understood market research, or grasped the extent of eNCA/Ipsos’s monumental error, the whole episode would have done serious damage to the company’s reputation. But, as we shall see, it evaded almost all accountability.

The reasons for the failure are many and various but all stem from a fundamentally compromised methodology, one which dramatically under-represented ANC voters, among a myriad of other problems (see the full methodology below).

The ANC complained bitterly about the eNCA/Ipsos polls. It was right, too. But Ipsos seemed more or less immune to criticism. In an interview on eNCA just after the first two eNCA/Ipsos polls had been released, Mari Harris, Ipsos Public Affairs Director, and host Jeremy Maggs both laughed off party political criticism from first principles (see here).

Jeremy Maggs: “And that, Mari, just very quickly, that is the world over, is that parties will embrace when they show up well, but they will reject it emphatically when they don’t show up well. That is to be expected, that is part and parcel of it, that is exactly what you want?”
Mari Harris: “That is exactly it. You know, you react to that but, in the end, as I said, it’s independent, it’s there, it’s for everybody to see. It is not something that they can downplay necessarily or hide away.”

On 8 July, eNCA editor-in-chief Anton Harber (the eNCA/Ipsos polls were his brainchild) addressed the ongoing complaints (see here), but his attitude, essentially, was no different from that of Harris – this was political posturing, no more.

“The SACP is particularly concerned about what the polls showed about Tshwane – where the DA was reported as pulling far ahead of the ANC in the wake of the recent intra-ANC violence in the area. This does not match what party workers are experiencing on the ground, they argue. So which would you believe: a scientific poll by an international company with a solid reputation for this kind of work, and a history of accurate polling in this country, or the gut feel of party campaigners on the ground? Interesting question.”

It was an interesting question. A good poll is, of course, a more scientific, objective measure of party support than “on-the-ground” sentiment. A bad poll, however, is no different from it. But there was no attempt from Harber or Ipsos to revisit their methodology. “We invited the SACP to visit us this week, as we are happy to share with them our polling company’s methodology and the evidence they have to show their sampling is sound and matches international standards,” Harber wrote. But the sample was not sound. And no less than Ipsos itself would soon prove that to be the case.

With the elections looming, all eight weeks’ worth of polling presented, and facing both a PR and a professional problem, Ipsos released the results of its own “Pulse of the People” Study on 31 July 2016 – presumably a last-minute attempt to publicly interdict the eNCA polls it had produced and have the final word on the matter. With a fundamentally different and more comprehensive methodology, the Pulse of the People Study was far more accurate.

But, at this point, things get murky.

When it released its Pulse of the People Study results under the headline “2016 Election: Poll of Polls” (see here), it presented the findings as born wholly of its own, distinctive methodology.

Briefly, the eNCA/Ipsos polls were carried out by phone, with a static sample of 2,500 people in the three metros, of whom 1,500 were phoned back every week for eight weeks. But the Pulse of the People Study was very different: a total of 3,861 face-to-face interviews were conducted nationally, from 17 June to 18 July. A wide range of other methodological differences distinguished the two polls. But Ipsos never suggested its final numbers were anything other than the product of its Pulse of the People Study.

In it, Ipsos found the following:

Johannesburg:
• ANC: 46%
• DA: 41%
• EFF: 8%

Tshwane:
• ANC: 47%
• DA: 43%
• EFF: 9%

Nelson Mandela Bay:
• ANC: 37%
• DA: 44%
• EFF: 6%

Suddenly – effectively overnight – the ANC had grown its support by 14 percentage points in Johannesburg, by 21 percentage points in Tshwane and by seven percentage points in Nelson Mandela Bay.

Nowhere in the Ipsos press release did it attempt to explain the discrepancy, even though it made reference to the eNCA polls.

On its website, eNCA would declare just three days after its final partnership poll: “In a massive turnaround, the ANC is back on track in Tshwane, neck-and-neck with the Democratic Alliance (DA) in the capital, according to the final comprehensive eNCA Ipsos poll.” (see here).

What it was trying to say, or suggest, was that in just three days the ANC’s support had mysteriously jumped by 21 percentage points in Tshwane, all without any reflection on what this said about its partnership polls with Ipsos. It was a world-class piece of spin.

Some dark methodological witchcraft

In a seemingly desperate attempt to obscure the discrepancy, the news channel stated online: “The ‘Poll of Polls’ merges the eight-week eNCA local election polls with an Ipsos national poll conducted last week” (see here).

That was mirrored by Mari Harris in an interview on 3 August (see here). She stated:

“We took a strategic decision with eNCA right up-front, to speak to people using their cell phones. They were first of all recruited under a process we call CAPI [sic] – computer assisted telephone interviewing – they were, we used a system called random digit dialling, which means no one chose the numbers, the computer generated the numbers. We then phoned the numbers and recruited people on this panel of voters, a voters’ panel basically, and there were about 3,000 people on this panel. And then every week, on the Monday and Tuesday, we would phone back at least 1,500 of them, speaking to them about different issues. We asked two questions that were the same every week: about their feelings about political parties. And we asked about five or six parties. Everybody had to answer for every party. And then we spoke about which party would you vote for, and then every week there was a different question as well. We projected these results to the people who have access to cell phones in the three different metros. We knew we were missing out, say, 5% to 10% of the population in those areas. And, throughout the eight weeks, there were between 15% to 20% of people who said they don’t know, or they wouldn’t tell the interviewer who they would vote for. And, obviously, that group really held the, sort of the key to this whole process. In the end, we married those results [the eNCA polls] – perhaps the best way I can explain it – to a national poll we undertook [the Pulse of the People study], of about 4,000 interviews, that were done face-to-face, in the homes and in the home languages of respondents, that were chosen randomly throughout the country. So, not only the big cities but also deep rural areas. And this combination of all these polls together is what our forecast is based upon.”
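The random digit dialling recruitment Harris describes is simple enough to picture. A minimal illustrative sketch in Python – the prefixes and screening logic below are assumptions for illustration only, since Ipsos’s actual sampling frame has not been published:

```python
import random

# Hypothetical mobile prefixes, for illustration only; the real
# sampling frame used by Ipsos is not public.
MOBILE_PREFIXES = ["060", "072", "073", "076", "082", "083"]

def random_mobile_number() -> str:
    """Random digit dialling: the computer, not a person, generates
    each candidate number, so selection within the frame is random."""
    suffix = "".join(str(random.randint(0, 9)) for _ in range(7))
    return random.choice(MOBILE_PREFIXES) + suffix

def recruit_panel(target_size: int, screens_into_metro) -> list[str]:
    """Dial random numbers and keep only respondents who screen into
    one of the three metros - which is why, as the methodology notes,
    many calls could not be used for the study."""
    panel = []
    while len(panel) < target_size:
        number = random_mobile_number()
        if screens_into_metro(number):  # in practice, a screening interview
            panel.append(number)
    return panel
```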

If true, Harris’s claim that the two sets of results were “married” is an astounding one. The two polls were fundamentally methodologically incompatible, and it would take some dark methodological witchcraft to “marry them.”

Certainly, in its press statement announcing the “Poll of Polls”, Ipsos made no attempt to explain how they were merged.

But, even if one assumes they were merged, because the eNCA/Ipsos polls had the ANC so low, the Pulse of the People Study would have had to have had the ANC too high, in order for the final, “married” numbers to balance each other out at a realistic level. And they would have had to be exceedingly high at that because, although Ipsos interviewed 3,861 people for the Pulse of the People Study, only a small portion of them – fewer than the 2,500 people in the eNCA/Ipsos polls – would have resided in the relevant metros, so the ANC’s support in that subsample would have had to be in the 60% range to balance out the eNCA/Ipsos ANC numbers. In other words, both polls would have had to have been significantly wrong on the ANC, each in the opposite direction.
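The weighted-average arithmetic makes the point concrete. A minimal sketch – the weights below are assumptions, since the true metro subsample sizes were never disclosed:

```python
def required_second_poll(merged: float, poll_a: float, weight_a: float) -> float:
    """If merged = weight_a * poll_a + (1 - weight_a) * poll_b,
    solve for what poll_b must have been."""
    return (merged - weight_a * poll_a) / (1 - weight_a)

# Tshwane: the final eNCA/Ipsos poll had the ANC on 26%; the published
# "Poll of Polls" figure was 47%. If the larger eNCA panel carried 60%
# of the weight (an assumed figure), the Pulse subsample would need:
print(required_second_poll(47.0, 26.0, 0.6))  # -> 78.5
# Even at an even 50/50 split, it would need the ANC on 68%:
print(required_second_poll(47.0, 26.0, 0.5))  # -> 68.0
```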

The truth is, they were never merged. The eNCA/Ipsos polls were too compromised.

What is more likely is that eNCA and Ipsos panicked and, in a last-minute attempt to save face, adopted Ipsos’s sounder, more accurate Pulse of the People Study as their final prediction. On its website, eNCA deliberately and disingenuously blurred the two together (see here), creating the impression they were one and the same and thus presenting the public with a more credible set of figures.

If they did indeed “marry those results”, quite how they did that Ipsos has never revealed. Nor is it ever likely to do so, because it would have been more magic than science.

Neither eNCA nor Ipsos has ever properly accounted for the misleading ANC figures they published for eight weeks in the lead-up to the 2016 election, or for the poor methodology that underpinned them. Quite the opposite – both organisations have obfuscated and misdirected to avoid any explanation.

How Ipsos evades accountability

On 21 June 2018, Ipsos released an equally inaccurate set of numbers in its Pulse of the People Study, this time regarding the DA, which it put on 28% in the Western Cape. That particular poll is not the subject of this Case Study, but suffice it to say that it drew sharp criticism from the Democratic Alliance, which raised the matter of the eNCA/Ipsos 2016 polls in turn.

In a piece for the Daily Maverick on 10 July, DA strategist Jonathan Moakes wrote of Ipsos’s poor track record: “This can be seen in the weekly polling conducted for eNCA in major metros in the lead-up to the 2016 local government elections. For example, they had the ANC on 26% two weeks before election day in Tshwane (their Week 8 results). The ANC ended up getting 41.25%. They were equally inaccurate in Johannesburg and Nelson Mandela Bay.”

[Moakes got the final poll date wrong: it was actually released on 28 July 2016, just five days before the election – a mistake which only makes his case stronger.]

In a later radio debate on 18 July 2018 between the DA and Ipsos (see here) about the quality of their polling, Antonia Squara, Research Manager at Ipsos, would say the following:

“So, the [2018] figures are a point in time measurement. These are trending data. So, I’m sure that as the listeners can appreciate, that, as human beings we are not static, we have so many thoughts and emotions in such a short space and time, so what we are going to think and feel in 2019, we are going to have to check in 2019. So, to illustrate how trends and how party behaviour affects voter choice, I am going to refer to the particular criticism that Jonathan Moakes made towards us, saying that we predicted 26% in the City of Tshwane and they received 41% of the vote in the 2014 [sic] municipal elections. Those were not the final results, they were a reflection of the people’s view when the ANC announced Thoko Didiza as the [ANC] mayoral candidate for the city of Tshwane and the city erupted in violence. I even remember seeing a meme which compared the scene to the poster for the apocalyptic movie, World War Z. So the ANC, after that, managed to recover, and did get 41% of the vote, and Ipsos predicted that they were to get 47% of the vote. So our predictions are actually quite on the mark, particularly with a municipal election like that, where large margins of error are expected. So people are changing, we do not have the same thoughts at all times.”

That was a profound nonsense, as the following eNCA/Ipsos Tshwane timeline reveals:

• 9 June 2016: First eNCA/Ipsos poll published: ANC: 27%
• 16 June 2016: Second eNCA/Ipsos poll published: ANC: 28%
• 17 June 2016: Ipsos Pulse of the People Study goes into field
• 20 June 2016: ANC announces Thoko Didiza as Tshwane mayoral candidate
• 23 June 2016: Third eNCA/Ipsos poll published: ANC: 27%
• 30 June 2016: Fourth eNCA/Ipsos poll published: ANC: 23%
• 7 July 2016: Fifth eNCA/Ipsos poll published: ANC: 26%
• 14 July 2016: Sixth eNCA/Ipsos poll published: ANC: 25%
• 18 July 2016: Ipsos Pulse of the People Study out of field
• 21 July 2016: Seventh eNCA/Ipsos poll published: ANC: 23%
• 28 July 2016: Eighth eNCA/Ipsos poll published: ANC: 26%
• 31 July 2016: Pulse of the People poll published: ANC: 47%
• 3 August 2016: Local Government Elections: ANC: 41.48%

Didiza was confirmed as the ANC’s mayoral candidate for Tshwane on 20 June 2016. Subsequent to that, eNCA/Ipsos ran no fewer than six of its eight weekly polls. Indeed, the gap between Didiza’s nomination and the final eNCA/Ipsos poll, on 28 July, was 38 days. Her nomination and the consequences thereof were thus both well-established events in the electoral marketplace by the time eNCA/Ipsos produced its final numbers.

In turn, the Pulse of the People Study went into the field just three days before Didiza’s nomination and ran for 28 days subsequent to it – so it too would have been measuring an event that was generally well-established in the electoral market, and yet it came up with an entirely different, and generally more accurate, set of numbers.

It doesn’t matter which way Ipsos cuts it – there was something profoundly wrong with the methodology of its eNCA polls. But it will not acknowledge or take responsibility for them. As Mari Harris would write (see here) in response to Moakes:

“Ipsos was criticised in the 2016 elections, with many saying that the DA’s strength was being overstated, while the ANC’s support base was being underestimated. In 2016, it was mostly ANC officials who took issue with these numbers and research methodology. However, ultimately these critics were proved wrong, with the final vote tally remarkably close to the predictions in the metros (and well within the margin of error), and on the head nationally. Unfortunately, these results have the potential to upset political parties when they show that support has waned for whatever reason.”

Certainly, Ipsos has never apologised to the ANC. Instead, both it and eNCA relied totally on the Pulse of the People poll as a gauge of their contribution – to the extent that, these days, Ipsos even flaunts that final poll as an example of how accurate its 2016 polling was.

In a particularly egregious example of this kind of misdirection, in a statement titled “Ipsos pre-election forecasting on the mark nationally” (see here) Ipsos states: “Ipsos in South Africa again delivered a very close forecast on the final local election results.” And while the statement mentions the eNCA weekly polls, no mention is made of the dire set of ANC percentages that defined them.

The whole episode was a profound indictment of both eNCA and Ipsos: eNCA for evading responsibility, Ipsos for never explaining why and how it got its methodology so wrong. But spin saved them both. Through a clever bit of last-minute wrangling and obscuring, eNCA and Ipsos pulled the wool over everyone’s eyes. And for two months over June and July 2016, both of them had broadcast a profound nonsense to millions of people about the ANC’s support levels in three electoral races, all of which eventually came down to the wire.

Conclusion

Ipsos will no doubt release a poll the week before the 8 May election. It will, no doubt, be more accurate than everything that has preceded it. And, no doubt, it will use that poll to once again mask the poor and misleading findings it has published in the year leading up to the election.

A contributing factor in all of this is the media’s inability to properly interrogate the methodology of polls. The fourth estate simply doesn’t have the requisite expertise, and so Ipsos is rarely asked the hard questions and, through jargon and misdirection, as in the radio interview above, it can take advantage of this shortcoming and gloss over any apparent problems easily enough. In countries where polling is taken far more seriously, and where bad polling carries far-reaching repercussions, things are very different.

In the United Kingdom, for example, the BBC has a series of editorial guidelines by which it must abide when publishing or reporting on any poll (see here), likewise for election periods specifically (see here). The relevant section reads:

The general rules and guidance applying to the reporting of polls need to be strictly applied during election campaigns. They are:
• not to lead a news bulletin or programme simply with the results of an opinion poll (especially a single voting intention poll);
• not to headline the results of a single voting intention poll unless it has prompted a story which itself deserves a headline and reference to the poll’s findings is necessary to make sense of it;
• not to rely on the interpretation given to a poll’s results by the organisation or publication which commissioned it, but to come to our own view by looking at the questions, the results and the trend;
• to report the findings of voting intentions polls in the context of trend.
The trend may consist of the results of all major polls over a period or may be limited to the change in a single pollster’s findings. Poll results which defy trends without convincing explanation should be treated with particular scepticism and caution;
• not to use language which gives greater credibility to the polls than they deserve: polls “suggest” but never “prove” or even “show”; it is important that other editorial judgements – eg which aspects of the election may be given more coverage – do not rely too heavily on what the polls may appear to be indicating.
• to report the expected margin of error if the gap between the contenders is within the margin. On television and online, graphics should always show the margin of error;
• to report the organisation which carried out the poll and the organisation or publication which commissioned it.

One can take issue with some of those, but the majority of them are important and helpful to the public. This is not how the South African press corps covers political polling. Thus, it is an environment in which bad polling is able to thrive – unchallenged and unable to be challenged by those tasked with reporting on it. This needs to change, and taking account of what polling companies have said and done over time is an important component of that.

eNCA/Ipsos Methodology [From eNCA website]

• Ipsos, Social & Market Research and Political Polling Specialists undertook an “establishment survey” which was launched at the end of May/beginning of June to recruit eligible voters in the three hotly contested metropolitan areas: the cities of Johannesburg and Tshwane and Nelson Mandela Bay metro. As mobile phone incidence in these areas is very high, it was decided to use mobile phone interviews for this project. The aim was to recruit a panel of eligible voters who would be asked to participate weekly in a CATI (Computer Assisted Telephone Interview) survey, focusing on their party choices and relevant campaign issues.
• As no lists or directory of mobile phone numbers is freely available, lists of mobile numbers were created by computer and these lists were used as the basis of a random digit dialling process to phone would-be respondents. Only respondents in the three metros were recruited, thus a large number of phone calls were made that could not be used for the study. However, it was important to follow a random selection procedure.
• In this first part of the process, the demographic details of individuals were recorded and pertinent questions about their views on the country and political parties were asked. One of the questions probed the party they voted for in the 2014 national election – chosen as a baseline because it was the most recent election in the country.
• A total of 2,500 panel members were recruited and every week 1,500 of them are phoned back for a 5-minute interview to answer questions on pertinent issues around the Local Government Elections. The results are representative of the opinions in each metropolitan area and are weighted and projected to reflect the views of the eligible voters in each area. These results should be evaluated within the margin of error. (All sample surveys are subject to a margin of error, determined by sample size, response rate and sampling methodology used.)
• Interviews were conducted on the Monday and Tuesday of each week, data was processed on the Wednesday and results were published on the Thursday.
• The margin of error for the results of the City of Johannesburg will be between 1.2 percent and 2.8 percent; for the City of Tshwane it is between 1.6 percent and 3.7 percent; and for Nelson Mandela Bay, between 2.5 percent and 5.7 percent. As opinion research is not an exact science, results will have to be evaluated keeping these margins of error in mind.
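On that last point, the conventional margin of error for a simple random sample is z·√(p(1−p)/n), which is why smaller metro subsamples carry wider margins. A minimal sketch in Python – the 1,500, 300 and 30% figures below are illustrative only:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error, in percentage points, for a simple random
    sample at the 95% confidence level: z * sqrt(p * (1 - p) / n).
    Real surveys adjust for design effects and weighting, which is
    presumably why Ipsos quotes a range rather than a single figure."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A weekly wave of 1,500 interviews, with a party polling at 30%:
print(round(margin_of_error(0.30, 1500), 1))  # -> 2.3 points
# A metro subsample of, say, 300 of those interviews:
print(round(margin_of_error(0.30, 300), 1))   # -> 5.2 points
```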

  • Gareth van Onselen (@GvanOnselen) is the Editor of Inside Politics (@insidepols)
