Horserace polls have been controversial for as long as the science and art of polling has existed. Justin Ling has a good recap of the history here, so I won’t repeat it. Despite some very dramatic examples of what are called “outlier” polls, we have largely seen polling consensus in most political races across all countries and levels of government.

A good example of “poll consensus” is the most recent BC election of May 2017. There were seven polling firms in the field in the closing weeks of the election. Among the final published polls of the seven firms, the largest lead for the BC NDP was 2 points and the largest lead for the BC Liberals was 3 points, a 5-point spread. It was an unusual election that saw the rise of the Green Party in BC, which captured 3 seats in the BC Legislature. The largest divergence was in the numbers posted for the BC Greens, which ranged from as low as 15% to as high as 23%.

Although many different predictions were made for the outcome, ranging from a BC Liberal majority to a BC NDP majority and everything in between, the numbers posted by the seven firms all came in within the margin of error. This included two exclusively IVR polls, one IVR and online blend, three exclusively online polls, and one live-agent phone and online blend. As has been noted many times before, the mode effect was minimal in the polls conducted for the BC election.
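For readers who want to check the arithmetic on “within the margin of error,” here is a minimal sketch of the standard calculation for a proportion from a simple random sample. The sample size and support level below are hypothetical, not figures from the actual BC polls.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured
    in a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a party at 40% support in a sample of 800 respondents.
moe = margin_of_error(0.40, 800)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.4 points
```

Note that the margin of error on the gap between two parties is wider than the margin on either party’s individual number, which is why a 5-point spread between firms can still be consistent with ordinary sampling noise.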

Similar poll consensus can be seen in the most recent Nova Scotia, Saskatchewan, Manitoba and Alberta elections between May 2015 and June 2017. In the US during the last presidential election, we largely saw similar consensus, with most firms posting anywhere from Trump +2 to Clinton +4 in the closing days ahead of the November 8th vote. US polling tends to be less divergent because of regulation: only live-agent polls can contact both cellular and landline numbers, leading to an absence of mode effect.

But that was then. We’ve started to see a very different polling landscape in the fall of 2017.

This past fall, we started to see greater divergence in poll results, and in some cases this does not appear to be related to mode; it may be something a bit more sinister. Take Alabama’s recent special Senate election: polls in the closing weeks showed Republican Roy Moore leading by as much as 9%, while others showed Democrat Doug Jones leading by as much as 10%, a 19-point spread. The result was a 1.5% win for Jones. In Calgary, we had challenger Bill Smith leading by 10% using IVR, while both online panel polls in the field had the incumbent Nenshi leading by 21%, a whopping 31-point spread. The result was a 7% margin for Nenshi.

In Ontario, we are seeing similar divergence among polls, leading some to speculate that certain modes can no longer be trusted over others. As a long-time advocate of IVR, I still believe it can be an effective mode, but take any pollster who says their preferred mode is better or more scientific with a grain of salt; we all have an inherent bias toward our preferred mode. The truth is that a blend of modes will likely prove more accurate than any single mode, and poll aggregators like Eric Grenier, Bryan Breguet, Nate Silver and others will become indispensable.
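To make the aggregation point concrete, here is a minimal sketch of the crudest possible aggregator: an unweighted average of the latest poll from each firm, blending whatever modes happen to be in the field. The firms, modes and numbers are invented for illustration; real aggregators like the ones named above also weight by sample size, recency and a firm’s past accuracy.

```python
from collections import defaultdict

# Hypothetical final polls: (firm, mode, party shares in %)
polls = [
    ("Firm A", "IVR",    {"PC": 42, "NDP": 27, "Liberal": 22}),
    ("Firm B", "online", {"PC": 38, "NDP": 30, "Liberal": 24}),
    ("Firm C", "phone",  {"PC": 40, "NDP": 28, "Liberal": 25}),
]

# Unweighted average across firms and modes -- the simplest possible blend.
totals = defaultdict(float)
for _firm, _mode, shares in polls:
    for party, share in shares.items():
        totals[party] += share

average = {party: total / len(polls) for party, total in totals.items()}
print(average)  # {'PC': 40.0, 'NDP': 28.33..., 'Liberal': 23.66...}
```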

To speculate on what this effect might be, I will again reference the report by Justin Ling, which hints at this phenomenon in the introduction. It is the relatively new factor of candidates and campaigns directly engaging with polling firms, polls and the media that report them. This was started by Donald Trump in his race for the Republican nomination, where he praised the polls showing him doing well, and continued during the election, where he dismissed and discredited polls he disagreed with. He continues to do this in office as President, mainly using his Twitter account. This has created a new type of social desirability bias that we have not seen before. Trump supporters may not only be “shy” with pollsters, as was suggested in 2016; they may in fact be actively refusing to participate in polls, especially those conducted by polling firms known to show poor results for the President. What has long been documented as “house effect” and/or “house bias” may now be amplified by the social desirability of responding to particular polls, or particular types of polls.
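House effects are typically measured as a firm’s systematic deviation from the average of all polls in the field. A minimal sketch of that idea, with made-up support numbers for a single party, assuming every poll is measuring the same race over the same period:

```python
# Hypothetical support levels for one party across repeated polls by each firm.
polls_by_firm = {
    "Firm A": [38, 39, 37, 40],
    "Firm B": [33, 34, 32, 33],
    "Firm C": [36, 35, 37, 36],
}

# Baseline: the average of every poll from every firm.
all_readings = [x for readings in polls_by_firm.values() for x in readings]
baseline = sum(all_readings) / len(all_readings)

# A firm's house effect: how far its own average sits from that baseline.
for firm, readings in polls_by_firm.items():
    firm_average = sum(readings) / len(readings)
    print(f"{firm}: house effect {firm_average - baseline:+.1f} points")
```

If the supporters of one candidate start refusing certain firms’ polls, that deviation is no longer just methodology; it becomes partly a reflection of who is willing to pick up for whom.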

Here in Canada, our post-election surveys for the Calgary election showed us that not only did Naheed Nenshi voters not respond in sufficient numbers to our IVR polls, but that each poll after the first had more and more Nenshi supporters hang up. Perhaps the intense criticism, some of it quite fair as I’ve pointed out, itself led to the inability to get a truly representative sample; a “self-fulfilling prophecy” effect of sorts. Our most recent polling in the Calgary-Lougheed by-election used samples from two separate firms to avoid the possible combination of “house effect” and social desirability bias. Although the results of those samples did not vary drastically, it is important to note that the NDP did score lower in the sample conducted by Mainstreet, where they polled in third place, than in the sample conducted by a third-party vendor. This bears further study in future elections, especially ones where polls diverge significantly or where one firm or mode receives significantly more criticism than others.
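To illustrate how that kind of differential response can skew a topline even when the sampling frame itself is fine, here is a small back-of-the-envelope simulation. Every number in it is invented; the point is only that if one candidate’s supporters complete the survey at a lower rate, the finished sample tilts toward the other candidate.

```python
# Hypothetical true support in the voting population (as proportions).
true_support = {"Candidate A": 0.51, "Candidate B": 0.44, "Other": 0.05}

# Hypothetical probability that a supporter of each candidate completes the call.
response_rate = {"Candidate A": 0.04, "Candidate B": 0.08, "Other": 0.05}

# Expected composition of the completed sample under those assumptions.
raw = {c: true_support[c] * response_rate[c] for c in true_support}
total = sum(raw.values())
observed = {c: 100 * v / total for c, v in raw.items()}

for candidate, share in observed.items():
    print(f"{candidate}: {share:.1f}%")
# Candidate A leads in reality but trails badly in the completed sample.
```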

With an increasingly sophisticated and sensitive electorate, and increasingly activist campaigns and candidates directly interacting with polls and the media who report them, this may be just the beginning of an era without poll consensus. How pollsters, and especially the media, respond to these new challenges of divergence will define the industry for years to come.