The Sydney Morning Herald and The Age – or the Herald/Age, to adopt what is evidently Nine Newspapers’ own preferred shorthand for its Sydney and Melbourne papers – have revealed their opinion polling will be put on ice for an indefinite period. They usually do that post-election at the best of times, but evidently things are more serious now, such that we shouldn’t anticipate a resumption of their Ipsos series (which the organisation was no doubt struggling to fund in any case).
This is a shame, because Ipsos pollster Jessica Elgood has been admirably forthright in addressing what went wrong – and, importantly, in identifying the need for pollsters to observe greater transparency, a quality that has been notably lacking from the polling scene in Australia. In particular, Elgood has called for the establishment of a national polling standards body along the lines of the British Polling Council, members of which are required to publish details of their survey and weighting methods. This was echoed in a column in the Financial Review by Labor pollster John Utting, who suggested such a body might be chaired by Professor Ian McAllister of the Australian National University, who oversees the in-depth post-election Australian Election Study survey.
On that point, I may note that I had the following to say in Crikey early last year:
The very reason the British polling industry has felt compelled to observe higher standards of transparency is that it would invite ridicule if it sought to claim, as Galaxy did yesterday, that its “track record speaks for itself”. If ever the sorts of failures seen in Britain at the 2015 general election and 2016 Brexit referendum are replicated here, a day of reckoning may arrive that will shine light on the dark corners of Australian opinion polling.
Strange as it may seem though, not everyone is convinced that Australian polling really put on all that bad a show last weekend. Indeed, no less an authority than Nate Silver of FiveThirtyEight has just weighed in with the following:
Polls showed the conservative-led coalition trailing the Australian Labor Party approximately 51-49 in the two-party preferred vote. Instead, the conservatives won 51-49. That’s a relatively small miss: The conservatives trailed by 2 points in the polls, and instead they won by 2, making for a 4-point error. The miss was right in line with the average error from past Australian elections, which has averaged about 5 points. Given that track record, the conservatives had somewhere around a 1 in 3 chance of winning.
When journalists say stuff like that in an election after polls were so close, they’re telling on themselves. They’re revealing, like their American counterparts after 2016, that they aren’t particularly numerate and didn’t really understand what the polls said in the first place.
I’m not quite sure whether to take greater umbrage at Silver’s implication that Antony Green and Kevin Bonham “aren’t particularly numerate”, or that they are – huck, spit – “journalists”. The always prescient Dr Bonham managed a pre-emptive response:
While overseas observers like Nate Silver pour scorn on our little polling failure as a modest example of the genre and blast our media for failing to anticipate it, they do so apparently unfamiliar with just how good our national polling has been compared to polling overseas.
And therein lies the rub – we in Australia have been rather spoiled by the consistently strong performance of Newspoll’s pre-election polls especially, which have encouraged unrealistic expectations. On Saturday though, we saw the polls behaving no better, yet also no worse, than polling does generally.
Indeed, this would appear to be true even in the specifically Australian context, so long as we take a long view. Another stateside observer, Harry Enten, has somehow managed to compare Saturday’s performance with Australian polling going all the way back to 1943 (“I don’t know much about Australian politics”, Enten notes, “but I do know something about downloading spreadsheets of past poll data and calculating error rates”). Enten’s conclusion is that “the average error in the final week of polling between the top two parties in the first round” – which I take to mean the primary vote, applying the terminology of run-off voting of the non-instant variety – “has been about five points”.
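For those curious where Silver’s “1 in 3” comes from, it can be reproduced with a back-of-envelope calculation. What follows is a sketch, not anything Silver or Enten have published: it assumes the polling error on the two-party-preferred margin is roughly normally distributed, and converts the quoted average (absolute) error of about five points into a standard deviation accordingly.

```python
import math
from statistics import NormalDist

# Silver and Enten both cite an average historical polling error of
# about 5 points on the margin between the top two parties. For a
# normal distribution, mean absolute error = sigma * sqrt(2/pi),
# so an average error of 5 implies sigma of roughly 6.3 points.
avg_abs_error = 5.0
sigma = avg_abs_error / math.sqrt(2 / math.pi)

# The final polls had Labor ahead 51-49, i.e. a polled margin of +2.
# The Coalition wins if the true margin turns out to be below zero.
polled_margin = 2.0
p_coalition_win = NormalDist(mu=polled_margin, sigma=sigma).cdf(0.0)
print(round(p_coalition_win, 2))  # roughly 0.37 – about 1 in 3
```

On those assumptions, a two-point deficit against a six-point-odd standard deviation leaves the trailing side with a little over a one-in-three chance – which is the nub of Silver’s complaint that the result should not have been treated as a foregone conclusion.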