Pretty much everyone got it wrong, terribly wrong.
CNN released 7 battleground state polls in the last 48 hours before the election:
Wisconsin - Harris 51%-45% (+6)
Michigan - Harris 48%-43% (+5)
Arizona - Harris 48%-47% (+1)
North Carolina - Harris 48%-47% (+1)
Pennsylvania - Harris 48%-48% (tied)
Nevada - Harris 47%-48% (-1)
Georgia - Harris 47%-48% (-1)
All very wrong. CNN was not alone.
All the professional pollsters got it wrong, some very badly, and so did the forecasters. Most, like Nate Silver, called it essentially a coin toss. It wasn't a coin toss. In the final analysis, the respected 538.com aggregate of polls had Harris ahead nationally by a little more than one percent. I know that doesn't sound like much, and it sits well within the so-called statistical 'margin of error' - a euphemism meaning the result could be off by as much as 6-8 points in either direction: the published margin applies to each candidate's individual share, typically plus or minus 3-4 points, so the gap between the candidates can be off by roughly double that. In other words, pretty much useless. Turns out it was.
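To see where that 6-8 point figure comes from, here is a minimal sketch of the standard 95% margin-of-error calculation. The sample size of 700 is a hypothetical, typical of statewide polls, and the formula assumes a clean random sample - real polls add weighting and design effects that it ignores:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a reported share p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical statewide poll: a candidate at 48% among 700 respondents.
p, n = 0.48, 700
moe = margin_of_error(p, n)
print(f"Each candidate's share: +/- {moe:.1%}")      # about +/- 3.7 points
print(f"Gap between candidates: +/- {2 * moe:.1%}")  # about +/- 7.4 points
```

A poll showing a 1-point lead with that sample size is, statistically speaking, consistent with anything from a comfortable win to a comfortable loss.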
J. Ann Selzer, the lauded pollster whose forecasts of Iowa elections have been uncannily accurate for decades, got it horribly wrong. She came out with her final poll for the Des Moines Register on the weekend before the election showing Harris up by 3 points in a deep-red state trump won in 2020 by 8 points. The result sent shock waves through the pundit class. In the end, trump won Iowa by more than 13 points, a 16-point difference from Selzer's result.
Another state maven, Jon Ralston, editor of The Nevada Independent, is respected for understanding and accurately forecasting Nevada's unique voter patterns. He doesn't use mathematical modelling like Selzer; nonetheless, he wasn't shy about predicting the outcome of this election to a tenth of a percent, with Harris winning 48.5 percent to trump's 48.2. His reasoning was that it all depended on how non-major-party voters would break, and he believed they would break for Harris. He was way off. So far, trump is beating Harris by more than 4 percent in Nevada (the race hasn't been officially called yet).
Top data gurus got it terribly wrong too. Analyst Tom Bonier of the political-data firm TargetSmart specializes in analyzing early-vote numbers. He said the hard data of actual votes cast (not models of 'likely voters') showed a significant gap between women and men in early voting, and that the numbers pointed to abortion as a huge motivator that could make the difference for Harris. It didn't. The gender gap turned out to be more or less a wash.
So the scientific modellers, the data analysts, and the pundit-journalists all got it completely wrong. Surely the academics fared better? Well, a certain very prominent one didn't.
Allan Lichtman is the renowned historian some have called the 'Nostradamus of US elections'. He developed The 13 Keys to the White House, which had an unblemished record of predicting the final result of presidential elections since the early 1980s (with one exception in 2000, the so-called 'hanging-chad' election decided by the Supreme Court, which ended when the count was abruptly ordered stopped with a 537-vote margin in favour of George W. Bush). Lichtman brashly says that we can throw the polls in the trash because they aren't worth a damn thing. He was right about that. He claims that it's governance that matters, not the candidates (barring a once-in-a-lifetime superstar like FDR, Kennedy, or Obama) and not campaigning. On the face of it, his method sounds questionable, because it implies that, in theory, campaigning can never change the outcome of an election - so why waste all that time and those resources? But the method was developed by examining 165 years of presidential election history - 'from the days of the buggy whip', as he puts it - and that long baseline, he argues, is what makes the system robust over time. In September, before the debate (because debates don't matter), he announced that according to the 13 Keys, Harris was a shoo-in. This time his method was just as wrong as those useless polls.
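For readers unfamiliar with how the Keys work mechanically, the system reduces to a simple checklist-and-threshold rule: each key is a true/false statement that favours the incumbent party when true, and if six or more keys are false, the incumbent party is predicted to lose. A minimal sketch is below (the key names are abbreviated, and the true/false calls in the example are hypothetical - making those calls is the subjective part, which is exactly where Vexler later argued Lichtman had gone astray):

```python
# Lichtman's 13 Keys, abbreviated. Each key is True when conditions
# favour the incumbent party.
KEYS = [
    "party mandate", "no primary contest", "incumbent running",
    "no third party", "strong short-term economy", "strong long-term economy",
    "major policy change", "no social unrest", "no scandal",
    "no foreign/military failure", "major foreign/military success",
    "charismatic incumbent", "uncharismatic challenger",
]

def predict(calls: dict[str, bool]) -> str:
    """Apply the threshold rule: six or more false keys sink the incumbents."""
    false_count = sum(1 for key in KEYS if not calls[key])
    return "challenger wins" if false_count >= 6 else "incumbent party wins"

# Hypothetical assessment: start with every key true, then turn five false.
calls = {key: True for key in KEYS}
for key in ["party mandate", "no primary contest", "incumbent running",
            "strong long-term economy", "charismatic incumbent"]:
    calls[key] = False
print(predict(calls))  # "incumbent party wins" - five false keys, one short
```

The arithmetic is trivial; everything rides on the thirteen subjective judgments that feed it.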
Notably, the online offshore betting markets turned out to be more accurate than the professional pollsters. I chalk that up to luck rather than insight.
The one respectable commentator who got it right was a fellow from the UK on YouTube named Vlad Vexler, who calls himself 'a baby public intellectual'. I started following him at the start of the war in Ukraine because he has a special interest in Russian politics and great insight into what makes Putin tick. Vexler said he thought trump always had the advantage, even after Biden left the race, and he never wavered from his belief that trump would win. At one point he even looked at Lichtman's 13 Keys and showed how the inventor could be misusing his own system; by Vexler's reading, the Keys actually predicted a trump victory. As a social scientist/political philosopher, Vexler takes a broad historical view, believing that there are identifiable and undeniable trends that compel events, forces that cannot easily be knocked off course (except by a cataclysm such as a world war). The salient political trend of recent decades, he argues, is 'democratic backsliding', precipitated by the retreat of the US from the global order it established after 1945. The trend has been accelerating, due in part to profound changes in the information environment; he calls the period we are living through Post-Truth Populism. In 2020 he believed trump would have been re-elected easily if not for the global pandemic. But the pandemic didn't halt the historical trend of democratic backsliding; it was a bump in the road. He therefore predicted that, barring another such major event, trump would win handily in 2024. He nailed it.
Notwithstanding his expertise in Russian politics, I thought Vexler was completely off base. There was no way, I reasoned, that you could apply analytical tools built for broad historical events to the particularities of this US election. Turns out I was wrong: he could, and with accuracy.
This is the third election in a row - the fifth if you count the 2018 and 2022 midterms, especially the one where the 'red wave' never materialized - in which the polls were inaccurate. Is it something about trump that makes his candidacy uniquely difficult to model? Is it something about our information (and disinformation) environment that makes gathering accurate responses to polling unusually hard? Is it a psycho-social phenomenon, like herd mentality, in which analysts and pollsters are afraid to stray too far from their competitors' margins and so 'herd' their results toward the pack? Is it all of the above?
Whatever the answer(s) - and I don't have one - I can only say that in my experience, social media algorithms seem to have a lot to do with the skewing. This election was a sort of litmus test for me: a test of my ability to stay objective and seek out accurate information on the state of the race so I could make up my own mind about the outcome. I failed the test miserably. I was caught in my own information bubble. All of my sources of data and analysis reinforced my hopes and beliefs about what I thought would happen. My opinions hardened. When one of my favourite commentators, someone whom I respected tremendously for his insight, voiced a dissenting perspective, I thought he was off his rocker and paid him no further heed on the subject (although I remain an avid watcher of his channel). He was right and I was wrong. The experience has taught me something meaningful about myself, and about the subtle and not-so-subtle dangers of the world we live in.