He didn’t get it wrong. He said the Clinton-Trump election was a tight horse race, and that Trump’s chances were roughly one face of a four-sided die (see the sketch below).
The state-by-state data wasn’t far off.
Problem is, people don’t understand statistics.
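To put the four-sided-die point in concrete terms, here is a minimal simulation. The 29% figure is illustrative, close to the chance Silver’s final 2016 forecast is generally reported to have given Trump; the exact number doesn’t change the point.

```python
# Rough sketch of the "four-sided die" point: if a model gives an underdog
# roughly a 29% chance, the underdog winning one election is an unremarkable
# outcome, not proof the model was broken.
import random

random.seed(0)
p_underdog = 0.29          # assumed forecast probability (illustrative)
trials = 100_000           # simulated "elections"

underdog_wins = sum(random.random() < p_underdog for _ in range(trials))
print(f"Underdog wins {underdog_wins / trials:.1%} of simulated elections")
# Prints roughly 29% -- about as often as rolling a given face of a 4-sided die.
```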
If someone said Trump had over a 50% probability of winning in 2016, would that be wrong?
In statistical modeling you don’t really have right or wrong. You have a level of confidence in a model, a level of confidence in your data, and a statistical probability that an event will occur.
So if my model says RFK has a 98% probability of winning, then it is no more right or wrong than Silver’s model?
If that were so, probability would be useless. But it isn’t: probability is useful precisely because it makes predictions that can be tested against reality.
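One standard way that testing is done is with a proper scoring rule such as the Brier score. A minimal sketch, with entirely made-up forecasts and outcomes: a forecaster who says 98% about events that don’t happen is penalized far more heavily than one whose probabilities roughly track reality.

```python
# Sketch of scoring probabilistic forecasts against what actually happened.
# Forecasts and outcomes below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes
    (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical events: 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]

# Forecaster A gives hedged, calibrated-looking probabilities;
# Forecaster B is always near-certain, and sometimes badly wrong.
forecaster_a = [0.7, 0.3, 0.2, 0.6, 0.4, 0.3, 0.1, 0.8, 0.3, 0.2]
forecaster_b = [0.98, 0.98, 0.02, 0.98, 0.98, 0.02, 0.02, 0.98, 0.98, 0.02]

print("A:", brier_score(forecaster_a, outcomes))   # ~0.08 (lower, better)
print("B:", brier_score(forecaster_b, outcomes))   # ~0.29 (higher, worse)
```

That is why a model giving RFK a 98% chance and a model that gave Trump roughly a 1-in-4 chance are not equally “right or wrong”: over many predictions, scoring rules and calibration checks separate them, even though no single outcome settles it.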
In 2016, Silver’s model predicted that Clinton would win, which was wrong. He knew his model was wrong, because he adjusted it after 2016. Why change something that is working properly?
Just for other people reading this thread, the following comments are an excellent case study in how an individual (the above poster) can be so confidently mistaken, even when other posters try to patiently correct them.
May we all be more respectful of our own ignorance.