-28 points

Silver made a prediction. That’s the deliverable. The prediction was wrong.

Nobody is saying that statistical theory was disproved. But it’s impossible to tell whether Silver applied the theory correctly, and it doesn’t even matter. When a Boeing airplane loses a door, that doesn’t disprove physics, but it does mean that Boeing got something wrong.

14 points

but it does mean that Boeing got something wrong.

Comparing it to Boeing shows you still misunderstand probability. Suppose his model covers 4 separate elections where each underdog candidate has a 1 in 4 chance of winning. If only 1 of those underdog candidates wins, then the model is likely working. But when that candidate wins, everyone will say “but he said it was only a 1 in 4 chance!” It’s as dumb as people being surprised by rain when the forecast says a 25% chance of rain. As long as you only get rain 1/4 of the time with that prediction, the model is working. Presidential elections are tricky because there are so few of them, so forecasters test their models against past data to verify they are working. But it’s just probability: it’s not saying this WILL happen, it’s saying these are the odds at this snapshot in time.
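
You can see the principle with a rough simulation (this is only an illustration of calibration, not anything from Silver’s actual model): generate a lot of events that are each forecast at a 25% chance and check how often they happen.

```python
import random

# Illustration of calibration only, not Silver's model: simulate many events
# that a forecaster has each assigned a 25% probability, then check the hit rate.
random.seed(42)
n_forecasts = 10_000
hits = sum(random.random() < 0.25 for _ in range(n_forecasts))

print(f"events forecast at 25%: {n_forecasts}")
print(f"fraction that happened: {hits / n_forecasts:.3f}")  # ~0.25 if calibrated
```

Plenty of those 25% events still happen; any single one happening tells you nothing. The forecast is only broken if, across many such forecasts, the hit rate drifts well away from 25%.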

-1 points

Presidential elections are tricky because there is only one prediction.

Suppose your model says Trump has a 28% chance of winning in 2024, and mine says Trump has a 72% chance of winning in 2024.

There will only be one 2024 election. And suppose Trump loses it.

If that outcome doesn’t tell us anything about the relative strength of our models, then what’s the point of using a model at all? You might as well write a single line of code that spits out “50% Trump”; it would be equally useful.

The point of a model is to make a testable prediction. When the TV predicts a 25% chance of rain, that means that it will rain on one fourth of the days that they make such a prediction. It doesn’t have to rain every time.

But Silver only makes a 2016 prediction once, and then he makes a new model for the next election. So he has exactly one chance to get it right.
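
For what it’s worth, the 28%-versus-72% hypothetical above can be worked through directly; a minimal sketch, assuming equal prior weight on the two models (the numbers are just the ones from this comment):

```python
# Hypothetical from above: model A gives Trump a 28% chance in 2024, model B gives 72%,
# and Trump loses. Assume equal prior weight on both models before the election.
p_loss_A = 1 - 0.28   # probability model A assigned to the observed outcome
p_loss_B = 1 - 0.72   # probability model B assigned to the observed outcome

likelihood_ratio = p_loss_A / p_loss_B          # ~2.57 in favor of model A
posterior_A = p_loss_A / (p_loss_A + p_loss_B)  # ~0.72 with a 50/50 prior

print(f"likelihood ratio (A over B): {likelihood_ratio:.2f}")
print(f"posterior weight on model A: {posterior_A:.2f}")
```

On those assumptions, a single result shifts the odds between the two models by roughly 2.5 to 1, which is some information but nowhere near a settled verdict.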

2 points

His model has always been closer, state to state and election to election, than anyone else’s, which is why people use his models. He is basically using the same model and tweaking it each time; you make it sound like he’s starting over from scratch. When Trump won, none of the prediction models were predicting he would win, but his at least showed a fairly reasonable chance he could. His competitors were forecasting a much more likely Hillary win, while he was showing that Trump would win basically 3 out of 10 times. In terms of probability that’s not a blowout prediction. His model was working better than his competitors’. Additionally, he basically predicted the battleground states within a half percentage point, iirc, and that happened to be the difference between a win and a loss in some states.

So he has exactly one chance to get it right.

You’re saying that it hitting one of those 3 in 10 is “getting it wrong”, and that’s the problem with your understanding of probability. By saying that, you’re showing that you don’t actually internalize the purpose of a predictive model’s forecast. It’s not a magic wand, it’s just a predictive tool. That tool is useful if you understand what it’s really saying, instead of extrapolating something it absolutely is not saying. If a model says something will happen 3 of 10 times, that thing happening is not evidence of an issue with the model. A flawless model with ideal inputs can still show a 3 in 10 chance and should hit in 30% of scenarios. Because we have a limited number of elections it’s hard to prove the model, but considering he has come closer than his competitors, it certainly seems he knows what he is doing.

10 points

Silver made a prediction. That’s the deliverable.

I see what you’re not getting! You are confusing giving the odds with making a prediction, and those are two very different things.

Let’s go back to the coin flips, maybe it’ll make things more clear.

I or Silver might point out there’s a 75% chance of anything besides two heads in a row happening (which is accurate). If, as will happen 1 in 4 times, two heads in a row does happen, does that somehow mean the odds I gave were wrong?

Same with Silver and the 2016 election.
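
For reference, that 75% comes from simply enumerating the four equally likely outcomes of two fair-coin flips (fair coins being the assumption here):

```python
from itertools import product

# Enumerate the four equally likely outcomes of two fair coin flips.
outcomes = list(product("HT", repeat=2))                  # HH, HT, TH, TT
not_two_heads = [o for o in outcomes if o != ("H", "H")]  # everything except HH

print(len(not_two_heads) / len(outcomes))                  # 0.75
```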

-2 points

I or Silver might point out there’s a 75% chance of anything besides two heads in a row happening (which is accurate).

Is it?

Suppose I gave you two coins, which may or may not be weighted. You think they aren’t, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.

We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?

In the real world, your odds will depend on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.

But suppose we only have one chance to toss them, after which they shatter. In that case, the model we use for the coins, weighted vs unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but the internal workings of a one-shot model - including odds - are unfalsifiable. Same with Silver and the 2016 election.
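
To put a rough number on how little one toss settles, here is the Bayes-factor arithmetic for the two-coin hypothetical above (assuming each coin is either fair or weighted 2:1 towards heads, and that the coins are independent):

```python
# Two-coin hypothetical: are the coins fair, or each weighted 2:1 towards heads?
# The single toss of both coins comes up two heads.
p_hh_fair = 0.5 * 0.5              # 0.25  - chance of HH if both coins are fair
p_hh_weighted = (2 / 3) * (2 / 3)  # ~0.44 - chance of HH if both are weighted 2:1

bayes_factor = p_hh_weighted / p_hh_fair   # ~1.78 in favor of "weighted"

print(f"P(HH | fair)     = {p_hh_fair:.3f}")
print(f"P(HH | weighted) = {p_hh_weighted:.3f}")
print(f"Bayes factor     = {bayes_factor:.2f}")
```

A factor of about 1.8 nudges you towards “weighted” but proves nothing either way, which is the one-shot problem in miniature.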

6 points

The thing is, Nate Silver did not make a prediction about the 2016 race.

He said that Hillary had a higher chance of winning. He didn’t say Hillary was going to win.

8 points

Silver made a prediction. That’s the deliverable. The prediction was wrong.

Would you mind restating the prediction?

-11 points

He predicted Clinton would win. That’s the only reasonable prediction if her win probability was over 50%.

12 points

If I say a roll of a 6-sided die has a >50% chance of landing on a number above 2, and after a single roll it lands on 2, was I wrong?

If anything, the problem is in the unfalsifiability of the claim.

4 points

It’s forecasting, not a prediction. If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained, would you say the forecast was wrong? You could say that if you want, but the point isn’t to give a definitive prediction of the outcome (because that’s not possible); it’s to give you an idea of what to expect.

If there’s a 28% chance of rain, it doesn’t mean it’s not going to rain; it actually means you might want to consider taking an umbrella with you, because there’s a significant probability it will rain. If a batter with a .280 batting average comes to the plate with 2 outs at the bottom of the ninth, that doesn’t mean the game is over. If a politician has a 28% probability of winning an election, it’s not a statement that the politician will definitely lose the election.

-4 points

If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained would you say the forecast was wrong?

Is it possible for the forecast to be wrong?

I think so. If you look at all the times the forecast predicts a 28% chance of rain, then it should rain on 28% of those days. If it rained on, say, half the days where the forecast gave a 28% chance of rain, then the forecast would be wrong.

With Silver, the same principle applies. Clinton should win at least 50% of the 2016 elections where she has at least a 50% chance of winning. She didn’t.

If Silver kept the same model over multiple elections, then we could look at his probabilities in finer detail. But he doesn’t.
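
That check is essentially a calibration test. A minimal sketch of what it would look like, assuming you had a history of (forecast probability, outcome) pairs to feed it (the numbers below are made up for illustration):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group forecasts by their stated probability and compare to the observed frequency."""
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[p].append(happened)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Hypothetical forecast history: stated probability of rain and whether it rained (1/0).
forecasts = [0.28, 0.28, 0.28, 0.28, 0.70, 0.70, 0.70, 0.70, 0.70]
outcomes  = [0,    1,    0,    0,    1,    1,    0,    1,    1]

for p, observed in calibration_table(forecasts, outcomes).items():
    print(f"forecast {p:.0%}: observed frequency {observed:.0%}")
```

With many forecasts per probability level, the stated and observed frequencies should line up; with one election per model, the table has a single row with a single data point in it, which is the whole complaint.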

3 points

How about this:

Two people give the odds for the result of a flip of a non-weighted coin.

Person A: Heads = 50%, Tails = 50%

Person B: Heads = 75%, Tails = 25%

The result of the coin flip ends up being Heads. Which person had the more accurate model? Did Person A get something wrong?
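
One common way to make “more accurate” concrete is a proper scoring rule such as the Brier score (my framing, not something anyone above used): lower is better, a single flip can favor the wrong forecaster, but many flips will not.

```python
import random

def brier(p_heads, landed_heads):
    """Squared error between the stated probability of heads and what actually happened."""
    return (p_heads - (1.0 if landed_heads else 0.0)) ** 2

# The single flip in question: a fair coin that happens to land heads.
print(brier(0.50, True))   # Person A: 0.25
print(brier(0.75, True))   # Person B: 0.0625  <- B scores better on this one flip

# Averaged over many flips of a genuinely fair coin:
random.seed(0)
flips = [random.random() < 0.5 for _ in range(100_000)]
print(sum(brier(0.50, f) for f in flips) / len(flips))  # ~0.25    - Person A
print(sum(brier(0.75, f) for f in flips) / len(flips))  # ~0.3125  - Person B, worse on average
```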

-1 points

Person B’s predicted outcome was closer to the truth.

Perhaps Person A’s prediction would improve if multiple trials were allowed. Perhaps their underlying assumptions are wrong (i.e. the coin is actually weighted).

4 points

Perhaps Person A’s prediction would improve

But in this hypothetical scenario of an explicitly unweighted coin, Person A was entirely correct in the odds they gave. There’s nothing to improve.
