I have noted a few times on this blog the hazards of putting too much faith in studies conducted with mathematical models. See this 2011 post on the interesting book Models Behaving Badly and this one from last year on the disastrous failure of a sports prediction model.
One good thing about election prediction models is that we get quick and conclusive results about whether they are right or wrong. Charles Forelle has this story in the WSJ about the meltdown of British pollsters in last week's election.
The website FiveThirtyEight, run by statistics guru Nate Silver, who made his name with accurate predictions of U.S. presidential elections, used a model developed by British academics. It came up with 278 seats for the Conservatives and 267 for Labour, and put the probability at 90% that the Conservatives would win between 252 and 305 seats. They won 331. The miss of 53 seats was double the margin of what was supposed to be a 90% confidence interval. That's more than failure; that's crash and burn.
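To put a rough number on "crash and burn," here is a back-of-envelope sketch of my own (not from the WSJ story or FiveThirtyEight), assuming the published 90% interval came from a roughly normal forecast distribution. Under that assumption, the actual result landed more than three standard deviations above the central prediction, about a 1-in-2,000 event by the model's own reckoning.

```python
from math import erf, sqrt

# Back-of-envelope check, assuming (my assumption, not the model's stated
# method) that the 90% interval of 252-305 seats reflects a roughly normal
# forecast distribution centered on the point prediction.
lo, hi = 252, 305     # published 90% interval for Conservative seats
point = 278           # central prediction
actual = 331          # seats the Conservatives actually won

z90 = 1.645                      # z-score bounding a two-sided 90% interval
sigma = (hi - lo) / (2 * z90)    # implied standard deviation: ~16.1 seats
z = (actual - point) / sigma     # size of the miss in sigmas: ~3.3

# One-sided probability of a miss this large under the normal assumption
tail = 0.5 * (1 - erf(z / sqrt(2)))
print(f"sigma = {sigma:.1f} seats, z = {z:.1f}, tail probability = {tail:.4f}")
# prints: sigma = 16.1 seats, z = 3.3, tail probability = 0.0005
```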
The reason I bring this up is to remind everyone to be skeptical when someone asserts that some proposition is proven because "studies show" it to be true when the studies are based on mathematical modeling. Such models can provide indications but not definitive proof, and it is a serious error to rely on them entirely.
