JohnStuartMill wrote:
Why does your bolded statement follow logically but not the italicized one? (Hint: yours doesn't.)
D-K exemplar,
This is not difficult. It is a fact that the probability of obtaining a head in a fair coin toss is 0.5 (+/- epsilon, possibly). That's
objective; the hypothetical you (who also has "just enough of learning to misquote," apparently) isn't modeling anything except in the most trivial sense. Hypothetical you might as well state that the probability the sun will rise tomorrow is 1 and call that "modeling" too. By way of contrast, Nate Silver's psephological modeling really is modeling in a non-trivial sense, and it involves subjective approaches. Since he has been wrong at least twice while heavily favoring the gay-marriage side, I am justified in questioning his modeling on that particular issue.
Think about it like this, Calc: what if Silver had predicted that the proposition would fail, giving that outcome a probability of 51%, and then the proposition won? Would that prove that Silver is a bad statistician (or as you erroneously implied, no statistician at all)? Of course not.
Nate Silver is not a statistician.
What if he'd given a probability of 60%? That would be better evidence that Silver is a bad statistician, but it would probably still be wise to reserve judgment until more data come in. 70% would be stronger evidence still, but even then we'd have to weigh how Silver's models corresponded with the outcome in this instance against how they corresponded with outcomes in other instances.
As I see it, candidates and "issues" are not the same, and Silver's past success in predictions concerning the former does not "make up" for his underwhelming "gay marriage" predictions.
Silver's prediction in Maine was wrong, yes.
And CA
But as I've explained to you before, statisticians are not soothsayers.
You don't explain things in statistics to me, junior;
I explain them to you.
The 70% (i.e., roughly 5:2) figure is a meta-prediction: if he makes a hundred predictions like the one about the Maine proposition and gets about thirty of them wrong, then he's a better statistician than if he got all of them right, because the meta-prediction is what's important. To illustrate: if Silver had predicted two other outcomes with a model similar to the one used for the Maine proposition, and got both of those right, then the fact that his model predicted the wrong outcome in Maine would actually be evidence against the idea that he's a bad statistician! (Because predicting outcomes correctly 2/3 [~66.7%] of the time is closer to the meta-prediction of getting things right 70% of the time than getting them right all [100%] of the time.) You point to Silver's incorrect prediction in Maine as evidence that he's a bad statistician, but he actually predicted that he'd make incorrect predictions a significant (~30%) proportion of the time!
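The calibration point can be sketched numerically with a hypothetical forecaster (not Silver's actual model): someone who attaches 70% probability to events that really do occur 70% of the time will miss roughly 30 of every 100 such calls, and that is what good calibration looks like.

```python
import random

random.seed(0)

# Hypothetical sketch: simulate 100 events that each truly occur
# with probability 0.70, i.e. the forecaster's stated confidence
# matches reality exactly (a perfectly calibrated forecaster).
N = 100
p = 0.70  # stated probability, assumed equal to the true frequency

hits = sum(random.random() < p for _ in range(N))
misses = N - hits

# hits lands near 70, misses near 30: being "wrong" ~30% of the
# time is exactly what the 70% meta-prediction commits you to.
print(hits, misses)
```

Getting all 100 right would actually be evidence of *mis*calibration, since a genuine 70% forecast should fail about three times in ten.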
We have two trials that I know of. The last of his predictions re: the disposition of Prop 8 was "55/45" in favor of 'no.' The probability Silver would be wrong in both cases (assuming independence, which is most likely not the case, since Silver was the one making both predictions) is 0.1285714. I would expect at least one outcome favoring "gay marriage" using his odds.
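For what it's worth, the 0.1285714 figure is recoverable from the two stated predictions. This reconstruction assumes the arithmetic was: the ~70% (5:2) Maine call gives a 2/7 chance of being wrong there, the 55/45 Prop 8 call gives a 0.45 chance of being wrong there, and the two are multiplied under independence.

```python
from fractions import Fraction

# Assumed inputs, taken from the predictions quoted in the thread:
p_wrong_maine = Fraction(2, 7)     # wrong side of a ~70%, i.e. 5:2, call
p_wrong_prop8 = Fraction(45, 100)  # wrong side of a 55/45 call

# Joint probability of being wrong both times, assuming independence:
p_both_wrong = p_wrong_maine * p_wrong_prop8   # 9/70
print(float(p_both_wrong))   # 0.12857142..., matching the quoted figure

# Complement: chance of at least one call favoring "gay marriage"
# coming out right, under the same independence assumption.
print(float(1 - p_both_wrong))
```

So, taking the odds at face value, there was roughly an 87% chance of at least one of the two outcomes going the way Silver favored, which is the basis of the "I would expect at least one" remark.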