Lemmie wrote: Here is my actual comment addressing this, prior to that last sentence you quoted:
So if individual factors, NOT captured by the statistics, are the "driving determinants" of success, then the only way that IQ could still have "the most predictive power" would be if those individual factors WERE correlated with IQ. But the main argument regarding these outliers, if I am reading you correctly, was that these individual factors were NOT correlated with IQ. So, which is it? IQ predicts? Or individual factors predict? Or is it that the authors are carefully asserting no individual is being talked about here, but their analysis still holds for groups and averages, hence EA's "exaggerated relationship" comment?
Analytics wrote: I only have a minute, but I'll try to answer this. The idea is that IQ has the most predictive power of the variables we have been able to study. Individuals do better or worse than what a prediction based solely on IQ would suggest. On average, however, IQ does a good job of predicting.
Why do some people do better than the prediction and others do worse? Presumably there is a reason. Or maybe it's just luck. Or both? The whole concept of predictive analytics and big data is to expand the scope of analysis, bring in more factors, look at other model forms, and make better predictions. The fact that the model has room for improvement doesn't invalidate the statistical relationships that it does explain.
Whether it "exaggerates" the relationship is a subjective claim, one that I'm not sure I agree with and don't know how to address.
Does that help?
Who is it that you think is asking for help? That's twice now. Just weird.

I'm not asking for help; I am registering some serious reservations regarding your assessment of the statistical work you are reviewing.
You say "presumably there is a reason" the model fails for some, but you don't seem to understand that if the statistical analysis is not set up correctly by the humans it will fail--not because the principles of statistical analysis are weak, but because the set up has errors!
And your italicized part, "of the variables we have been able to study," does not make sense. There are myriad variables and data available, and in your example the authors chose TWO, found a MODEST relationship with an extremely large number of unexplained data points, and then drew sweeping, unsupported conclusions.
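For scale: a correlation of r explains only r-squared of the variance in the outcome, so even a respectable-looking r leaves most of the variation unexplained. Illustrative arithmetic only; these are not the authors' numbers:

```python
# A Pearson correlation of r between a predictor and an outcome
# explains r^2 of the outcome's variance, leaving 1 - r^2 unexplained.
# The r values below are invented for illustration.
for r in (0.3, 0.5, 0.7):
    explained = r ** 2
    print(f"r = {r}: {explained:.0%} explained, {1 - explained:.0%} unexplained")
```

Even at r = 0.5, three quarters of the variance is coming from something other than the chosen predictor, which is exactly the "extremely large number of unexplained data points" problem.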
Analytics wrote: The fact that the model has room for improvement doesn't invalidate the statistical relationships that it does explain.
Actually, yes it does. If the model is not properly capturing the relationship, then the incorrect statistical relationships it IS producing are indeed invalidated.
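One concrete way a badly set-up model produces misleading relationships is omitted-variable bias: if a factor left out of the model correlates with the included predictor, the included predictor's coefficient absorbs the omitted factor's effect. A minimal synthetic sketch; every name and coefficient below is invented, none of it comes from the book under discussion:

```python
import numpy as np

# Synthetic illustration of omitted-variable bias; all numbers are invented.
rng = np.random.default_rng(2)
n = 5000
confound = rng.normal(size=n)                 # unmeasured factor
iq = 0.7 * confound + rng.normal(size=n)      # "IQ" correlated with the omitted factor
outcome = 0.2 * iq + 0.6 * confound + rng.normal(size=n)

def ols_slope(x, y):
    # Simple-regression slope of y on x.
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

naive = ols_slope(iq, outcome)  # regression that omits the confound
print(f"true IQ coefficient: 0.20, naive estimate: {naive:.2f}")
```

The naive slope comes out well above the true 0.2, because the included variable gets credited with the confound's contribution. The estimated relationship is not merely incomplete; it is wrong.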
Analytics wrote: Whether it "exaggerates" the relationship is a subjective claim, one that I'm not sure I agree with and don't know how to address.
No, the way you've presented the stats, it's an objective issue that CAN be addressed: evaluate the robustness of the inputs to the statistical model against other, apparently unevaluated independent variables, and compare the relative strength of the resulting statistical relationships.
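The robustness check described above can be sketched in a few lines: fit candidate variable sets on one half of the data and compare out-of-sample R-squared on the held-out half. Everything below is synthetic; "parental_ses" is just a hypothetical rival predictor, not a variable from the book:

```python
import numpy as np

# Synthetic stand-in data; nothing here comes from the work under discussion.
rng = np.random.default_rng(1)
n = 2000
iq = rng.normal(size=n)                             # standardized "IQ"
parental_ses = 0.5 * iq + 0.9 * rng.normal(size=n)  # hypothetical rival predictor
outcome = 0.3 * iq + 0.4 * parental_ses + rng.normal(size=n)

def holdout_r2(cols, y, n_train=1000):
    """Fit OLS on the first n_train rows, score R^2 on the held-out rest."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - X[n_train:] @ beta
    ss_tot = ((y[n_train:] - y[n_train:].mean()) ** 2).sum()
    return 1 - (resid ** 2).sum() / ss_tot

r2_iq = holdout_r2([iq], outcome)
r2_ses = holdout_r2([parental_ses], outcome)
r2_both = holdout_r2([iq, parental_ses], outcome)
print(f"IQ only: {r2_iq:.2f}  SES only: {r2_ses:.2f}  IQ+SES: {r2_both:.2f}")
```

If a rival variable matches or beats the favored one out of sample, the claim that the favored variable has "the most predictive power" is an empirical question with an empirical answer, not a matter of taste.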
What I'm seeing from your summary and excerpts is that the authors have engaged in some extremely weak and unreliable statistical analysis, which they are then using to unjustifiably support their conclusions. I don't have the confidence you do that they are accurately portraying the situation; in my opinion you are taking their work at face value without adequately considering and understanding the statistical issues.