Today the Supreme Court heard oral argument regarding when a murderer is deemed mentally retarded, and thus exempt from execution, regardless of how heinous, callous, premeditated, or sadistic his crime was.
In the course of this discussion, there was much talk about the 95% confidence interval in statistics. Contrary to myth, this 95% number is nothing but a conventionally adopted rule of thumb. There is nothing magic about it, and there is no compelling reason to use 95% in every circumstance, rather than some other number tailored to the needs of a particular situation.
The rule of thumb goes back to the period between the two world wars and the work of R. A. Fisher. A common problem in studies is that we find that two things, call them A and B, tend to go together, and we want to get a handle on whether this is coincidence or a true correlation. The rule of thumb is that we "reject the null hypothesis" and say it's not just a coincidence if the correlation between A and B is strong enough that, were there no real relationship, the chance of seeing a correlation that strong would be less than 5%. This is expressed in journals as p < .05. A result meeting that criterion is pronounced "statistically significant" and given the coveted asterisk, as if there were a big difference between p = 0.051 and p = 0.049. (There isn't.)
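To make the idea concrete, here is a minimal sketch of one standard way to compute such a p-value: a permutation test on the correlation between A and B. The data below are made up for illustration; the 0.05 cutoff is exactly the conventional threshold discussed above, nothing more.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def permutation_p_value(xs, ys, trials=10_000, seed=0):
    """Estimate the chance of a correlation at least this strong
    arising if xs and ys were in fact unrelated (the null hypothesis),
    by repeatedly shuffling ys and re-measuring the correlation."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical toy data where A and B plainly go together:
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [2.1, 1.9, 3.2, 4.8, 5.1, 5.9, 7.3, 8.0]
p = permutation_p_value(a, b)
print(p < 0.05)  # the conventional "statistically significant" cutoff
```

Note that nothing in the computation itself singles out 5%; the code would run identically with a 1% or 10% threshold, which is the point: the cutoff is a convention layered on top of the arithmetic, not a product of it.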
The quasi-religious devotion to this arbitrary criterion was skewered by the famed psychological statistician Jacob Cohen in a classic article:
The atmosphere that characterizes statistics as applied in the social and biomedical sciences is that of a secular religion [citation], apparently of Judeo-Christian derivation, as it employs as its most powerful icon a six-pointed cross, often presented multiply for enhanced authority.