This article in the WSJ isn't directly related to criminal law, but it nicely illustrates a problem in social statistics that comes up all the time in policy discussions about crime.
The article is titled "The Most Dangerous Place to Bicycle in America." How do they measure dangerousness? Their measure is a ratio: the number of people killed in bicycle accidents over a ten-year period in the numerator, and the population divided by 100,000 in the denominator. That is, they define dangerousness = deaths / (pop/100,000). See the problem?
As so often happens, the problem is in the denominator. When we compare the dangerousness of flying versus driving, for example, we compare deaths per passenger-mile (scaled by some convenient factor). That is a much more relevant denominator. But how do we know how many miles are ridden on bicycles in a given area? We don't.
With the number they really want unavailable, the study's authors substitute an inferior one. Population would serve as a proxy for cycle-miles only if bicycle usage were uniform across the country, that is, if the number of miles ridden on bicycles per person per year were the same in every locality. Is it? Of course not.
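To make the distortion concrete, here is a minimal sketch with invented numbers (the population, mileage figures, and per-mile risk below are all hypothetical, chosen only for illustration). Two cities carry exactly the same risk per mile ridden, yet the article's deaths-per-100,000-residents measure makes the one where people ride more look far more dangerous:

```python
# Hypothetical illustration: two cities with identical per-mile risk
# but different cycling habits. All numbers are invented.

def per_capita_rate(deaths, population):
    """The article's measure: deaths per 100,000 residents."""
    return deaths / (population / 100_000)

def per_mile_rate(deaths, miles):
    """A more relevant measure: deaths per 100 million cycle-miles."""
    return deaths / (miles / 100_000_000)

population = 1_000_000         # same population in both cities
risk = 20 / 100_000_000        # assume 20 deaths per 100M cycle-miles in BOTH cities

# Mild-winter city: residents ride 300 miles per person per year, over 10 years
sunbelt_miles = population * 300 * 10
sunbelt_deaths = risk * sunbelt_miles        # 600 deaths

# Cold-winter city: residents ride 60 miles per person per year, over 10 years
frostbelt_miles = population * 60 * 10
frostbelt_deaths = risk * frostbelt_miles    # 120 deaths

print(per_mile_rate(sunbelt_deaths, sunbelt_miles))      # 20.0 -- identical true risk
print(per_mile_rate(frostbelt_deaths, frostbelt_miles))  # 20.0
print(per_capita_rate(sunbelt_deaths, population))       # 60.0 -- looks 5x "more dangerous"
print(per_capita_rate(frostbelt_deaths, population))     # 12.0
```

The per-mile rate is identical in both cities, yet the per-capita measure makes the high-ridership city look five times as dangerous, purely because its residents ride more.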
Among the 50 largest metro areas, all 10 of the areas with the highest "dangerousness" rate by this measure are places where winter is considerably milder than the national average. Is the number of cycle-miles per capita higher in mild-winter areas than in places with bone-chilling cold for several months of the year? Very likely. Chicago, hardly a cycling paradise in January, rates very safe on the article's dangerousness scale. No doubt other factors also make bicycle usage differ from one area to another; the general level of "eco-consciousness" in an area, for instance, might increase bicycle usage and thereby drive up the flawed measure of dangerousness.
The article goes on to discuss why Florida is particularly dangerous for cyclists, and those reasons may well be valid; Florida's rates are higher even than those of other mild-winter areas. What's annoying, though, is that the article never tells the reader that its basic statistic is flawed.
The moral of the story, as I have mentioned before, is that we can never just take the caption of a study and accept it as truth. We must dig deeper into the numbers behind the capsule description. Two common flaws are (1) the numbers we really want are not available, so other numbers with only a dubious relation to them are substituted; and (2) the problem sits in the denominator of a ratio, where it is less likely to be noticed. Both flaws are present here.
What "studies show" "ain't necessarily so."
