

Study Shows Studiers Can Show Anything They Want

Joseph Brean reports in the National Post:

[A] new report in the journal Psychological Science, which claims to show "how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis."

...[T]wo scientists from the Wharton School of Business at the University of Pennsylvania, and a colleague from Berkeley, argue that modern academic psychologists have so much flexibility with numbers that they can literally prove anything.

In effect turning the weapons of statistical analysis against their own side, the trio managed to prove something demonstrably false, and thereby cast a wide shadow of doubt on any researcher who claims his findings are "statistically significant."

In "many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not," they write.

The article is about psychology rather than crime, but the implications apply fully to studies about crime.  Indeed, because crime issues often have one side deemed Politically Correct in academia, the danger of manipulation is at its maximum.  For many years, studies by researchers with an anti-death-penalty agenda have claimed to show pervasive race-of-victim bias in the death penalty, but, as I have shown here and here, they really don't.  Now we are inundated with "evidence-based" claims that alternatives to incarceration will keep us just as safe as locking up the bad guys, and claims that the dramatic drop in crime after we got tough is just a coincidence.  As the Gershwin song goes, "It Ain't Necessarily So."

The new study is Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn, False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant, Psychological Science, November 2011, vol. 22, no. 11, pp. 1359-1366.  Here is the abstract:

In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
"The culprit is a construct we refer to as researcher degrees of freedom."  (For statistics nerds, that's wickedly funny.)

Even in the absence of Political Correctness, researchers have an incentive to produce a study showing a significant effect.  "Publish or perish," the saying goes, and null results are hard to get published.  There are a great many decisions that can be made along the way, and a desire to produce a significant result can produce a conscious or unconscious bias to choose the tine of the fork in the research road that leads to significance.
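The mechanism is easy to demonstrate.  Here is a minimal simulation sketch, in the spirit of (but not reproducing) the paper's own computer simulations: a hypothetical researcher studies a true null effect, but measures two correlated outcomes and, if the first test comes up short, collects more subjects and tests again.  The sample sizes, the second outcome measure, and the extra data collection are all illustrative assumptions, not details from the paper.

```python
# Illustrative sketch (not the authors' actual simulation): two common
# "researcher degrees of freedom" -- a second correlated outcome measure and
# optional stopping -- inflate the false-positive rate above the nominal 5%.
import math
import random

random.seed(1)

def t_stat(a, b):
    """Absolute two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return abs(ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

CRIT = 2.0  # approximate two-tailed .05 critical t value at these sample sizes

def one_study():
    """One study under a TRUE null (no group difference).  The 'researcher'
    measures two correlated outcomes, tests at n=20 per group, and if nothing
    is significant adds 10 more subjects per group and tests again."""
    ga1, ga2, gb1, gb2 = [], [], [], []

    def add_subjects(n):
        for _ in range(n):
            for dv1, dv2 in (ga1, ga2), (gb1, gb2):
                shared = random.gauss(0, 1)          # shared component makes
                dv1.append(shared + random.gauss(0, 1))  # DV 1
                dv2.append(shared + random.gauss(0, 1))  # DV 2 correlate ~.5

    add_subjects(20)
    for _ in range(2):  # first look, then a second look after adding subjects
        if t_stat(ga1, gb1) > CRIT or t_stat(ga2, gb2) > CRIT:
            return True  # a "significant" result gets reported
        add_subjects(10)
    return False

trials = 2000
rate = sum(one_study() for _ in range(trials)) / trials
print(f"false-positive rate: {rate:.1%}")
```

Each individual test honors the .05 threshold, yet the reported false-positive rate across simulated studies lands well above 5%, because only the "significant" path is reported.  Stack a few more such choices, as the paper does, and the rate climbs higher still.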

In the presence of Political Correctness, the danger is magnified in both probability and consequences.  A study with a PC result is more likely to get noticed and less likely to be criticized in the ideological cesspool of contemporary American academia. 

The authors note in general, "[F]alse positives waste resources: They inspire investment in fruitless research programs and can lead to ineffective policy changes."  In the criminal justice arena, "ineffective policy" means victimizing innocent people.  Many tens of thousands of people were needlessly robbed, raped, and murdered because of the folly of soft policies of the 1960s.  We are now in danger of repeating the error.  The evidence proffered to support the proposed policy change requires the most exacting scrutiny.  What "studies show" ain't necessarily so.

2 Comments

Interesting find. The old lore was that a good scholar was someone who was inherently skeptical. The modern scholar is someone who has something to sell.

Steve is correct. Tell me the affiliation of any author or poll and I will be able to predict the survey/research results.

There are few independent authorities without an axe to grind.
