Computer Model Predicts Brazil Will Win World Cup

Here's a bit of old news.  On May 28, Neelabh Chaturvedi reported on the WSJ's MoneyBeat blog that the very smart folks at Goldman Sachs "crunched lots of numbers. Lots and lots of numbers," to predict that host Brazil would win the World Cup.

Um, that didn't exactly work out.  Phillipa Leighton-Jones and Jon Sindreu have this follow-up on the same blog.

In December 2011, I wrote a post titled Models Behaving Badly about a book of the same title on the shortcomings of computer modeling of human behavior.  The post noted the implications for criminal justice.  (See also the noteworthy comment by federale86.)

All of these cautions apply when the modeler is actually trying to get to the truth.  An additional heaping scoop of skepticism is called for when the modeler is, or is retained by, an advocate trying to convince people of a position he holds for other reasons.  On Tuesday, Robert Caprara had this op-ed in the WSJ titled Confessions of a Computer Modeler.  While working for the EPA, Caprara was told to go back and redo his model, repeatedly, until he got the "right" answer, the answer predetermined by the agenda of the official in charge:
I realized that my work for the EPA wasn't that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client's position. The opposition will build its best case for the counterargument and ultimately the truth should prevail.

If opponents don't like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else's model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.
The problem with this "ultimately the truth should prevail" theory, at least as applied to criminal justice, is that the playing field is not level.  There is little independent funding to do research in the area, and most of it ends up being done by academics.  Because academia is poisoned by Political Correctness, a lot more studies are going to be done by people with a pro-defendant agenda than a pro-prosecution one.  There is nowhere near enough scrutiny of studies with a pro-defense bottom line, while any academic who publishes a study with a bottom line useful to the prosecution will be savagely attacked regardless of actual merit.

Then there is the problem of publicizing the results once they are found.  The finding of David Baldus et al. that there was a "race-of-victim" bias in capital sentencing was widely reported in the press and the academic literature and is accepted by a great many people as the unquestioned truth.  The finding of the federal district court in the McCleskey case, after a full trial with expert testimony on both sides, was that Baldus's data show nothing of the sort.  Yet that finding is virtually unknown.  Even a nationally known expert on sentencing law was genuinely surprised when I told him that.

To take another example from the same dispute, the finding of Raymond Paternoster et al. regarding supposed bias in Maryland capital sentencing is well known.  Richard Berk and Azusa Li of UCLA recrunched the numbers and found that Paternoster's results were "fragile," but hardly anyone knows that.  For whatever reason, negative findings in this area are not considered newsworthy.  For more on this, and citations to the above sources, see my OSJCL article.
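
For readers who wonder what a "fragile" result looks like in practice, here is a minimal sketch in Python using entirely made-up data.  It is emphatically not Berk and Li's analysis, and the variable names are hypothetical.  The point it illustrates: a variable with no real effect can look highly "significant" when the modeler omits a confounding variable, and the "finding" evaporates as soon as the confounder is controlled for.

```python
# Illustrative sketch only: synthetic data showing how a regression
# "finding" can appear or vanish depending on which controls the
# modeler includes.  Not Berk and Li's analysis; all names and
# numbers here are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000

# Hypothetical confounder (think: case severity), which drives both
# the studied indicator and the outcome.
severity = rng.normal(size=n)

# Binary indicator the study is "about" -- more likely to be 1 in
# severe cases, but with NO direct effect on the outcome.
indicator = (severity + rng.normal(size=n) > 0).astype(float)

# Outcome driven by severity plus noise; the indicator does nothing.
outcome = 2.0 * severity + rng.normal(size=n)

# Model 1: omit the confounder.  The indicator looks "significant."
m1 = sm.OLS(outcome, sm.add_constant(indicator)).fit()

# Model 2: control for the confounder.  The "effect" disappears.
X2 = sm.add_constant(np.column_stack([indicator, severity]))
m2 = sm.OLS(outcome, X2).fit()

print(f"Without control: coef={m1.params[1]:.3f}, p={m1.pvalues[1]:.2g}")
print(f"With control:    coef={m2.params[1]:.3f}, p={m2.pvalues[1]:.2g}")
```

Run both regressions and the indicator's coefficient flips from large and "significant" to essentially zero.  Which of the two becomes the headline is the modeler's choice, which is exactly Caprara's point about the coefficients.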

Bottom line:  what "studies show" "ain't necessarily so."
