Recently in Studies Category

Another Hidden Cost of Crime

Unfortunately, some people on the conservative side of the political aisle have jumped on the "let-em-out" bandwagon because they see that as a way to reduce government budgets.  Looking at costs to government alone, however, is not the correct way to measure costs of alternative courses of action to society as a whole.  When government fails in its fundamental obligation to protect people from crime, it imposes costs on the victims, a kind of "crime tax" that falls heavily, partly at random, but disproportionately on people of modest means.  Quantifying the cost of crime to victims is a tricky business in many ways, and one of the ways is that much of the cost is hidden.  Science Daily has this article on a hidden cost that has been overlooked to this point:

In a recent paper published in the Journal of Development Economics, researchers Professor Marco Manacorda (Queen Mary University of London) and Dr Martin Foureaux Koppensteiner (University of Leicester) examined the effects of exposure to day-to-day violence in Brazil by analysing the birth outcomes of children whose mothers were exposed to local violence, as measured by homicide rates in small Brazilian municipalities and the neighbourhoods of the city of Fortaleza.

The team estimated the effect of violence on birth outcomes by comparing mothers who were exposed to a homicide during pregnancy to otherwise similar mothers residing in the same area, who happened not to be exposed to homicides.

The study found that birthweight falls significantly among newborns exposed to a homicide during pregnancy and that the number of children classified as low birthweight increases -- and that the effects are concentrated in the first trimester of pregnancy, which is consistent with claims that stress-induced events matter most when occurring early in pregnancy.

Evidence Mounts on Crime Spikes

Many people -- including law enforcement officials, bloggers here, and Heather Mac Donald at the Manhattan Institute -- have been raising alarms that increasing numbers of innocent people are being needlessly victimized due to ill-considered policy changes (e.g., California's "realignment" and Prop. 47) and the "Ferguson Effect" of police holding back as they come under hyper-scrutiny.  Soft-on-crime apologists have responded that the data are too tentative and too anecdotal to draw such conclusions.  That answer did have some validity initially, but it wears increasingly thin as data accumulate.

In Chicago, the local variant of the Ferguson Effect might be called the McDonald Effect.  The number crunchers at FiveThirtyEight (who definitely do not lean conservative) have concluded that the numbers have reached the threshold that they can't be brushed off like that any more.  Rob Arthur and Jeff Asher have this post.
Some of the best information we have in social sciences comes from longitudinal studies, where people are followed and data gathered over a long period of time.  Such studies are more likely to give us definitive answers than "snapshot" correlational studies that look at a sample at one time and merely tell us that certain factors tend to go together in members of the sample.

The problem with longitudinal studies, of course, is that they necessarily take a very long time to do.  Some of the best work has come out of New Zealand.  Magdalena Cerda, Terrie Moffitt, et al. have this article in Clinical Psychological Science on results from the Dunedin Longitudinal Study on the long-term effects of persistent marijuana use.  Here is the abstract:

With the increasing legalization of cannabis, understanding the consequences of cannabis use is particularly timely.  We examined the association between cannabis use and dependence, prospectively assessed between ages 18 and 38, and economic and social problems at age 38. We studied participants in the Dunedin Longitudinal Study, a cohort (N = 1,037) followed from birth to age 38. Study members with regular cannabis use and persistent dependence experienced downward socioeconomic mobility, more financial difficulties, workplace problems, and relationship conflict in early midlife. Cannabis dependence was not linked to traffic-related convictions.  Associations were not explained by socioeconomic adversity, childhood psychopathology, achievement orientation, or family structure; cannabis-related criminal convictions; early onset of cannabis dependence; or comorbid substance dependence. Cannabis dependence was associated with more financial difficulties than was alcohol dependence; no difference was found in risks for other economic or social problems.  Cannabis dependence is not associated with fewer harmful economic and social problems than alcohol dependence.

The New Data on Eyewitness Testimony

The current issue of the Monitor has a short article on some new data regarding eyewitness testimony.  For many years, various psychological experts have insisted that sequential lineups are vastly superior to simultaneous lineups.  Now it appears the reverse may be true:

By some estimates, around a third of law enforcement agencies in the United States now use the sequential format, says John Wixted, PhD, a psychologist at the University of California, San Diego. But, he says, that switch might have been a mistake.

Wixted is one of several scientists, along with Clark and Scott Gronlund, PhD, a psychologist at the University of Oklahoma, who have championed a statistical method called receiver operating characteristic (ROC) analysis, a method widely used in other fields to measure the accuracy of diagnostic systems.

Using that analysis, sequential lineups don't appear to be beneficial -- and might lead to slightly more misidentifications than simultaneous lineups, Gronlund and Wixted have reported (Current Directions in Psychological Science, 2014). The problem, they say, is that previous analytical methods confounded accuracy with a witness's willingness to choose a suspect. In other words, sequential lineups seem to make people less likely to make a choice at all. But when they do pick a suspect, they might be at greater risk of making the wrong choice. "It turns out sequential lineups are inferior," says Wixted.
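The core of the ROC method Gronlund and Wixted describe can be sketched in a few lines.  The figures below are made up for illustration, not taken from their studies: sweep a confidence cutoff from strict to lenient, and at each cutoff pair the cumulative hit rate (correct IDs in target-present lineups) with the cumulative false-alarm rate (innocent-suspect IDs in target-absent lineups).  A lineup procedure that traces a higher curve is more diagnostic at every level of a witness's willingness to choose, which is exactly the confound the older methods missed.

```python
# Minimal sketch of ROC analysis for lineup data (hypothetical counts).
# Witnesses rate confidence from 3 (high) to 1 (low); we sweep a
# confidence threshold and collect (false-alarm rate, hit rate) points.

# Hypothetical counts of suspect IDs at each confidence level.
target_present = {3: 60, 2: 25, 1: 15}   # correct IDs (hits)
target_absent  = {3: 5,  2: 10, 1: 20}   # innocent-suspect IDs (false alarms)

n_present = 200   # total target-present lineups (assumed)
n_absent  = 200   # total target-absent lineups (assumed)

def roc_points(present, absent, n_p, n_a):
    """Cumulative (false-alarm rate, hit rate) pairs, strictest cutoff first."""
    points, hits, fas = [], 0, 0
    for level in sorted(present, reverse=True):  # cutoffs 3, 2, 1
        hits += present[level]
        fas += absent[level]
        points.append((fas / n_a, hits / n_p))
    return points

for fa, hit in roc_points(target_present, target_absent, n_present, n_absent):
    print(f"false-alarm rate {fa:.3f}  hit rate {hit:.3f}")
```

Running the same computation on data from each lineup format and comparing the curves is the essence of the approach; the real analyses of course use far larger samples and formal curve comparisons.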

And what about eyewitness confidence?

For many years, researchers didn't think an eyewitness's confidence revealed much about his or her accuracy in identifying a suspect, says Wixted. A confident eyewitness could be just as likely to get the ID right -- or wrong -- as a less confident witness. But in the last two decades, numerous analyses have converged on the fact that eyewitness confidence is actually a strong indicator of accuracy.

Stats Matter

The Ferguson effect is undoubtedly being downplayed by the left.  Heather Mac Donald has this piece in the City Journal addressing the way researchers have tried to obscure its existence.  The subject of the piece is a recent paper published in the Journal of Criminal Justice, authored by four researchers led by University of Colorado Boulder sociologist David Pyrooz, in which they "created a complex econometric model that analyzed monthly rates of change in crime rates in 81 U.S. cities with populations of 200,000 or more."  Some of their findings:

The researchers found that in the 12 months before Michael Brown was shot in Ferguson, Missouri--the event that catalyzed the Black Lives Matter movement--major felony crime, averaged across all 81 cities, was going down. In the 12 months after Brown was shot, that aggregate drop in crime slowed down considerably. But that deceleration of the crime drop was not large enough to be deemed statistically significant, say the criminologists. Therefore, they conclude, "there is no systematic evidence of a Ferguson Effect on aggregate crime rates throughout the large U.S. cities . . . in this study."

Mac Donald clarifies:

[T]he existence of a Ferguson effect does not depend on its operating uniformly across the country in cities with very different demographics. When the researchers disaggregated crime trends by city, they found that the variance among those individual city trends had tripled after Ferguson. That is, before the Brown shooting, individual cities' crime rates tended to move downward together; after Ferguson, their crime rates were all over the map. Some cities had sharp increases in aggregate crime, while others continued their downward trajectory. The variance in homicide trends was even greater--nearly six times as large after Ferguson. And what cities had the largest post-Ferguson homicide surges? Precisely those that the Ferguson effect would predict: cities with high black populations, low white populations, and high preexisting rates of violent crime.
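The distinction Mac Donald draws between the aggregate trend and its dispersion is easy to see numerically.  Here is a toy illustration with invented city-level figures (not the study's data): the average change across cities can stay in the same ballpark while the variance among them balloons, which is precisely the pattern an effect operating unevenly across cities would produce.

```python
# Hypothetical percent changes in crime for eight cities, before and
# after some event.  Invented numbers, purely to illustrate how a
# similar average can hide a large jump in city-to-city variance.
from statistics import mean, pvariance

before = [-2.0, -1.5, -1.0, -2.5, -1.8, -1.2, -2.2, -1.8]  # cities move together
after  = [-3.0,  0.5, -2.5,  1.0, -2.0,  0.0, -3.5,  1.5]  # all over the map

print(f"mean before: {mean(before):+.2f}   variance: {pvariance(before):.2f}")
print(f"mean after:  {mean(after):+.2f}   variance: {pvariance(after):.2f}")
```

In this made-up example the average change moves only modestly, while the variance across cities increases many times over -- an aggregate test would shrug, yet the disaggregated picture has changed dramatically.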
[Graph: California vs. non-California crime changes, 2014-2015]
As noted previously on this blog, the FBI recently announced the Preliminary Semiannual Uniform Crime Report covering the first half of 2015 for cities over 100,000.  I have totaled the crime counts for violent and property crimes for 2014 and 2015 and computed the percent changes for California cities versus cities in other states.  Click on the graph for a larger view.
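The computation behind the graph is straightforward.  As a sketch, with hypothetical totals rather than the actual FBI counts:

```python
# Percent change in crime counts, 2014 -> 2015.
# The totals below are placeholders, not the actual FBI figures.
def pct_change(count_2014, count_2015):
    return 100.0 * (count_2015 - count_2014) / count_2014

# e.g., violent-crime totals for one group of cities (assumed numbers)
print(f"{pct_change(50000, 55000):+.1f}%")   # +10.0%
```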

California has (1) court orders overriding state law to release prisoners because of overcrowded conditions caused by the Legislature's failure to build enough prison space, (2) the "realignment" program moving prisoners from state prison to overcrowded county jails where they are either released or push out prisoners who would otherwise be in jail, and (3) Proposition 47, which reduced many felonies to misdemeanors.  Between these measures, the state has seriously softened its approach to crime and put many criminals on the street who would otherwise be in custody.  Although other states are taking more modest measures to reduce prison populations, nowhere else do we see this headlong rush to push criminals out the gates.  One would expect, then, that California would have much worse results than other states, and that is exactly what we see.

What is Governor Brown's plan?  Push even more criminals onto the streets. 

Murders Up 6.2%, FBI Data Show

Official crime statistics are slow to confirm what common sense tells us is likely to happen and what anecdotal evidence tells us is happening.  There is a lag between cause and effect and another lag between effect and the official statistics.  But eventually the facts, "stubborn things," do come in.

Devlin Barrett has this article in the WSJ, with the above headline in the print version (slightly different online).

Murders rose 6.2% in the first half of 2015, according to preliminary crime data released Tuesday by the Federal Bureau of Investigation, figures that are likely to further fuel the current political debates about crime, policing and sentencing.

Violent crime overall increased 1.7%, the FBI found, while property crimes decreased 4.2%, compared with the first six months of 2014. Police chiefs from around the country had warned about an apparent surge in recent months.

Notes on Crime Statistics

Carl Bialik has this post at 538 Blog on crime statistics and why they can be confusing and manipulated.  It's a useful post on the technical aspects, but the "spin" aspect of it is annoying.  Bialik tries to soft-pedal the increase in homicides.  A 14% jump in a single year is huge.

Bialik says, "For what it's worth, homicides are up -- though probably by less than what you've read."  What's with the "probably"?  Is he implying that most news media have exaggerated the increase?  From what I have read, most media are fully complicit in the soft-pedaling.

"So-called justifiable homicides don't count toward the FBI definition."  So-called?  They are called that because they are that.  Would you point at an oak tree and say it is a "so-called oak"?

Bialik discusses the problem of crimes being defined differently in different jurisdictions, which is indeed a huge problem in crime statistics.  (It's even worse when you go international.)  He talks about the NYPD's reporting of "shootings" and the FBI's UCR category of "aggravated assault."  He quotes a criminologist saying (correctly) that the latter is more deserving of confidence.  Then he tosses in, "Unfortunately, the FBI doesn't separate assaults by firearm from other assaults for individual cities in the data it reports."  Why unfortunate?  If you need to choose between classifying assaults by harm caused and classifying them by weapon used, it seems to me that harm caused is by far the more important.  Ideally, you could classify them both ways, but resources limit what you can do, so the ideal is rarely achieved.
Here's a follow-up on my Boxing Day post.  That big city murder increase dismissed as a mere 11% in a single year in the Brennan Center's preliminary figures turns out to be 14.6% in the final figures, according to this press release.  That is nearly one in seven.  And what could cause this?

The preliminary report examined five cities with particularly high murder rates -- Baltimore, Detroit, Milwaukee, New Orleans, and St. Louis -- and found these cities also had significantly lower incomes, higher poverty rates, higher unemployment, and falling populations than the national average.
"There are none so blind as those who will not see."  These are also cities where the police are under severe attack.

Simultaneous v. Sequential Lineups

One thing we know from studying studies is that you should not make radical changes based on a single study but rather wait for the result to be confirmed by other studies.  You don't know how "robust" a result is until an issue has been studied multiple ways by multiple researchers.  How many times would you have stopped and restarted drinking coffee if you went with every study that came along?

A while back there was some research that indicated that sequential lineups -- where the witness looks at suspects or pictures one at a time -- were far better than simultaneous ones, where the witness looks at a group at once.  There was a rush to codify this preference into rigid requirements.  Well, that may not be right.  Bradley Fikes reports in the San Diego Union-Tribune on a study indicating, among other things, that "simultaneous lineups were, if anything, diagnostically superior to sequential lineups. These results suggest that recent reforms in the legal system, which were based on the results of older research, may need to be reevaluated."

Another important finding is that the witness's confidence at the first observation is an important indication of accuracy, much more so than the witness's demeanor at trial, which is usually all juries have to go on.
The North Carolina Supreme Court has sent back to the trial court the cases on that state's ill-conceived, misnamed, and since repealed "Racial Justice Act."  The purpose of that act was to defeat rather than promote justice, and it allowed murderers to overturn their sentences based on the kind of statistics-based arguments rejected by the U.S. Supreme Court in McCleskey v. Kemp.  (See my law review article for background on the racial statistics controversy.)

Jacob Gershman has this article in the WSJ.

The state supreme court vacated the decisions in favor of the murderers, but it did so on the narrow ground that the trial judge did not allow the prosecution sufficient time to gather evidence to rebut a large study submitted to support the claim.  That means the case goes on. 
WaPo fact checker Michelle Ye Hee Lee had this article Thursday on President Obama's various statements after mass shootings, which are not fully consistent with each other or with the facts.  The article also has a cautionary nugget about what "experts say" and what "studies show."

Mr. Obama gets the maximum Four Pinocchios (reserved for "whoppers") for his December 1 statement in Paris, "I say this every time we've got one of these mass shootings: This just doesn't happen in other countries."  Wow.

The President's other, more nuanced statements about the relative frequency of such incidents get the milder Two Pinocchio rating ("significant omissions and/or exaggerations").  To check the facts, Ms. Lee consults experts Adam Lankford and John Lott and gets very different answers.

Astute readers might notice how Lankford and Lott both compared the United States to grouped European countries, but their conclusions are vastly different. Lott says the rate is about the same, while Lankford says the rate is five times higher in the United States. How is this possible? The researchers are looking at different sets of years and different sets of countries. (Lott looked at Europe as a whole; Lankford at the European Union.) Lott uses a broader measure of mass shootings than Lankford does. Lankford looks at the number of shooters; Lott uses fatalities and shooting incidents. This is an example of how the data and definition can be adjusted to show different findings about mass shootings, even using a per capita rate.
Lots and lots of choices have to be made in setting up a study, many seemingly benign in themselves.  If a person wants to reach a particular result, it is easy as pie to run the numbers 16 different ways, pick the way that best supports your agenda, and throw the others in the trash.
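The arithmetic behind that danger is simple.  If each of the 16 specifications were an independent test of pure noise at the conventional 5% significance level (an idealizing assumption -- real specifications are correlated), the chance of at least one spurious "significant" finding is better than even:

```python
# Chance that at least one of k independent analyses of pure noise
# comes out "statistically significant" at the 5% level.
alpha = 0.05
for k in (1, 5, 16):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} specifications -> {p_any:.0%} chance of a spurious hit")
```

With 16 tries, the researcher fishing for a result succeeds more often than not even when there is nothing there at all.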

This is why the viewpoint one-sidedness of American academia and the well-funded nonprofits is so very dangerous.  The truth comes out much more clearly when there are people on both sides doing these kinds of studies, but academic conservatives are an endangered species, and those who do "come out" are targeted by neo-McCarthyists determined to achieve ideological purity.

Be very, very skeptical about what "studies show" and "experts say."  

Can DNA Testing Be Too Sensitive?

Forensic DNA testing has gotten better and better over the years, giving us definitive answers from samples that previously would have been too small or too degraded.

Generally, that has been a good thing.  For example, the "wrongly executed" Roger Coleman and the "exonerated" Timothy Hennis were both proved guilty by conclusive DNA matches after the technology improved.

However, DNA testing is now getting so sensitive that it can pick up a person's DNA from a place he has never been or an object he has never touched by transfer from someone else.  Ben Knight has this article at Yahoo News.

The problem, of course, is not in the science but in the interpretation.  The answer to the rhetorical question of the caption is no.  DNA testing cannot be too sensitive, but the results of ultrasensitive tests must be interpreted with great caution.

"Noble-Cause Corruption"

Catching up a bit, this article by Paige St. John in the LA Times is a couple weeks old now and not on topic, but it introduces an important term to express a form of deceit that I have seen many times but did not have a term for.

Cal. Gov. Jerry Brown last month tied the rash of destructive fires to carbon emissions.  One small problem, say the scientists.  There is no scientific basis for that connection.

University of Colorado climate change specialist Roger Pielke said Brown is engaging in "noble-cause corruption."

Pielke said it is easier to make a political case for change using immediate and local threats, rather than those on a global scale, especially given the subtleties of climate change research, which features probabilities subject to wide margins of error and contradiction by other findings.

"That is the nature of politics," Pielke said, "but sometimes the science really has to matter."

Tweet of the Day

From Pat Sajak:  "Studies show 92% of stats are manipulated to make political or social points, but if repeated, are believed by 96%."
