In a recent paper published in the Journal of Development Economics, Professor Marco Manacorda (Queen Mary University of London) and Dr Martin Foureaux Koppensteiner (University of Leicester) examined the effects of exposure to day-to-day violence in Brazil by analysing the birth outcomes of children whose mothers were exposed to local violence, as measured by homicide rates in small Brazilian municipalities and in the neighbourhoods of the city of Fortaleza.
The team estimated the effect of violence on birth outcomes by comparing mothers who were exposed to a homicide during pregnancy to otherwise similar mothers residing in the same area, who happened not to be exposed to homicides.
The study found that birthweight falls significantly among newborns exposed to a homicide during pregnancy, and that the number of children classified as low birthweight increases -- and that the effects are concentrated in the first trimester of pregnancy, which is consistent with claims that stress-inducing events matter most when they occur early in pregnancy.
In Chicago, the local variant of the Ferguson Effect might be called the McDonald Effect. The number crunchers at FiveThirtyEight (who definitely do not lean conservative) have concluded that the numbers have crossed the threshold at which they can no longer be brushed off. Rob Arthur and Jeff Asher have this post.
The problem with longitudinal studies, of course, is that they necessarily take a very long time to do. Some of the best work has come out of New Zealand. Magdalena Cerda, Terrie Moffitt, et al. have this article in Clinical Psychological Science on results from the Dunedin Longitudinal Study on the long-term effects of persistent marijuana use. Here is the abstract:
With the increasing legalization of cannabis, understanding the consequences of cannabis use is particularly timely. We examined the association between cannabis use and dependence, prospectively assessed between ages 18 and 38, and economic and social problems at age 38. We studied participants in the Dunedin Longitudinal Study, a cohort (N = 1,037) followed from birth to age 38. Study members with regular cannabis use and persistent dependence experienced downward socioeconomic mobility, more financial difficulties, workplace problems, and relationship conflict in early midlife. Cannabis dependence was not linked to traffic-related convictions. Associations were not explained by socioeconomic adversity, childhood psychopathology, achievement orientation, or family structure; cannabis-related criminal convictions; early onset of cannabis dependence; or comorbid substance dependence. Cannabis dependence was associated with more financial difficulties than was alcohol dependence; no difference was found in risks for other economic or social problems. Cannabis dependence is not associated with fewer harmful economic and social problems than alcohol dependence.
By some estimates, around a third of law enforcement agencies in the United States now use the sequential format, says John Wixted, PhD, a psychologist at the University of California, San Diego. But, he says, that switch might have been a mistake.
Wixted is one of several scientists, along with Clark and Scott Gronlund, PhD, a psychologist at the University of Oklahoma, who have championed a statistical method called receiver operating characteristic (ROC) analysis, a method widely used in other fields to measure the accuracy of diagnostic systems.
Using that analysis, sequential lineups don't appear to be beneficial -- and might lead to slightly more misidentifications than simultaneous lineups, Gronlund and Wixted have reported (Current Directions in Psychological Science, 2014). The problem, they say, is that previous analytical methods confounded accuracy with a witness's willingness to choose a suspect. In other words, sequential lineups seem to make people less likely to make a choice at all. But when they do pick a suspect, they might be at greater risk of making the wrong choice. "It turns out sequential lineups are inferior," says Wixted.
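To make the ROC idea concrete, here is a minimal sketch of how hit rates and false-alarm rates can be traced out across witness confidence levels for a lineup procedure. The counts are invented for illustration; they are not data from Gronlund and Wixted's studies.

```python
# Hypothetical illustration of ROC analysis for eyewitness lineups.
# All counts below are invented, not taken from any cited study.

def roc_points(hits, false_alarms, n_target, n_lure):
    """Cumulative (false-alarm rate, hit rate) pairs at progressively
    looser confidence thresholds.

    hits[i] and false_alarms[i] are counts of suspect IDs made at
    confidence level i, ordered from highest to lowest confidence;
    n_target and n_lure are the numbers of target-present and
    target-absent lineups administered."""
    points, h, f = [], 0, 0
    for hc, fc in zip(hits, false_alarms):
        h += hc
        f += fc
        points.append((f / n_lure, h / n_target))
    return points

# Invented counts: IDs at high / medium / low confidence out of
# 100 target-present and 100 target-absent lineups per procedure.
simultaneous = roc_points([40, 15, 5], [2, 6, 10], 100, 100)
sequential = roc_points([30, 12, 6], [2, 5, 9], 100, 100)

# At a matched false-alarm rate, the procedure with the higher hit
# rate is more diagnostic. Comparing raw ID rates alone confounds
# accuracy with a witness's willingness to choose at all -- which is
# exactly the confound Gronlund and Wixted describe.
print(simultaneous)
print(sequential)
```

The point of plotting the full curve rather than a single choosing rate is that a procedure can look "safer" simply because witnesses pick less often, not because their picks are more accurate.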
And what about eyewitness confidence?
For many years, researchers didn't think an eyewitness's confidence revealed much about his or her accuracy in identifying a suspect, says Wixted. A confident eyewitness could be just as likely to get the ID right -- or wrong -- as a less confident witness. But in the last two decades, numerous analyses have converged on the fact that eyewitness confidence is actually a strong indicator of accuracy.
Mac Donald clarifies: The researchers found that in the 12 months before Michael Brown was shot in Ferguson, Missouri--the event that catalyzed the Black Lives Matter movement--major felony crime, averaged across all 81 cities, was going down. In the 12 months after Brown was shot, that aggregate drop in crime slowed down considerably. But that deceleration of the crime drop was not large enough to be deemed statistically significant, say the criminologists. Therefore, they conclude, "there is no systematic evidence of a Ferguson Effect on aggregate crime rates throughout the large U.S. cities . . . in this study."
[T]he existence of a Ferguson effect does not depend on its operating uniformly across the country in cities with very different demographics. When the researchers disaggregated crime trends by city, they found that the variance among those individual city trends had tripled after Ferguson. That is, before the Brown shooting, individual cities' crime rates tended to move downward together; after Ferguson, their crime rates were all over the map. Some cities had sharp increases in aggregate crime, while others continued their downward trajectory. The variance in homicide trends was even greater--nearly six times as large after Ferguson. And what cities had the largest post-Ferguson homicide surges? Precisely those that the Ferguson effect would predict: cities with high black populations, low white populations, and high preexisting rates of violent crime.
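The dispersion argument can be illustrated with a toy calculation. The percentage changes below are invented and are not figures from the study Mac Donald describes; they only show how variance can blow up while the average barely moves.

```python
# Toy illustration: invented year-over-year % changes in crime for
# five hypothetical cities, before and after some event.
from statistics import pvariance

pre_event = [-2.0, -1.5, -1.0, -2.5, -1.8]   # cities moving down together
post_event = [4.0, -3.0, 1.5, -2.0, 5.5]     # cities "all over the map"

# A large variance ratio means the city trends diverged sharply,
# even though an aggregate average could still look unremarkable.
print(pvariance(post_event) / pvariance(pre_event))
```

This is why an aggregate test for "no effect" can coexist with dramatic changes in individual cities: averaging across 81 cities washes out offsetting movements that a variance comparison exposes.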
California has (1) court orders overriding state law to release prisoners because of overcrowded conditions caused by the Legislature's failure to build enough prison space, (2) the "realignment" program moving prisoners from state prison to overcrowded county jails where they are either released or push out prisoners who would otherwise be in jail, and (3) Proposition 47, which reduced many felonies to misdemeanors. Between these measures, the state has seriously softened its approach to crime and put many criminals on the street who would otherwise be in custody. Although other states are taking more modest measures to reduce prison populations, nowhere else do we see this headlong rush to push criminals out the gates. One would expect, then, that California would have much worse results than other states, and that is exactly what we see.
What is Governor Brown's plan? Push even more criminals onto the streets.
Devlin Barrett has this article in the WSJ, with the above headline in the print version (slightly different online).
Murders rose 6.2% in the first half of 2015, according to preliminary crime data released Tuesday by the Federal Bureau of Investigation, figures that are likely to further fuel the current political debates about crime, policing and sentencing.
Violent crime overall increased 1.7%, the FBI found, while property crimes decreased 4.2%, compared with the first six months of 2014. Police chiefs from around the country had warned about an apparent surge in recent months.
Bialik says, "For what it's worth, homicides are up -- though probably by less than what you've read." What's with the "probably"? Is he implying that most news media have exaggerated the increase? From what I have read, most media are fully complicit in the soft-pedaling.
"So-called justifiable homicides don't count toward the FBI definition." So-called? They are called that because they are that. Would you point at an oak tree and say it is a "so-called oak"?
Bialik discusses the problem of crimes being defined differently in different jurisdictions, which is indeed a huge problem in crime statistics. (It's even worse when you go international.) He talks about the NYPD's reporting of "shootings" and the FBI's UCR category of "aggravated assault." He quotes a criminologist saying (correctly), the latter is more deserving of confidence. Then he tosses in, "Unfortunately, the FBI doesn't separate assaults by firearm from other assaults for individual cities in the data it reports." Why unfortunate? If you need to choose between classifying assaults by harm caused and classifying them by weapon used, it seems to me that harm caused is by far the more important. Ideally, you could classify them both ways, but resources limit what you can do, so the ideal is rarely achieved.
The preliminary report examined five cities with particularly high murder rates -- Baltimore, Detroit, Milwaukee, New Orleans, and St. Louis -- and found these cities also had significantly lower incomes, higher poverty rates, higher unemployment, and falling populations than the national average.

"There are none so blind as those who will not see." These are also cities where the police are under severe attack.
A while back there was some research that indicated that sequential lineups -- where the witness looks at suspects or pictures one at a time -- were far better than simultaneous ones where the witness looks at a group at once. There was a rush to codify this preference into rigid requirements. Well, that may not be right. Bradley Fikes reports in the San Diego Union-Tribune on a study indicating, among other things, that "simultaneous lineups were, if anything, diagnostically superior to sequential lineups. These results suggest that recent reforms in the legal system, which were based on the results of older research, may need to be reevaluated."
Another important finding is that the witness's confidence at the first observation is an important indication of accuracy, much more so than the witness's demeanor at trial that juries must usually go on.
Jacob Gershman has this article in the WSJ.
The state supreme court vacated the decisions in favor of the murderers, but it did so on the narrow ground that the trial judge did not allow the prosecution sufficient time to gather evidence to rebut a large study submitted to support the claim. That means the case goes on.
Mr. Obama gets the maximum Four Pinocchios (reserved for "whoppers") for his December 1 statement in Paris, "I say this every time we've got one of these mass shootings: This just doesn't happen in other countries." Wow.
The President's other, more nuanced statements about the relative frequency of such incidents get the milder Two Pinocchio rating ("significant omissions and/or exaggerations"). To check the facts, Ms. Lee consults experts Adam Lankford and John Lott and gets very different answers.
Astute readers might notice how Lankford and Lott both compared the United States to grouped European countries, but their conclusions are vastly different. Lott says the rate is about the same, while Lankford says the rate is five times higher in the United States. How is this possible? The researchers are looking at different sets of years and different sets of countries. (Lott looked at Europe as a whole; Lankford at the European Union.) Lott uses a broader measure of mass shootings than Lankford does. Lankford looks at the number of shooters; Lott uses fatalities and shooting incidents. This is an example of how the data and definition can be adjusted to show different findings about mass shootings, even using a per capita rate.

Lots and lots of choices have to be made in setting up a study, many seemingly benign in themselves. If a person wants to reach a particular result, it is easy as pie to run the numbers 16 different ways, pick the way that best supports your agenda, and throw the others in the trash.
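As a toy illustration of how those choices move the answer: the figures below are entirely invented (they are not from Lankford's or Lott's work) and only show the arithmetic by which counting shooters versus fatalities, over different country groupings, can flip a per capita comparison.

```python
# Invented figures to show how definitional choices move a per
# capita comparison. None of these numbers are from actual studies.

def rate_per_million(events, population_millions):
    return events / population_millions

us_pop, eu_pop, europe_pop = 320, 510, 740  # rough populations, millions

# Same era, counted two different ways:
us_shooters, eu_shooters = 90, 30        # shooters, narrow definition, EU only
us_deaths, europe_deaths = 300, 650      # fatalities, broad definition, all Europe

narrow_ratio = (rate_per_million(us_shooters, us_pop)
                / rate_per_million(eu_shooters, eu_pop))
broad_ratio = (rate_per_million(us_deaths, us_pop)
               / rate_per_million(europe_deaths, europe_pop))

print(narrow_ratio)  # US looks several times higher
print(broad_ratio)   # US looks roughly comparable
```

Both ratios are computed per capita, yet one comparison makes the United States look like an outlier and the other makes it look ordinary, purely because of what was counted and over which population.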
This is why the viewpoint one-sidedness of American academia and the well-funded nonprofits is so very dangerous. The truth comes out much more clearly when there are people on both sides doing these kinds of studies, but academic conservatives are an endangered species, and those who do "come out" are targeted by neo-McCarthyists determined to achieve ideological purity.
Be very, very skeptical about what "studies show" and "experts say."
Generally, that has been a good thing. For example, the "wrongly executed" Roger Coleman and the "exonerated" Timothy Hennis were both proved guilty by conclusive DNA matches after the technology improved.
However, DNA testing is now getting so sensitive that it can pick up a person's DNA from a place he has never been or an object he has never touched by transfer from someone else. Ben Knight has this article at Yahoo News.
The problem, of course, is not in the science but in the interpretation. The answer to the rhetorical question of the caption is no. DNA testing cannot be too sensitive, but the results of ultrasensitive tests must be interpreted with great caution.
Cal. Gov. Jerry Brown last month tied the rash of destructive fires to carbon emissions. One small problem, say the scientists. There is no scientific basis for that connection.
University of Colorado climate change specialist Roger Pielke said Brown is engaging in "noble-cause corruption."

Sometimes?
Pielke said it is easier to make a political case for change using immediate and local threats, rather than those on a global scale, especially given the subtleties of climate change research, which features probabilities subject to wide margins of error and contradiction by other findings.
"That is the nature of politics," Pielke said, "but sometimes the science really has to matter."