

News Scan

5th Circuit: Injured Cop Can Sue BLM:  A unanimous Fifth Circuit decision announced last month will allow a Baton Rouge police officer, injured during a protest, to sue Black Lives Matter for provoking the confrontation that caused his injuries.  Joe Gyan of The Advocate reports that the officer, listed as John Doe in the lawsuit, was struck by a heavy object during a BLM protest over the 2016 police shooting of Alton Sterling; the blow knocked out his teeth, injured his jaw, and caused a brain injury.  A Baton Rouge police officer fatally shot Sterling during a struggle outside a convenience store after being summoned there.  The 37-year-old had been selling homemade CDs, and officers later recovered a loaded revolver from his pocket.  Because he had prior felony convictions, Sterling could not legally carry a gun.  The organizer of the protest, BLM member DeRay Mckesson of Baltimore, is the target of the lawsuit.  The Fifth Circuit decision overturned an earlier district court ruling which concluded that the BLM group was too loosely organized to be held accountable by a lawsuit.

Should Algorithms Decide Who's Low Risk?  Prisons and courts across the country have been using artificial intelligence to evaluate criminal offenders for nearly two decades.  In California, 49 of 58 counties and the state's corrections department use algorithmic risk assessment tools to make decisions on bail, sentencing, probation, and eligibility for early release.  Rachael Myrow of KQED reports on a recent study by the Partnership on AI, a group of Silicon Valley heavyweights and civil liberties groups, which found that algorithmic tools are "extremely approximate, extremely inaccurate," according to the group's research director.  While these tools are promoted as a means of assessing offenders without human bias, the data fed into the algorithms is entered by humans and may be incomplete or inaccurate, and the code itself may embed the biases of its programmers.  Some tools, such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used by the California Department of Corrections and Rehabilitation (CDCR) to assess prison inmates for early release, are proprietary, meaning the owners do not share the source code, so a tool's accuracy cannot be independently assessed.  In 2005, CDCR used a commonly employed static risk assessment tool to determine that child rapist John Gardner was eligible for parole as a low-risk offender.  In 2010, Gardner pleaded guilty to the attempted rape of 23-year-old Candice Moncayo, the rape and murder of 14-year-old Amber Dubois, and the rape and murder of 17-year-old Chelsea King.  While the Partnership on AI is concerned that these algorithms may harbor racial bias, the far more important concern is their obvious failure to accurately predict which criminals can be safely released back into society.
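To make the concern concrete, here is a minimal sketch of how a points-based static risk instrument of this kind can work.  The factors, weights, and score bands below are invented purely for illustration; they are not the actual COMPAS or CDCR scoring rules, which remain proprietary.

    # Toy points-based static risk score. All factors, weights, and thresholds
    # are hypothetical; real tools such as COMPAS keep theirs proprietary.
    from dataclasses import dataclass

    @dataclass
    class OffenderRecord:
        age_at_first_offense: int
        prior_felony_convictions: int
        prior_violent_convictions: int
        prior_supervision_violations: int

    def static_risk_score(r: OffenderRecord) -> int:
        """Sum weighted points from fixed ("static") criminal-history factors."""
        score = 0
        if r.age_at_first_offense < 20:
            score += 2
        score += min(r.prior_felony_convictions, 5)        # capped at 5 points
        score += 2 * min(r.prior_violent_convictions, 3)   # violence weighted more heavily
        score += r.prior_supervision_violations
        return score

    def risk_band(score: int) -> str:
        """Map the numeric score to the coarse label decision-makers actually see."""
        if score <= 2:
            return "low"
        if score <= 6:
            return "moderate"
        return "high"

    offender = OffenderRecord(
        age_at_first_offense=28,
        prior_felony_convictions=1,    # a single prior conviction on the record
        prior_violent_convictions=0,
        prior_supervision_violations=0,
    )
    print(risk_band(static_risk_score(offender)))  # prints "low"

The point of the sketch is that a sparse record of static factors produces a low score regardless of how dangerous the offender actually is, which is exactly the failure mode the Gardner case illustrates.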

3 Comments

At some point the judicial system is going to have to figure out how to open up these black box algorithms. People are being convicted based on DNA evidence that is analyzed using algorithms that defense counsel can't see. It makes it impossible to properly defend the case. This is another example. These black boxes have to be opened.

I concur as to the algorithms noted in the post. They should either be open or banned.

I wasn't aware that DNA-matching algorithms were not being disclosed, but at least there the court is dealing with an objective fact, so I would think that if there is an issue, another expert using another method could testify that it is not a match.

The DNA algorithms are generally used in cases of complex mixtures where conventional methods don't work. Check out this article from the Harvard Journal of Law & Technology. https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech275.pdf

