
Tuesday, November 20, 2018

Texas voters think justice system rigged for the wealthy, NY Times reporting repeats forensics fail, pay to play in Harris County juvie appointments?, and other stories

Here are a few browser clearing odds and ends that merit Grits readers' attention:

State should end practice of letting untrained guards work in jails
Untrained jailers legally working on probation status at the privately managed Parker County Jail were involved in the violent death of an inmate. Excellent story, go read it. The Texas Legislature should close the loophole allowing jailers to work in county jails before they've received training. Jailers should have to fulfill training requirements before being put on the line, just as police officers must complete the police academy before being deployed in the field.

Texas voters think justice system rigged for the wealthy
Voters support bail reform, says a new poll, which also found that "90 percent of registered Texas voters are dissatisfied with the criminal justice system overall and 55 percent want a complete overhaul or major change." Further, "81 percent of Texas registered voters believe the wealthy enjoy substantially better outcomes in the criminal justice [system] than poor and working-class people."

Pay to play in Harris juvie appointments?
The feds are investigating the Harris County juvenile justice system, zeroing in on potential "pay to play" relationships between criminal defense lawyers receiving appointments and judges receiving their campaign contributions. Readers may recall that just two judges in Harris County account for 20 percent of all juvenile commitments to Texas youth prisons.

NY Times reporter repeats HouChron failures in ballistics coverage
This New York Times story on ballistics matching made many of the exact same errors as a Houston Chronicle story I'd criticized last month: failing to acknowledge the lack of standards or any scientific basis for the practice. I commented on the article in a brief Twitter thread.

Charting new paths for District Attorney offices
Progressive District Attorneys elected around the country in the last couple of election cycles are pioneering new approaches to reducing mass incarceration from inside their offices.

One third of deaths in Illinois prisons were preventable with adequate healthcare
After following the issue of deaths in custody for many years, your correspondent believes lots more people die in Texas prisons from preventable ailments due to inadequate healthcare than are killed in the state's execution chamber. But because the system controls all information about healthcare, it's a difficult assertion to prove. In Illinois, litigation pushed the issue to the point where a federal court commissioned an independent expert to assess the situation. The expert found that one-third of deaths in custody in that state were preventable with adequate healthcare. Here's the expert's report. IMO, a similar assessment in Texas would likely yield similar or worse results.

Friday, October 12, 2018

Reporters mustn't overstate forensics accuracy - ballistics edition

After ballistics evidence from a federal database led to arrests in linked, gun-related crimes, the Houston Chronicle last week ran a feature on how an ATF initiative is using ballistics evidence compiled in a national database to solve gun crimes.

But the story failed to acknowledge shortcomings of ballistics evidence and overstated the accuracy of firearms matching. In particular, it repeated law enforcement's claim that, "The ATF database allows firearms experts to match high-resolution photos of marks left on bullet casings after being fired. The guns' firing pins leave a mark unique to each gun, allowing investigators to connect casings fired at different shootings."

The "unique" part is unproven. And the phrase "match" is overstated. The seminal critique on this topic came from the 2009 National Academy of Sciences (NAS) report, "Strengthening Forensic Science: A Path Forward." Ballistics matching is one of the areas of non-scientific "forensic science" being challenged in the innocence-era wave of re-evaluation.

The NAS report specifically discussed the National Integrated Ballistics Information Network (NIBIN) database described in the Chronicle article. At the time, it was still under development. However, the NAS cautioned that, in the end, the computer still doesn't do the matching. That's in part because, as they also showed, there are no standards to match to; it's a subjective comparison performed by an examiner. With the NIBIN system, according to the NAS report:
the final determination of a match is always done through direct physical comparison of the evidence by a firearms examiner, not the computer analysis of images. The growth of these databases also permits examiners to become more familiar with similarities in striation patterns made by different firearms. Newer imaging techniques assess toolmarks using three-dimensional surface measurement data, taking into account the depth of the marks. But even with more training and experience using newer techniques, the decision of the toolmark examiner remains a subjective decision based on unarticulated standards and no statistical foundation for estimation of error rates. The National Academies report, Ballistic Imaging, while not claiming to be a definitive study on firearms identification, observed that, “The validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated.”
So there is no computer matching of ballistics; it's done by examiners who make a "subjective decision based on unarticulated standards and no statistical foundation for estimation of error rates." Knowing that, accusations based on ballistics evidence may suddenly appear less certain than the ATF portrayed them in the Chronicle's story.
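
To make the division of labor concrete, here is a minimal, purely illustrative sketch of what a correlation database can and cannot do. This is not the NIBIN software (whose internals the article doesn't describe); the class and functions below are hypothetical. The point, per the NAS passage above, is that the computer only ranks candidates, while the "identification" remains a human examiner's subjective call.

```python
# Illustrative toy only -- NOT the NIBIN system or any real ballistics software.
from dataclasses import dataclass

@dataclass
class CasingImage:
    case_id: str
    features: list[float]  # hypothetical numeric summary of a casing photograph

def similarity(a: CasingImage, b: CasingImage) -> float:
    # Toy correlation-style score; real systems use proprietary image metrics.
    return sum(x * y for x, y in zip(a.features, b.features))

def rank_candidates(query: CasingImage, database: list[CasingImage], top_k: int = 5):
    """Return the top-k most similar stored casings.

    This is the extent of what the computer does: it narrows the field.
    It does NOT declare a match."""
    scored = sorted(database, key=lambda c: similarity(query, c), reverse=True)
    return scored[:top_k]

# The final determination -- "these casings came from the same gun" -- is made by
# a human examiner comparing the physical evidence, with no articulated standard
# for how similar is similar enough (per the 2009 NAS report quoted above).
```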

For more background on critiques of ballistics and toolmark evidence (the former discipline is a subset of the latter), check out the 2009 NAS report. (Ctrl-F and search on "toolmark" to find the relevant passages.) That report changed the terms of the debate regarding traditional forensics like ballistics. And even though it's now nearly ten years old, with few exceptions (arson, hair and fiber analysis), forensics fields have taken at most baby steps to address the problems.

IMO, reporters should no longer write uncritically about disputed, comparison-based forensic evidence - even when police say it led to a "match" and make arrests based on the findings - without acknowledging that these are subjective comparisons, not scientific results. We've seen too many innocence cases featuring flawed forensics for the system to project that level of certainty.

Thursday, October 20, 2016

What error rate would justify excluding non-science-based forensics?

A recent report from the President's Council of Advisers on Science and Technology renewed concerns first raised by the National Academy of Sciences in 2009 about the lack of scientific foundation for many if not most commonly used forensics besides DNA and toxicology. Our friends at TDCAA shared on their user forum a link to the first federal District Court ruling citing the PCAST report, focused in this instance on ballistics matching.

The federal judge out of Illinois admitted ballistics evidence despite the PCAST report because he considered estimated false-positive rates relatively low. Here's the critical passage on that score:
PCAST did find one scientific study that met its requirements (in addition to a number of other studies with less predictive power as a result of their designs). That study, the “Ames Laboratory study,” found that toolmark analysis has a false positive rate between 1 in 66 and 1 in 46. Id. at 110. The next most reliable study, the “Miami-Dade Study” found a false positive rate between 1 in 49 and 1 in 21. Thus, the defendants’ submission places the error rate at roughly 2%. The Court finds that this is a sufficiently low error rate to weigh in favor of allowing expert testimony. See Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993) (“the court ordinarily should consider the known or potential rate of error”); United States v. Ashburn, 88 F. Supp. 3d 239, 246 (E.D.N.Y. 2015) (finding error rates between 0.9 and 1.5% to favor admission of expert testimony); United States v. Otero, 849 F. Supp. 2d 425, 434 (D.N.J. 2012) (error rate that “hovered around 1 to 2%” was “low” and supported admitting expert testimony). The other factors remain unchanged from this Court’s earlier ruling on toolmark analysis.
Using a 2 percent error rate could understate things: the false-positive rates from the studies the judge cited ranged from about 1.5 percent (1 in 66) to 4.8 percent (1 in 21), so the true figure could be more than twice his 2 percent estimate. Still, I'm not surprised that some judges might consider an error rate of 1.5 to 4.8 percent acceptable. And the judge is surely right that the PCAST report provides a new basis for cross-examining experts and reduces the level of certainty about their findings which experts can portray to juries, so that's a plus.
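
For readers who want to check the arithmetic, here's a quick sketch converting the "1 in N" false-positive figures quoted in the court's order into percentages; the study names and rates come from the passage above, and nothing else is assumed.

```python
# Convert the "1 in N" false-positive figures quoted in the court's order
# into percentages, to see the range the judge rounded to "roughly 2%".
rates = {
    "Ames Laboratory study, low end (1 in 66)": 1 / 66,
    "Ames Laboratory study, high end (1 in 46)": 1 / 46,
    "Miami-Dade study, low end (1 in 49)": 1 / 49,
    "Miami-Dade study, high end (1 in 21)": 1 / 21,
}
for label, rate in rates.items():
    print(f"{label}: {rate:.1%}")
# Prints roughly 1.5%, 2.2%, 2.0%, and 4.8% -- the high end is more than
# double the 2% figure the court treated as the error rate.
```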

OTOH, an erroneous ballistics match - and even though analysts can't use the word "match" anymore, that's how jurors will inevitably view such testimony - will loom large at trial and be highly prejudicial as evidence. So if you're the unlucky one in 49, or one in 21, or whatever the real number is of people falsely accused by a ballistics comparison, jurors are likely to go with the so-called "expert," and the defendant is basically screwed.

Grits has estimated before that two to three percent of criminal convictions involve actually innocent defendants - not too different from the error rate the judge considers allowable for ballistics. But that rate gets you to thousands of unexonerated people sitting in Texas prisons alone, with many more on probation, on parole, or having already completed their sentences. Given the volume of humanity that churns through the justice system, two or three percent is quite a significant number of people.
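
As a rough back-of-the-envelope illustration of why two or three percent adds up (the prison population below is an assumed round number for illustration, not a figure reported in this post):

```python
# Back-of-the-envelope only; the population count is an assumption for
# illustration, not data from this post.
assumed_tdcj_population = 145_000          # assumption: rough Texas prison headcount
wrongful_conviction_rate = (0.02, 0.03)    # the two-to-three percent estimate above

low = assumed_tdcj_population * wrongful_conviction_rate[0]
high = assumed_tdcj_population * wrongful_conviction_rate[1]
print(f"Roughly {low:,.0f} to {high:,.0f} actually innocent people in prison alone")
# ~2,900 to ~4,350 -- before counting probation, parole, or completed sentences.
```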

I'm curious as to Grits readers' opinions: How high a false positive rate is too high? Is forensic evidence that's 95 to 98 percent accurate good enough to secure a conviction "beyond a reasonable doubt" if that's the principal evidence against a defendant? At what error threshold should forensic evidence be excluded? Make your case in the comment section.

Thursday, October 29, 2015

Forensic fails, and other stories

As your correspondent prepares for today's Exoneration Commission hearing, here are a number of items which likely won't make it into individual blog posts but which merit Grits readers' attention:

No clear way to track down junky bite-mark cases
The Dallas News has a high-powered team - Brandi Grissom and Jennifer Emily - covering the Steven Chaney bite mark case and the Forensic Science Commission review of bite mark cases. They reported Monday that "Tracking down dozens — maybe hundreds — of other potentially innocent victims of junk science won’t be ... easy. There is no central repository of cases in which bite-mark testimony was key. There’s no database of dentists who testified about bite marks. And the cases are mostly decades old, and experts, defense lawyers and prosecutors have moved on or died."

Bad Ballistics?
The Texas Court of Criminal Appeals ordered an examination into overstated ballistics testimony from an expert in Arthur Brown, Jr.'s 22-year-old capital murder trial, in which he was prosecuted along with an already-executed accomplice for a quadruple killing in a botched drug transaction. Reported the Houston Chronicle:
Brown was scheduled for execution in October 2013 but received a stay to allow for forensic testing of evidence. An accomplice, Marion Dudley, 33, was executed in 2006.

On Wednesday, the appeals court judges acted on Brown's November 2014 appeal in which he asserted that Houston Police Department ballistics expert C.E. Anderson "testified falsely or in a materially misleading manner" in his case. Judges held that the claim met state standards warranting review.
See the court's order.

Paging Antonin Scalia: On the right to confront the boss of your accuser
The Court of Criminal Appeals also ruled that the requirements of the Sixth Amendment's "Confrontation Clause" may be met for "batch DNA testing" by a crime lab supervisor testifying based on computer printouts instead of the lab workers who conducted the analysis. Judge David Newell is of course correct that neither the CCA nor SCOTUS has ever "squarely answered this question." SCOTUS has said that sworn affidavits are insufficient, but not whether a supervisor can testify based on the work of her subordinates. But one certainly wonders what Antonin Scalia might say about it. It's hard for this non-attorney to understand why relying on data generated by non-testifying lab analysts is different from relying on an affidavit to which they did not testify.

Are black-box calculations problematic for DNA mixtures?
Next year, the Department of Public Safety and most other Texas labs will shift to "probabilistic genotyping" to analyze DNA mixture evidence, a method which supposedly is superior even to the adjusted calculations currently available. However, that method relies on proprietary programs with black-box systems for which the makers will not release their code - similar to the situation surrounding proprietary breathalyzer algorithms, which accuse defendants based on computer code that their attorneys and the court can neither see nor evaluate. See good discussions of the topic from Slate and Ars Technica. Moreover, as Grits reported earlier, because of the nature of the calculations, results based on probabilistic genotyping will be different every time - they're not replicable, in addition to not being transparent. The fact that the DWI equipment is still in use makes me think Texas courts would ultimately find a way to allow this sort of proprietary opacity. Even so, Grits continues to wonder whether probabilistic genotyping is the best tool for the job when it comes to courtroom testimony, given that other methods are available where the calculations are both transparent and replicable.
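
To illustrate why results vary from run to run, here is a generic Monte Carlo sketch; it is not any vendor's genotyping algorithm, just a toy probability estimated by random sampling, which is the general class of computation such programs rely on. Two runs on the same data give slightly different answers unless a random seed is fixed.

```python
# Generic illustration of why sampling-based estimates differ run to run.
# This is NOT a probabilistic genotyping program -- just a toy Monte Carlo
# estimate of a simple probability.
import random

def monte_carlo_estimate(n_samples: int = 100_000) -> float:
    """Estimate P(sum of two dice >= 10) by simulation."""
    hits = sum(1 for _ in range(n_samples)
               if random.randint(1, 6) + random.randint(1, 6) >= 10)
    return hits / n_samples

print(monte_carlo_estimate())  # e.g. 0.16702
print(monte_carlo_estimate())  # e.g. 0.16581 -- same inputs, different answer
# The exact value is 6/36 = 0.1666...; each run lands nearby but not on the
# same number, which is the sense in which such results are "not replicable."
```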

Finding housing with a felony record
There's a good article in the Houston Chronicle on the struggles poor people with a felony record face renting an apartment. Houston is closing a dangerous low-income apartment complex run by a slumlord the city has sued over "atrocious living conditions." Reporter Emma Henchliffe decided to pay attention to what happened to the ousted tenants, finding that the ones with a felony record had a terrible time locating new places that would take them. "Individual owners have the right to accept and reject applications as they choose, but the lack of alternatives for tenants who do not meet owners' standards causes many former offenders to end up at places like Crestmont," she wrote. "Some apartments where [one tenant] applied took her application fee and never got back to her." When you consider the volume of felons Texas produces - we release more than 70,000 prisoners from TDCJ every year - this is a much more important issue than one would think from the amount of coverage it receives. I was glad to see this article.

No surprise: 'Stingrays' really do intercept content
Turns out, contrary to public assertions by law enforcement, Stingrays really are able to intercept call content as well as cell-phone metadata - it's now been proven. This was obvious to anyone who thought about it: the devices trick your phone into routing through a fake cell-phone tower, so clearly they were intercepting the whole call, not just metadata. And experts have been telling us this for a while. Still, it's nice to see it confirmed; that's one less thing to argue about.

NOTE: A brief item about a murder case in Denton was removed from this roundup after a commenter informed me that the underlying news article improperly attributed a court action to DNA mixture protocols when the real issue was unrelated. Grits apologizes for the error and will perhaps revisit the topic when more accurate information is available.

Friday, August 21, 2015

Why won't authorities say how many in Twin Peaks massacre were shot by cops?

Let's not mince words: It's absurd and disingenuous that authorities in Waco haven't yet released results from ballistics reports to tell us how many of the nine deceased and 18 wounded at the Twin Peaks massacre were shot by bikers and how many by cops. And every court hearing or public pronouncement where they avoid divulging that surely-by-now-known information undermines their credibility and gives the impression that many if not most victims were killed by police officers, not outlaw bikers.

Next week, we'll finally see examining trials (which are only allowed in cases where the D.A. has not sought an indictment from a grand jury) for a few defendants, but even then it seems likely the state will skate by without answering the fundamental question of who shot whom.

At Above the Law, Texas Southern's Tamara Tabo explains how DA Abel Reyna and his former law partner, District Judge Matt Johnson (who issued a gag order in the case), have successfully tamped down media access to even basic information about what happened that day. It's a good update on where things stand and why; give it a read.

Tuesday, October 30, 2012

A&M statistics prof helped debunk junk science on ballistics

"It takes a village to take down bad forensic science," said A&M statistician Cliff Spiegelman, who "was an ardent opponent of a method of forensic testing called Comparative Bullet-Lead Analysis (CBLA), which partly through his work the FBI discredited in 2007. The abandoned technique, which used chemistry to link bullets from a crime scene to those owned by a suspect, was first used following the 1963 assassination of President John F. Kennedy." According to Texas A&M:
"There's always the chance of error," he said. "So, for instance, if a hair from the defendant is similar to one found at the crime scene, the issue is, what is the frequency of hairs that are similar in the general population? Ninety percent? Ten percent? One percent? The relevance of the evidence is based in part on how common it is. And that's a statistical issue."

Spiegelman's interest in statistical forensics was sparked in 2002, when, because of his expertise in applying statistics to chemistry, he was appointed to serve on a National Research Council (NRC) panel to study bullet-lead evidence. During the meetings, he would step out to inject himself in the stomach with a high dose of interferon as part of a difficult chemotherapy treatment.

Spiegelman's doctor gave him a 50 percent chance of living. Instead of quitting the panel to focus on his treatment, Spiegelman immersed himself in the work with the stark realization that it could be his last professional act.

The treatment was a success, and while he overcame the threat to his life, his passion for statistical forensics remained.

In 2008, Spiegelman was a co-recipient of a prestigious national award for leading a team that published a paper finding that forensic evidence used to rule out the presence of a second shooter in President Kennedy's slaying was fundamentally flawed. He shared the American Statistical Association's 2008 Statistics in Chemistry Award with Simon Sheather, professor and head of the Texas A&M Department of Statistics, William D. James, a researcher with the Texas A&M Center for Chemical Characterization and Analysis (CCCA), and three other co-authors.

The paper showed that the bullet fragments involved in the assassination were not nearly as rare as previously thought, and that the likelihood that the fragments did not all come from the same batch of bullets was also greater than previously thought.

Spiegelman is not a Kennedy assassination buff but "says the Kennedy case is the ultimate example: If the science could be wrong in a case with intense public interest and with the government having all the resources it needed, then it certainly could -- and has often been -- wrong in much more low-profile cases."

Thursday, October 20, 2011

Judges cautioned against reliance on overstated ballistics testimony

Recently, thanks to contributions from readers, Grits purchased a copy of the brand spanking new third edition of the "Reference Manual on Scientific Evidence," produced by the Federal Judicial Center and the National Research Council of the National Academies - the first update of the manual in more than a decade - and just finished reading the chapter on "Forensic Identification Expertise," which may end up providing fodder for multiple Grits posts.

Basically, the book expands on work by the NAS in their 2009 report on the science (or lack thereof) behind forensics commonly used in criminal courtrooms, creating a reference manual for judges that combines the latest scientific assessments with an analysis of the relevant case law governing each technique discussed. Very helpful, and enlightening.

This 1,000-page tome addresses myriad aspects of scientific evidence used in courtrooms, but I thought I'd start with a discussion of the section on "Firearms Identification Evidence," or "ballistics," which has been used as an identifier in court dating back to the 1920s. "In 1923, the Illinois Supreme Court wrote that positive identification of a bullet was not only impossible but 'preposterous.' Seven years later, however, that court did an about-face and became one of the first courts in the country to admit firearms identification evidence. The technique subsequently gained widespread judicial acceptance and was not seriously challenged until recently." (Citations omitted.)

The 2009 NAS report found that "Sufficient studies have not been done to understand the reliability and repeatability of the methods" for matching fired bullets or cartridges with the originating weapon, but the studies that have been done certainly give one pause. Tests in 1978 by the Crime Laboratory Proficiency Testing Program found a 5.3% error rate in one case and a 13.6% error rate in another. Experts evaluating those errors called them "particularly grave in nature." A third test by the same group found a 28.2% error rate.

Later proficiency testing produced lower error rates, but "Questions have arisen concerning the significance of these tests." Only firearms experts in accredited labs participated in testing, for starters, and they weren't "blind" studies; i.e., participants knew they were being tested. Some proficiency testing even reported zero errors, but in 2006, the US Supreme Court observed, "One could read these results to mean the technique is foolproof, but the results might instead indicate that the test was somewhat elementary."

Then, "In 2008, NAS published a report on computer imaging of bullets" that commented on the subject of identification, concluding that "Additional general research on the uniqueness and reproducibility of firearms-related toolmarks would have to be done if the basic premise of firearms identification are to be put on a more solid scientific footing." That report cautioned:
Conclusions drawn in firearms identification should not be made to imply the presence of a firm statistical basis when none has been demonstrated. Specifically, ... examiners tend to cast their assessments in bold absolutes, commonly asserting that a match can be made "to the exclusion of all other firearms in the world." Such comments cloak an inherently subjective assessment of a match with an extreme probability statement that has no firm grounding and unrealistically implies an error rate of zero. (emphasis in original)
In 1993, the US Supreme Court issued its pivotal Daubert ruling, which prescribed new evidentiary standards for scientific evidence, but it took years for those standards to be rigorously applied to ballistics evidence. "This changed in 2005 in United States v. Green where the court ruled that the expert could describe only the ways in which the casings were similar but not that the casings came from a specific weapon." A 2008 case said an expert could not testify that a bullet matched a weapon to a "reasonable scientific certainty," but was only permitted to say that it was "more likely than not" that a bullet came from a particular weapon.

As with other comparative forensic techniques, from fingerprints to bite marks to microscopic hair examination, essentially all that ballistics experts are really saying is, "After looking at them closely, I think these two things look alike." It strikes this writer that it's quite a big leap from "reasonable scientific certainty" to "more likely than not." Basically it's the leap from "beyond a reasonable doubt" to having "substantial doubt." I wonder how many past convictions hinged on testimony where experts used phrases like "reasonable scientific certainty" or "to the exclusion of all other firearms in the world"? And I wonder how many times those experts were simply wrong?