Forbes editor William Baldwin (no bleeding-heart liberal) declared in response to these data: "When it comes to science, judges are steeped in a long tradition of welcoming junk into the courtroom." It's a hard point to argue. When even fingerprint matches aren't 100% reliable, and some forensic techniques yield 10% error rates, quite a few innocent people may be accused by faulty forensics while actual offenders go free. Koppl's data about errors in the softer nether regions of forensic science reminded me of this recent assessment:
to judge by the most comprehensive study on the reliability of forensic evidence to date, the error rate is more than 10% in five categories of analysis, including fiber, paint and body fluids. (Meaning: When the expert says specimen X matches source Y, there's a 10% probability he's wrong.) DNA and fingerprints are more reliable but still not foolproof. The 1995 study, in the Journal of Forensic Sciences, looked at proficiency tests labs take to see whether their work is sound.
More recent studies have also shown problems. Though a 2005 study in the Journal of Criminal Law & Criminology suggests a fingerprint false-positive rate a bit below 1%, a widely read 2006 experiment shows an alarming 4% false-positive rate.
Yet the public sees errors as gross anomalies. Like the time the FBI wrongly linked an Oregon attorney named Brandon Mayfield to the 2004 Madrid commuter train bombing that killed 191 people. The FBI had claimed a 100% match between a fingerprint found at the scene and Mayfield, who was held for two weeks in federal custody. When the Spanish National Police identified the real perpetrator, an Algerian named Ouhnane Daoud, the FBI had to admit its mistake. Mayfield accepted a $2 million settlement from the government.
Another debacle, this time involving DNA testing: Josiah Sutton served four and a half years for rape after the Houston Crime Lab tied him to crime-scene DNA. The lab was later found to be rife with problems, including a leaky roof that let rainwater contaminate evidence. Sutton was proclaimed innocent in 2004 and awarded $118,000 in reparations the next year.
At the "Actual Innocence" conference in Plano last month [in April 2008], Bill Tilstone, former executive director of the National Forensic Science Technology Center, told the audience that most "pattern evidence" - handwriting analysis, shoe and tire print comparisons, etc. - has no research-based foundation at all. Much of forensic science is "soft" science, he said, that at best has not been - and perhaps cannot be - comprehensively tested for accuracy.

Given the subjectivity of much forensic analysis, IMO the key to minimizing forensic errors lies in redundancy, an idea Mr. Koppl also embraced.
He's on the right track, but I think Mr. Koppl hasn't quite found the best solution. The most efficient way to create redundancy in forensic science is to use the adversarial system. Instead of police sending evidence to three different labs, two is plenty if we simply provide funds for defense experts whenever government crime labs analyze evidence.
The core problem with the forensic system is monopoly. Once evidence goes to one lab, it is rarely examined by any other. That needs to change. Each jurisdiction should include several competing labs. Occasionally the same DNA evidence, for instance, could be sent to three different labs for analysis.
This procedure may seem wasteful, but such checks would actually save taxpayer money: extra tests are inexpensive compared to the cost of error, including the cost of incarcerating the wrongfully convicted.
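To see why, here's a back-of-the-envelope sketch of the expected-cost tradeoff. Every dollar figure below is a hypothetical assumption chosen purely for illustration; only the 10% error rate comes from the studies cited above.

```python
# Back-of-the-envelope cost comparison for redundant testing.
# All dollar figures are hypothetical assumptions, NOT data from
# the post; only the 10% error rate is cited above.
ERROR_RATE = 0.10     # high-end single-lab error rate cited above
TEST_COST = 1_000     # assumed cost of one additional lab test
ERROR_COST = 500_000  # assumed total cost of one wrongful conviction
                      # (incarceration, settlement, retrial, etc.)

# With one lab, the expected error cost per case:
one_lab = ERROR_RATE * ERROR_COST

# With a second independent lab, both must err for the mistake to
# survive, so the error rate falls to ERROR_RATE squared:
two_labs = ERROR_RATE ** 2 * ERROR_COST + TEST_COST

print(f"one lab:  ${one_lab:,.0f} expected cost per case")
print(f"two labs: ${two_labs:,.0f} expected cost per case")
```

Under these assumed figures the second test pays for itself many times over, and the qualitative point survives any reasonable choice of numbers so long as wrongful convictions cost far more than lab tests.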
As Koppl notes, such redundant checks promote accuracy and justice, and they save taxpayer money in the long run by reducing overall error. How much would error be reduced? Here's how the math works:
Say a method of forensic analysis yields a 10% error rate, to use an example from the high end. If a second analyst independently examines the evidence - and the two analysts' errors are independent of one another - the redundancy diminishes the probability of an undetected error from 1 in 10 to 1 in 100 (.1 x .1 = .01). In the case of fingerprints, if the error rate currently is 1%, allowing redundant defense analysis would reduce the possibility of error to just one one-hundredth of 1% (.01 x .01 = .0001).
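The arithmetic above can be sketched in a few lines of Python. The key assumption, as in the examples, is that the analysts' errors are independent; correlated errors (say, two labs using the same flawed protocol) would shrink less dramatically.

```python
# Probability that every one of n independent analysts makes the
# same undetected error, given each has individual error rate p.
def combined_error_rate(p: float, analysts: int = 2) -> float:
    return p ** analysts

# 10% single-analyst rate: two analysts -> 1 in 100
print(combined_error_rate(0.10))

# 1% fingerprint rate: two analysts -> 1 in 10,000
print(combined_error_rate(0.01))
```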
What's more, if the defense hires the second expert, we avoid the worry that vendors serving the same customer will tailor their results to that customer's agenda. Evidence would be more thoroughly vetted and more accurate results discovered.
Recently Sen. John Whitmire identified one of the problems confronting the much-maligned Houston PD crime lab as its lack of neutrality, opining that the forensic lab should be controlled by a "neutral" entity. But forensic science isn't neutral, no matter who performs it. Rather than attempting to create a single neutral entity, IMO it makes a lot more sense to use the existing structure within the adversarial system to mandate redundancies and improve accuracy.
UPDATE: The author of the Forbes article, Roger Koppl, reacts.
See related recent Grits posts:
- Are crime lab reports "testimonial" documents? SCOTUS will decide
- Many arson convictions based on invalid science
- Arson convictions may be next venue for innocence claims
- More from Texas Innocence Summit
- Innocence summit drew officials, opinion leaders from around state
- Washington Post explores frontiers of DNA science
- Proving innocence, solving cold cases frequently depends on DNA collection and preservation
- When DNA exonerations end, innocence problems will continue
- Beyond the iceberg's tip: Dallas to extend innocence investigations beyond DNA cases
- CSI-Houston would be dark comedy