Wednesday, September 16, 2015

Top 5 Junky Forensic 'Sciences,' or, 'Why are forensics under fire now?'

It's been said that any field with the word "science" appended to its name is "guaranteed thereby not to be a science." With few exceptions, most forensic sciences fall into that category.

There are a few hard sciences like toxicology and DNA. But even DNA has subjective elements, we're learning, when lab analysts interpret DNA mixtures.

I've found myself explaining to several different folks recently why so many forensic disciplines all of a sudden find themselves questioned, so thought I'd share that spiel with readers. The debate really took hold after 2009, when the National Academy of Sciences issued a major report titled "Strengthening Forensic Science in the United States: A Path Forward." That expert review called into question numerous forensic disciplines in a fundamental way, particularly undermining the scientific credentials of comparative forensic disciplines from fingerprints to tool marks.

The comparative forensics - where somebody sits with a microscope and compares two bullet casings, hair samples, bite marks, fingerprints, etc. - are mostly not fields developed through application of the scientific method. Indeed, many of them have little formal scientific underpinning at all. They're just things cops began doing at some point in history (principally post-Arthur Conan Doyle) to accuse people of crimes.

No one develops expertise comparing hair samples under a microscope, for example, unless they're paid by the state to try to match evidence to suspects in criminal cases (though they're not supposed to say "match"). It's not like there's an independent source of expertise defense attorneys can turn to in such instances - nobody does that work except other crime labs, whose analysts were probably all trained at the same FBI schools as the state's expert.

That's not to say that, being unscientific, these comparative disciplines are necessarily invalid. They're just more craft than science. Experienced, expert examiners can tell a lot about the evidence they look at. But it's at root a subjective, not a scientific process, regardless of the trappings. The NAS report laid that history bare.

Then there are other disciplines - like arson investigation and diagnoses of "shaken-baby syndrome" - where prior conclusions have been abandoned in light of more recent scientific developments. Texas' new and improved junk science writ makes this state an important site for litigating these issues over the next few years, so expect to hear about these topics more in the future. We're at the front end of a period when traditional forensics are being reevaluated, in many cases for the first time.

Here is Grits' list of top five junky forensic "sciences," all of which are either currently under scrutiny or predictably will be in the near future, with a few dishonorable mentions tacked on since five is awfully short for this list. They're in no particular order and represent my own opinion and no one else's. I could probably even be convinced to drop one or two off the list and add others (make your case in the comments). I offer the following up only as an off-the-cuff thought experiment, not a definitive account. With that said:

Bite Marks
Bite marks have been known to be on the junky end for a while, so they're only rarely used. However, prosecutors bring them in when they need that little extra push to get over the hump in a tough-to-prove case. Texas' review of bite-mark cases (aka "forensic odontology") kicks off when a committee of the Forensic Science Commission meets in Dallas today (Wednesday) to consider the issue.

Hair Microscopy
This field arguably is more valid than bite marks - a compliment akin to "prettier in a dress than Dennis Rodman" - but was rendered nearly obsolete by mitochondrial DNA testing, which is far more precise. Now, modern science and statistics have demonstrated that many analysts, particularly in older cases, routinely overstated the extent to which they could match suspects to evidence in court. (For instance, they can't say "match" or even estimate statistical probabilities, since that overstates what can really be known about individualization of evidence from even the most expert review.) The Texas Forensic Science Commission has begun reviewing old cases, mirroring a similar effort reviewing hair microscopy at the FBI. But the going is slow, made more difficult by problems getting transcripts from the appellate courts, and there's a significant number of these cases out there.

Shaken Baby Syndrome
The New York Times called it "A Diagnosis that Divides the Medical World." Biomechanical research has debunked many of the early claims, but proponents remain dug in. Emotions run so high whenever someone thinks a caregiver murdered a child that science can become lost in the shouting. The Washington Post published a major piece this spring examining the state of the debate. These cases aren't legion but neither are their numbers insignificant. And the defendants are disproportionately women.

Handwriting Analysis
Another science-free field whose validity has long been debated; you'd think this one might eventually go extinct altogether. Analysts' associations claim they are 95 percent accurate when they have four-page documents to compare, but who writes that much anymore? OTOH, when it comes to identifying forgers from signatures on checks: "Even in laboratory settings, there is no evidence they can do it."

Abel Assessment/Penile Plethysmograph
These gems are used particularly on the parole side: They show alleged sex offenders dirty pictures and measure their responses, in the case of a plethysmograph by attaching measuring devices to the penis. Various studies have estimated the error rate on the Abel Assessment at 35-48%. One study found "a 42 percent false-positive rate when non-molesters were tested."

Dishonorable mention:
  • Dog-Scent Lineups (defunct in Texas, last known uses in Florida, communist Cuba). Former Ft. Bend Sheriff's Deputy Keith Pikett's dogs supposedly performed scent lineups in many hundreds of criminal cases, but nobody's ever tracked them all down.
  • Comparative bullet-lead analysis (defunct). As it turned out, an Aggie helped kill it.
  • Arson (older cases - Grits readers will recall the problems with arson science raised in the Todd Willingham case, the FSC, and the State Fire Marshal's arson review). Modern, 21st-century arson investigation is much more science-based, derived from burning down hundreds of test buildings and gathering evidence. Under modern standards, arson investigators are also more likely to label a fire "inconclusive" than "arson"; many of the old indicators have been debunked but not always replaced.
  • Footwear and Tire Tracks 
Finally, as I publish this, I do so knowing that among the first comments will be defense attorneys suggesting Grits should have included proprietary breathalyzer software/equipment like the Intoxilyzer 5000, or some such, to which I'd reply, "I know virtually nothing about it, convince me." So if that was your first thought, get thee to the comment section and do so, with links, please. Otherwise, the above seem to me more like the sort of immediately emerging issues identified in 2009 by the National Academy of Sciences.


Unknown said...

Good article, it's about time they put some science in forensic science.

Anonymous said...

Let me give you another. Drug dogs and issue of "false positives". Dog searches offer probable cause for a warrant but the defense has no way of proving that a dog is unreliable. Dog handlers keep no records when the dog hits on a car, for example, but no dope is found. Yet they use their dogs to justify currency seizures in cars that have no drugs after the dog hits on the auto. How does one prove it was a false hit? Proving a negative is all but impossible.

What is also interesting is that it is my understanding that bomb dog handlers do have to document a false hit. Why shouldn't a similar procedure be required for drug dogs? :~)

Anonymous said...

Surprising to see that DPS hasn't mentioned certain members of the state legislature in their gang assessment. Then again, that's an internal operation.

Gaye Webb said...

Thanks, Grits. Enlightening. What has been your experience with or knowledge of blood spatter analysis forensics? Know of any cases where modern BSA has overturned or impacted prior convictions in any way? It's my understanding that it has advanced significantly over the last 20 years or so, and I'm wondering how often it has been challenged in the courts.

Gritsforbreakfast said...

Gaye, I haven't looked closely at blood spatter, but I know in some cases it's been called into question.

To the extent it has "advanced," though, pre-advancement testimony would definitely be called into question.

George said...

Another junk science is the use of polygraphs, especially in the probation and parole departments. Why do these departments continue to use a debunked "science" that purports to catch someone in a lie? The state legislature did pass a law that states a person cannot have their probation and parole revoked if they fail a polygraph test without corroborating evidence.

So why continue to use it? Seems like it's about money. Most treatment providers who administer the annual polygraphs to people undergoing sex offender treatment charge anywhere from $200.00 to $300.00 for these tests, that's each year. They also charge close to the same for the Abel/plethysmograph tests as well. There are other psychological tests that the clients pony up for as well.

It serves no purpose and should be abandoned along with the other "sciences" that were created either to try and convict someone of a crime or for a person or corporation to start a business, raking in money off the back of the disenfranchised. People with money can find ways to escape this form of persecution.

Anonymous said...

Toolmark analysis to link a fired projectile to other projectiles or to a specific firearm is a problem area: there are no real standards in the field, no accepted definitions for the terms used, no requirement that testifying technicians clarify terms like "consistent with" when testifying, no protocols for establishing error rates - it's all down to the subjective judgment of the individual examiner - usually someone employed by a law enforcement agency. This "technique" was discussed and criticized in "Strengthening Forensic Science." The toolmark "experts" need to be, er, shot down (in a metaphorical sense, I mean).

walt said...

Drug Dog False Hits.

The Texas Highway Patrol keeps records of every dog use in the field. The report records whether drugs were found prior to the search or whether the search was based on the on-scene officer's suspicion. Next, the dog handler must record whether the dog hits or not. Finally, the dog handler must report whether the dog's hit revealed drugs or not. The best part is that the handler must "guess" why the drug dog hit did not reveal drugs.

I found a report where the dog handler reported, "the dog is hitting on every location the trooper touched the car prior to the search."

I presented a paper at a TCDLA MCLE in Amarillo which included the forms used by THP.

Anonymous said...

I don't know specifically about DPS' procedures, but I know that a lot of time when handlers have to "guess" why a hit did not reveal drugs they claim it must be due to "residue" and therefore they don't count it as a false positive. The whole drug dog thing is a sham to manufacture probable cause. When used by officers who have already developed some suspicion, the accuracy rates are between 40 and 60 percent - or in other words, as accurate as a coin flip. When used at random on the general population, accuracy rates are in the teens. Why the courts continue to find that this is good enough to establish probable cause, I don't understand.
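The coin-flip complaint is really a base-rate problem: even a fairly sensitive dog produces mostly false alerts when few of the vehicles being sniffed actually contain drugs. Here's a minimal sketch of that arithmetic - the sensitivity and false-alert figures below are purely hypothetical assumptions for illustration, not numbers from any study:

```python
# Hypothetical illustration of why a dog "hit" can be wrong most of
# the time when the base rate of drugs in the sniffed population is low.

def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """Fraction of alerts that are true positives (Bayes' rule)."""
    true_alerts = base_rate * sensitivity
    false_alerts = (1 - base_rate) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Assume the dog alerts 90% of the time drugs are present
# and falsely alerts on 20% of clean cars.
sensitivity, fp_rate = 0.90, 0.20

# Officer already suspicious: say 1 in 3 cars actually has drugs.
print(round(positive_predictive_value(1/3, sensitivity, fp_rate), 2))   # 0.69

# Random traffic stops: say 1 in 30 cars has drugs.
print(round(positive_predictive_value(1/30, sensitivity, fp_rate), 2))  # 0.13
```

With these assumed numbers, roughly 7 in 10 alerts are correct when the officer already has grounds for suspicion, but only about 1 in 8 when sniffing at random - the same pattern as the accuracy figures quoted above.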

Anonymous said...

There was a study a few years ago at UC Davis. They had drug dogs and their handlers run through rooms with the dogs. I didn't go back and look at it, but from what I recall there were more than 200 hits even though there were no drugs at all. The hits were in locations where certain objects had been placed. Basically, the study indicated the hits were most likely due to handler cuing.

Anonymous said...

On the issue of dog scent tests:
Speaking as a scientist who performs method development, there is nothing conceptually different between a dog used by an operator to test for a substance and a complicated electronic instrument used by an operator to test for a substance. Both involve 1) a substance being detected, 2) a detection tool (dog/instrument), 3) an operator, 4) a test protocol, and 5) an interpretation procedure. The reliability of the entire process depends upon the reliability of each component. And none of the components will be 100% reliable.

On the false positive issue, it is bad scientific practice to use casework logs to determine false positive error rates. It is bad whether your tool is a dog or a $100,000 chemical instrument. False positive (and false negative) error rates can only be determined by testing known positive samples and known negative samples: 1) by testing pre-designed known positives and known negatives where the correct outcome is known beforehand; or 2) by performing an additional "gold-standard" test which is accepted as having an extremely high degree of sensitivity and reliability.
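The first approach - known-positive and known-negative panels - can be sketched in a few lines. This is a toy illustration with made-up numbers, not data from any real validation study:

```python
# Sketch of estimating false-positive and false-negative rates the
# proper way: run the detector (dog or instrument) on panels of KNOWN
# positives and KNOWN negatives, rather than mining casework logs.

def error_rates(alerts_on_known_positives, alerts_on_known_negatives):
    """Each list holds True for an alert, False for no alert."""
    fn = alerts_on_known_positives.count(False) / len(alerts_on_known_positives)
    fp = alerts_on_known_negatives.count(True) / len(alerts_on_known_negatives)
    return fp, fn

# Hypothetical validation run: 50 samples known to contain the
# substance and 50 known to be clean.
alerts_on_positives = [True] * 46 + [False] * 4   # 4 missed detections
alerts_on_negatives = [True] * 9 + [False] * 41   # 9 false alerts

fp, fn = error_rates(alerts_on_positives, alerts_on_negatives)
print(f"false positive rate: {fp:.0%}")  # false positive rate: 18%
print(f"false negative rate: {fn:.0%}")  # false negative rate: 8%
```

Note that casework logs can never supply the known-negative panel: if no drugs are found after a hit, nobody can say whether the sample was truly negative or the search simply missed them.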

Louis Akin said...

The list should have included the skill of blood pattern analysis, which is always passed off as a science. I intend to do an article on it one of these days.
Louis Akin

Thomas R. Griffith said...

Grits, since you asked (thanks for allowing others to chime in about 'Junky' Stuff, btw).

Regarding Junk used to obtain subsequent illegal convictions, one must take into consideration whether the arrest and so-called Investigation were performed utilizing similar Junk. While some attempted to lay blame for wrongful convictions at the feet of the crime victim(s) and their eyewitness identification(s) of suspects, it's the duty of Detectives to properly perform Line-Ups. When they are allowed to use rogue science and do so without it being recorded, you get the HPD's version of closing cases.

Enter The Live Show-Up. - Police rightfully arrest the Guilty & wrongfully the Non-Guilty alike, with built-in stages of immunity that allow this cycle to lead to false arrest(s) that lead to wrongful convictions 'especially' if the arrested is currently on probation. Guilty or Not - both types of arrest(s) are disposed of via the Texas TapOut with no one giving a rat's ass about the goofy, junky crap that the Detectives pulled prior to seeking charges (calling Bob over in INTAKE). Can you say Live Show-Up? The Detectives and those they work for consider their version of Identifying suspects 'scientific' but not anywhere close to Junky. Since the procedure isn't Digitally Documented (recorded), they actually get away with utilizing off-the-charts, in-your-face Junk-Line-Ups. Anyone knowing another term please share; until then, consider whether this qualifies as Junky.

*Two suspects are described as in their 20's.
*Two people in their 20's are arrested for being in a suspicious vehicle while on a Test drive.
*Two Detectives open up a crowded cell and point to 5 Inmates.
*One Detective takes each Inmate and places him/her in a line up against the wall.
*One Detective leads the line to a door where he tells them to follow instructions.
*One Detective goes through another door where the alleged crime victim awaits.
*The Line is told to enter through the door and turn with their backs to the wall.
*The alleged crime victim is instructed to try to Identify the suspect(s).
*He Positively Identifies one suspect out of the 5 Inmates.
*The Detective asks if he can exclude #5.
*He Identifies #5 as being in on it.
*One Detective seeks charges over in INTAKE.
*One Detective confronts the alleged crime victim about his original description and the description of the Inmate he previously Positively Identified as 'not' being close.
*Charges sought by the first Detective are accepted by Bob in INTAKE.
*The alleged crime victim is allowed to have a - "It's all coming back to me" moment where he describes the suspect(s) again but is noted to have 'not' described the person he just Picked-Out, where he is allowed to change the hair color and type.
*The conversation between the alleged crime victim and the Detective regarding his original description and the second description is noted in the Police Incident Report.
*The Inmates are taken back to the crowded cell as the Positively Identified suspect turned Inmate is charged, indicted and becomes a Defendant.
*The lawyer (paid and appointed alike) will not take time to seek the Police Incident Report, therefore, he/she will not notice the Falsified Positive Identification, the post-Identification conversation nor, the description(s) of those personally chosen to participate as 'fillers'.
*With the Judge, the ADA and the lawyer all accustomed to allowing cases to be disposed of via: the Texas TapOut, none of them will ever know the following: (in order to learn about this, one must pay for copies of the Police Incident Report before someone goes back and re-words it or, simply deletes shit). With mine costing $58.00, the taxpayer's tab is in the thousands per wronged.

Fillers & Suspects - Can you Positively Identify the two 20 yrs. old suspects?
#1 - 30 yrs. old
#2 - 17 yrs. old
#3 - 27 yrs. old
#4 - 20 yrs. old
#5 - 20 yrs. old

JUNKY CASE CLOSED, Take the Plea. Next.

PNG of Texas

Anonymous said...

The UC Davis study had some problems: 1) it only looked at false positive events, and not at false negative events; 2) some of the dogs were trained for drugs, some for explosives, and some for both drugs and explosives. The problem with that design is that for explosive detection, you would tolerate a very high rate of false positives in exchange for a very low rate of false negatives (better to be wrong than dead). However, for drug detection, where there is no immediate public safety issue, a high false positive rate would be less acceptable to most people.

So the study wasn't designed to answer questions about the true false positive/false negative error rate. It was designed to determine if subtle handler cues could impact the testing, which they did.

Anonymous said...

Outside of not understanding the subjectively interpretive sciences, there is also the issue of analysts and lab management simply changing data to fit their narrative -- be it to cater to the Prosecution for a "win" or to cover-up their own incompetence.

Without an independent, accountable oversight body with teeth (e.g., not the FSC, not DPS, not ASCLD/LAB), there's no imperative to enforce scientific methodology for those who wish to present to a jury... whatever they want.

Anonymous said...

So, anon 3:07, an accuracy rate in the teens is acceptable to establish probable cause? That shows how little respect you have for the Constitution.

You attack the UC Davis study - can you point to any valid independent studies that show anything different (not those done by groups that train and sell these dogs)?

Anonymous said...

War story!

Once moved 400 kilos of pure Colombian coke (wrapped in the traditional brick format) in the trunk of a clean vehicle, moving it from an aircraft to a hangar (about 1,000 yards). The next night (8 to 16 hours later) I stopped at the base gate where a military drug dog was working and asked that it be run over the car and the trunk interior. It didn't hit. Made me question the competency of the dogs ever since.

geebee2 said...

Scott Peterson is a good, famous example of junk dog-scent evidence (Laci was actually killed by serial killer Edward Wayne Edwards, who even left a signed confession on the internet).

Another example is bogus time-of-death estimates which underestimate uncertainty, see here (another Edwards murder for which someone else was convicted).

Anonymous said...

The UC Davis study seems valid enough to me. It shows that these dogs will frequently indicate a hit when no drugs are present. And, it demonstrates that these hits are often related to handler cueing. It seems perfectly valid when related to the question of whether a dog will hit on a car when no drugs are present. It doesn't need to show false negative to answer that question, does it?

Johntheoldguy said...

It's good to see fire investigation not topping the list anymore. The improvements in Texas can be attributed to the scientists on the Forensic Science Commission (FSC), and to State Fire Marshal Chris Connealy, who has turned Texas into the leader in bringing science into the picture, and enthusiastically embracing every one of the FSC recommendations after their report in the Willingham case. Wish Mr. Connealy luck in persuading his colleagues from other states to follow his lead.

Gritsforbreakfast said...

@John Lentini, I couldn't agree more about Chris Conneally, he's a fine man and has done a tremendous, courageous job. Especially compared to his predecessors, it's been an amazing transformation at the State Fire Marshal. They've gone from reflexively opposing new science to, under Chris, embracing and championing it. As a Texan, I'm really proud of him.