Wednesday, January 25, 2012

Blind administration would improve accuracy of forensics

The Economist argues that even the most powerful forensic tools, including DNA, can be tainted by "cognitive bias" when scientists are given too much "contextual information" about the case, citing a study in which DNA analysts unfamiliar with case details were less likely to find a match than the original examiners who knew those details. The magazine grants that:
one example does not prove the existence of a systematic problem. But it does point to a sloppy approach to science. According to Norah Rudin, a forensic-DNA consultant in Mountain View, California, forensic scientists are beginning to accept that cognitive bias exists, but there is still a lot of resistance to the idea, because examiners take the criticism personally and feel they are being accused of doing bad science. According to Dr Rudin, the attitude that cognitive bias can somehow be willed away, by education, training or good intentions, is still pervasive.
Medical researchers, by contrast, take great care to make drug trials “blind”, so that neither the patient nor the administering doctor knows who is receiving the drug being tested, and who is getting a control drug or placebo. When someone’s freedom—and, in an American context, possibly his life, as well—is at stake, it surely behooves forensic-science laboratories to take precautions that are equally strong.
Blind administration turned out to be a key reform for eyewitness identification, and your correspondent has long believed the same approach is justified in other forensic disciplines. Why does a DNA analyst need to know case details before deciding whether two samples match? Not only is that information irrelevant to the analysis, it may actually reduce its accuracy.
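To make the idea concrete, here is a minimal sketch, in Python, of how blind administration might work at evidence intake. It is purely hypothetical: the BlindIntake class, its methods, and the sample items are illustrative inventions, not any real lab information system. An intake coordinator keeps the mapping from cases to randomly assigned control numbers; the bench analyst sees only the control number and the items themselves, and the mapping is consulted only after the analyst's conclusions are documented.

```python
import uuid


class BlindIntake:
    """Hypothetical intake coordinator: holds the case-to-control-number
    mapping so the bench analyst never sees case details."""

    def __init__(self):
        self._key = {}  # control number -> (case id, case context)

    def log_evidence(self, case_id, case_context, items):
        """Assign a random control number; only the control number and the
        physical items go to the analyst."""
        control_no = uuid.uuid4().hex[:8].upper()
        self._key[control_no] = (case_id, case_context)
        return control_no, items

    def unmask(self, control_no):
        """Reveal case details only after the analyst's conclusions are
        documented (e.g., to attach the report to the case file)."""
        return self._key[control_no]


def analyze(control_no, items):
    """The analyst's view: a control number and samples, nothing else.
    (Placeholder for the actual screening/comparison work.)"""
    print(f"Analyzing {len(items)} item(s) under control number {control_no}")
    return {"control_no": control_no, "conclusion": "documented"}


if __name__ == "__main__":
    intake = BlindIntake()
    control_no, items = intake.log_evidence(
        case_id="2012-00123",                        # hypothetical case number
        case_context="details withheld from analyst",
        items=["swab A", "victim's underwear"],
    )
    report = analyze(control_no, items)              # blind analysis
    case_id, _context = intake.unmask(control_no)    # unmasked only afterward
    print(f"Conclusions under {control_no} attach to case {case_id}")
```

The point of the sketch is the separation of roles: the person who knows the case context is not the person performing the comparison, which is essentially the screener-versus-analyst split debated in the comments below.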

30 comments:

Anonymous said...

For sexual assault kits, our crime lab analysts read the physician's report (including the statements given by the victim) and the initial police reports, and consult with the primary investigator before performing a single test on the evidence inside the kit.

We've (the analysts) suggested a more "blind" approach, but our supervisors state that we need to have this information so that we know which items to analyze in the sexual assault kit, or whether we need to analyze anything at all, based on the credibility of the written reports.

But I'm just a blissful analyst.

Am I allowed to claim ignorance while providing expert witness testimony?

Gritsforbreakfast said...

Just to ask: How many different items are there to test in a sexual assault kit? I didn't realize there were multiple parts. Can you give an example of why you might test one part of the kit and not another?

And is it your call whether there's a need to "analyze anything at all," or is that the investigators' decision? I understand, say, if the issue isn't whether they had sex but whether there was consent, there's not necessarily a need for testing. But it seems like that's a conclusion the investigator could/should reach before the evidence gets sent off for testing.

I have to say, it's a bit distressing to imagine that supervisors are pressuring DNA analysts to adjust their findings in any way "based on the credibility of the written reports." If the DNA is truly a "match," or not, it seems like that conclusion should stand on its own independent of any contextual variables.

Prison Doc said...

Having collected numerous sexual assault kit samples, I cannot understand why there is a need for the analyst to know ANYTHING about the assault details. As you correctly note, "blinded" studies are the norm in almost all medical testing--even in the hospital--so it is pretty hard for me not to assign a sinister motive to anyone who is pushing the opposite approach.

If I'm wrong, I hope another analyst will step up and offer an explanation.

Anonymous said...

Context is key. A Sexual Assault Nurse Examiner collects certain evidence based on the allegations presented. In my experience, labial swabs may be collected in the event of penile penetration with possible ejaculation. They may also be collected if there was oral contact to the vaginal area, which would mean that the swabs should be tested for saliva rather than screened for semen. Other items of evidence can likewise call for different testing. There is a difference between screening evidence for potential biological material and performing DNA analysis. When screening, it can sometimes be beneficial to know the circumstances in order to ascertain what tests should be run. The evidence is finite and should be preserved whenever possible, so running needless screening tests just to look for material that may or may not be there is not the most effective practice. Additionally, as Grits has referenced in the post preceding this topic, laboratory resources are finite and DNA analyses are expensive per test. A forensic examiner must utilize all of the tools available to them to determine the best course of action to take.

Let me provide an example of how factual knowledge of the scene has aided in investigations. I was asked by an investigator to analyze clothing removed from the deceased in a homicide that occurred nearly 20 years prior. The victim had been stabbed numerous times and nearly decapitated. The homicide detective was requesting that I analyze the victim's clothing for blood. As one can imagine with a victim who had been stabbed numerous times, the clothing was inundated with blood. Finding the victim's blood on her own clothing would not be probative. What would have been beneficial was finding DNA foreign to the victim so that it could be uploaded into the Combined DNA Index System for searching. I requested that the homicide detective also produce the crime scene photos. Without knowing how the victim was found, where the stab wounds were, and that the victim appeared not to have been moved, I likely would not have keyed in on a small bloodstain on the victim's back. This bloodstain turned out to be a mixture of the victim and an unknown male, whom we were later able to identify.

The development and comparison of a DNA profile is a separate issue from those that have been raised in the comments section.

Scientists should always strive to maintain their impartiality in the investigation and prosecution of a case. However, not utilizing all of the information at hand would be a massive disservice to victims, to the wrongly accused suspects we exonerate, and to the public at large.

The Comedian said...

Anon 02:21, now I understand why crime labs are so backed up. In my own work, I frequently have occasion to read the various types of police reports and witness statements to which you referred. They are often quite lengthy and time-consuming to review. In addition, if you can accurately judge the credibility of the reports just by reading them, you are in the wrong line of work.

It sounds like by the time you guys are finished reading all that paperwork, there may not be much time left for the actual testing!

Anonymous said...

GFB-

Anon 2:21 here.

I run the risk of giving away where I work -- but per our protocols, sexual assault kits are automatically submitted from the hospital. They could contain (depending on the account of the alleged victim): 1) swabs (x4): vaginal, anal, and/or oral; 2) smears (duplicates) created from the swabs; 3) a buccal swab from the victim (i.e., a DNA standard); 4) pubic hair combings; 5) a pubic hair sample (standard from the victim); 6) an oral wash; 7) the victim's underwear; 8) any other swabbing of biological material that the alleged attacker may have left.

The investigator contacts the forensic analyst and discusses what needs testing based on reports and/or interviews that the investigator may have had with the victim and suspects. We (the forensic analysts) are supposed to "consult" with the investigator as to what the results of the tests would provide in terms of a lab report and/or expert witness testimony (which, for the uninitiated, can be two almost entirely different and opposing presentations). And as you stated, there is a fine line between "sexual assault" and "consensual sex." There is also an evidentiary fine line between "assault without a condom" and "assault with the suspect wearing a condom." Each scenario results in either "testing requested" or "never mind."

Although it's a complex situation tied to the accounts or interviews of the victims and suspects, the general idea is to test only as much evidence as is needed to produce a lab report that a DA can use for plea bargaining. More testing is performed if a plea bargain cannot be reached, in order to strengthen the analyst's expert testimony. And, as we know, most of the time an analyst won't be called to give testimony.

As our supervisors often reiterate, the defense attorney can have any remaining untested items analyzed. That is, if they have the time, money, and access to experts, or if they actually know the items exist.

Soronel Haetir said...

Grits,

I'm not in the field, of course, but my understanding is that an examination kit may include such things as any hairs that were found, multiple swabs from various locations (in particular any location that might be a bite injury), fingernail scrapings, cuttings of clothing or even entire garments, possibly photos of the victim, along with just about anything else the evidence collector thought might be relevant. It's far easier to be over-inclusive during the collection phase; after all, if you don't collect such evidence when it's presented, you simply aren't going to have it later.

And the newer a kit is, the more such items I would expect it to contain, as people come to realize that lots of material besides semen might turn out to be valuable evidence.

rodsmith said...

I agree. For a lab tech to get anything more than a set of items to test, included in a sealed package with a BLIND CONTROL number, is just idiotic!

Anonymous said...

For expediency's sake, the laboratory is best served by knowing the nature of the assault. You would begin by testing the swabs from the orifice of penetration. Any additional information could bias the analysis. Reference samples and information from suspects should not be introduced until the analyst has completed testing and arrived at conclusions on the DNA profiles developed.

Only after the evidence conclusions are documented do you run reference samples for comparison, or compare against a database. There would be occasions, when a suspect is immediately apprehended, where evidence and reference samples from the suspect may be included with the victim's evidence.

I know some labs follow a similar protocol, while others do not.

Anonymous said...

The idea for forensic analysts is to test as little evidence as possible to create a lab report that the DA can use for plea bargaining. (Greater than 95% of all criminal cases are plea bargained -- often on the strength of a lab report.) If a plea bargain cannot be reached, more testing is done to strengthen the expert's testimony during trial (which, as we know, is a rarity).

According to my lab supervisors, any untested evidence remaining is the responsibility of the criminal defense attorney. That is, if the attorney has the resources, time, a rebuttal expert witness, or knowledge that untested items even exist.

A Texas PO said...

Maybe I'm just too far removed from forensics, but I fail to see how a lab can determine a DNA match where none exists solely based on the nature of the reports. Seems to me it'd be akin to declaring that Bevo is a cat when the evidence clearly shows he's a steer. Is DNA really that subjective?

Prison Doc said...

Lots of wordy responses here but to me they seem to explain why the examiner should be "blinded" rather than supporting the opposite approach.

With this thinking, it's no wonder there is so much bad forensics out there.

Anonymous said...

I also work in a lab, and I don't know any lab that works in the way described by 2:21 in terms of selecting what to analyze.

There is an appropriate and necessary amount of information about case circumstances that an analyst needs in order to work properly. This is acknowledged in the National Academy of Sciences report. For instance, in a sexual assault case the decision about whether to work clothing items may depend on whether there was one assailant or more than one, or whether the victim had consensual sexual relations within several days prior to the assault. If the victim is a sexually inactive 80-year-old woman, and the intimate swabs from the sexual assault kit test positive for seminal fluid, then testing of clothing or bedding is not called for.

That doesn't mean it shouldn't be collected and stored for possible testing. It is relatively cheap to collect evidence, and relatively expensive to test it. If the police are doing their job, they will cast a wide net in the collecting of evidence. But then there needs to be a very deliberate evaluation of what evidence to test. This depends upon the context of the evidence within the overall case. Often the analysts are important contributors to that evaluation.

The best way to prevent bias in laboratory testing is to make labs independent of police departments and district attorney's offices. If labs are truly independent, then there would be greatly reduced pressure to be biased. There are lots of labs that are truly independent, so it can be done.

Anonymous said...

Anon 5:01-

I disagree. The analyst does not need to know the medical or sexual history of the victim. Simply because...some victims lie. (I don't care if she is 80 years old and claims she's sexually inactive. She still could be banging the local Wal-Mart Greeter every Thursday after Matlock.)

There is no reason that an analyst needs to know any storyline to the evidence collected. That is the job of the Investigators or DAs. Sometimes their theories are wrong. (Ask Todd Willingham how that worked out for him.)

The Investigators and DAs should be smart enough to put together a possible scenario of the crime and have the analyst test the items needed for the DAs to present to a courtroom jury.

Once an analyst becomes invested in the story (and comes to believe that they are on the same side as the prosecutor), then contextual bias comes into play. (And, ask Duane Deaver how that worked out for him.)

An analyst's work should almost be robotic. Does evidence item X have seminal fluid on it? Yes or No. Is evidence item Y cocaine? Yes or No.

The reasons evidence items X and Y were collected for testing is (should be) irrelevant to the analyst.

I'll bet not a single analyst will tell you that they will refuse to test an item if there is no logic or story or written history associated with that item.

Anonymous said...

3:05:

5:01 here.

I can tell you from experience that no defense attorney worth his/her salt would want analysts to work in the way you describe.

I work in a public laboratory where most of our work is done initially in response to requests by police departments or district attorney's offices. But we also discuss the work that we did extensively with defense attorneys and with the staff of innocence projects.

Uniformly, all of those end users in the information stream want the input of laboratory staff on what would be the best approach to testing in specific cases, based upon specific case circumstances. Defense attorneys in particular. We are constantly asked by defense attorneys about the feasibility of testing items that were not initially tested by the police. We could simply say, "Sure, we can test that," but that doesn't provide them with the information they need in order to make the best choices for their client. They want the input of laboratory staff in evaluating the potential meaningfulness of testing before the testing begins. In order to perform that function, laboratory staff need to know the context of the evidence.

Again my experience is that when defense attorneys are confident that a laboratory is operating independently from the investigating agencies, they have no problem approaching the laboratory staff with those sorts of questions.

The purpose of the laboratory is not just to provide data. It is also to provide scientific expertise about the data to the folks who have to make decisions. They want that expertise to be fair and impartial, but they don't want it blind or theoretical. They want the expertise applied to the specific case circumstances they are concerned with at that moment in time.

Gritsforbreakfast said...

"all of those end users in the information stream want the input of laboratory staff on what would be the best approach to testing in specific cases, based upon specific case circumstances."

But does that mean the same person who provides input on evaluating evidence must perform the actual tests? Why not separate those functions?

Anonymous said...

Anon 3:05 here.

GFB-
You just described the difference between a technician in the lab and the supervisor of the lab. Sometimes they are one and the same, which is the problem.

Anon 8:49-
You stated "best approach" and "feasible" and "scientific expertise about the data". Could you mean "sensitivity of the test", "selectivity of the test", "certainty of the test", "reliability of the test", "limitations of the test" as they are performed in the (your) lab?

Of course, these are important inputs that a forensic scientist should know and communicate to attorneys before testing items.

But this is not "contextual information".

Contextual bias arises when an attorney (either defense or prosecution) presents the story to the analyst before testing and the analyst, most often unintentionally, fits the conclusions of the tests to the attorney's hypothesis.

Analysts aren't given the freedom to try to disprove the attorney's hypothesis.

Let me ask you this...Do your annual proficiency tests come with case-related background information? If "No", would the answers for your proficiency test be different or the same, if you were given case-related information?

Anonymous said...

5:01 here.

Sure. Absolutely, you could have different individuals give that input.

I am a strong advocate for defense experts reviewing results and doing retesting of evidence. The more that happens, the better the system becomes.

In DNA there is a lot of defense expert review and retesting. It is so common that every analyst starts a case with the knowledge that the case may be reviewed and retested. That is a good thing for quality assurance in general, but also specifically for the avoidance of bias.

In some disciplines, like firearms and toolmarks, there is virtually no review and retesting by defense experts.

I always advise defense attorneys to review and retest. But they make their own decisions based upon what is going to be in the best interests of their clients, not based on what is going to improve the system. Sometimes it is in the best interest of the client to attack the credibility of the work that was done, instead of doing retesting that may confirm it.

But the fact of the matter is that analysts will often need to have some appropriate level of case information in order to effectively provide the service that is expected of them. When an analyst reviews case documentation, it is that information that they are looking for.

If a bloody shirt is screened for blood, one of the decisions that the analyst needs to make is how many stains to take for DNA testing. If there are 50 stains on the shirt, anywhere from 1 to 50 could be taken, depending on the case circumstances. Those circumstances may include: Is it the victim's shirt or the suspect's shirt? Were there multiple victims or a single victim? Were there multiple suspects or a single suspect? If it is a single-bleeder case, then the sample selection is relatively simple. If there are multiple bleeders, it could get much more complicated and much more expensive.

Gritsforbreakfast said...

I'm not talking about defense experts, 1:00. I mean in YOUR lab, shouldn't y'all separate those parsing evidence with cops and prosecutors from scientists physically performing the in-lab tests?

Separating out those job functions would allow for "blind" testing and still let labs perform that advisory role.

Anonymous said...

Sure, you could. But it would add to the expense and it would not be effective. Ineffective solutions aren't really solutions.

Anonymous said...

Anon 1:00/2:30-

Ineffective how?

As questions which would influence your protocol, you state, "...Is it the victim's shirt or the suspect's shirt? Were there multiple victims or a single victim? Were there multiple suspects or a single suspect?..."

The analyst does not know these answers, and the investigator may be only guessing at the answers for his/her hypothesis of what happened. The scientific protocol remains the same (or should).

I would hope all 50 stains are collected and tested for DNA. I don't quite understand why guessing at the answers to the case circumstances would influence the number of stains to collect and test.

Anonymous said...

Being a scientist, I've found this an interesting discussion thread. But I'm a bit bothered by the underlying presumption that good science is defined by blind testing. It should be recognized that blind testing as a standard of practice is really specific to the medical community and is a reaction to the placebo effect in the testing of drugs. In other types of science blinded testing protocols are not routinely employed, and are not considered required for the production of reliable scientific data and conclusions. I would be surprised if any Nobel prize winning science research used blind testing protocols. FWIW.

Anonymous said...

4:55-

Considering the consequences of getting a wrong answer (loss of liberty, or execution), the difference is between "good science" and "better science" (if not the "BEST science"). And most evidence is destroyed during the testing process, meaning retesting may not be an option. There may be only one shot at getting the correct answer. It should be the BEST shot.

And, the adversarial process automatically places the analyst as either on the side of the prosecution or on the side of the defendant, depending on who the analyst is providing services for. This necessarily assigns motive to the analyst.

The "blind" analyst has motive removed (or at least reduced).

Anonymous said...

to 5:01-
You have some very interesting, if not curious, remarks for a forensic lab analyst.

"...I always advise defense attorney to review and retest..."

Do you advise Prosecutors to review and retest also, perchance the lab (or you) got the wrong answer and are convicting the wrong suspect? How often does re-testing occur, for quality control purposes?

"...They [the folks who have to make decisions] want the expertise applied to the specific case circumstances they are concerned with at that moment in time..."

So, you're not actually independent, but are for one or the other side, depending on who is paying for your services?

Gritsforbreakfast said...

How would it add to the expense, 2:30? The same number of analysts would be doing the same amount of work, just with more specialization. I don't get that argument.

4:55, the cognitive bias demonstrable in forensics justifies the same precautions as the equally well documented placebo effect, for exactly the same reasons. True, maybe it's not necessary in every branch of science, but where documented bias problems exist and can be rectified through blind testing, I don't see any good reason, as Prison Doc says, not to do it.

Anonymous said...

GFB says:

"...where documented bias problems exist and can be rectified through blind testing..."

I would absolutely agree with both parts of that statement, because it is based on a) a demonstrable problem, and b) a demonstrably effective solution to the problem.

The difficulty I have is with the study that was the basis for the Economist article. There have been a number of critiques of that study. The bottom line is, the study was poorly designed, had an insufficient sample size, did not adequately control for differences in interpretation due to inter-laboratory differences in policy and procedure, and did not contain both experimental and control groups.

The profile in question was a low level mixed profile. The 17 examiners that were sent the profiles produced a range of interpretations. But that was an expected result because interlaboratory studies done over the years with these sorts of crappy samples have repeatedly shown that.

What was notable in the study was that the original interpretation of the case work laboratory fell within the range of interpretations returned by the 17 analysts who did not have access to the extraneous case information. It might have been the minority interpretation. But the minority interpretation was also seen in the group that did not have access to the case information.

Bottom line: the study failed to demonstrate that the availability of case information had a biasing effect.

So the authors are guilty of the first cardinal sin of researchers: they over-interpreted their results.

It also needs to be noted that one of the authors is the director of a state innocence project, and the over-interpretation is in the direction you would expect of someone who might be biased in favor of innocence claims.

It does seem to me that poorly designed studies ought not to be used for anything except designing better studies. To use them for developing public policy is not a particularly wise or prudent thing to do.

Gritsforbreakfast said...

9:02 - It's equally imprudent to ignore evidence of cognitive bias - which has been demonstrated in forensic settings beyond DNA by far more than this one small study - in order to pretend that all is well and you don't need to ever consider changing how you do things. This study extended a theory about cognitive bias to the DNA realm; to pretend it's the only study showing such bias in forensics is completely disingenuous. But then, you probably know that.

What information do you actually need to decide if two samples match? Extraneous information may be important for a screener to help cops and lawyers identify what should be tested, but for the person actually conducting the analysis, no one here has made a single, substantive argument why the extra info would improve lab decisions/outcomes, whereas it's quite clear why failure to use "blind" testing might bias it. There's just no argument for non-blind testing except "That's how we've always done it." And that's not really good enough, is it?

Anonymous said...

GFB -

I'm going to assume from your response that you have read the Dror & Hampikian paper, and that you agree with the assessment that the data and observations reported in the paper do not demonstrate that bias was a factor in the original case work.

If you have read the paper you know that the study was intended to ask the question, Does cognitive bias impact DNA interpretation? The reason the study was performed, according to the authors, was because there is a lack of "empirical research" (their words) that addresses the question.

So are you saying that the question related to DNA interpretation has already been settled, and there is no need for additional empirical research?

Or are you saying that the correct public policy has been determined, and empirical research be damned?

If it is the latter, then can we look forward to a day when you stop trotting out bogus, pseudo-scientific clap-trap in support of your policy objectives??

Anonymous said...

Wow, anon 9:14.

It's almost as if you took contextual information from the remainder of GFB's blog and came to the conclusion that, in regards to GFB's opinion on the short article printed in The Economist, GFB preaches "bogus, pseudo-scientific clap-trap in support of your [GFB's] policy objectives".

Classic.

If one were inclined to educate oneself, one could search Google Scholar for "forensic contextual bias." Numerous papers are available for critique.

Gritsforbreakfast said...

9:14, I'm not going to waste time arguing points I never made, nor defending every red herring you wish to attribute to me.

If you'll re-read more closely, what I said was that this study wasn't the first time researchers, including Dror but also others, have alleged cognitive bias in forensic settings. This is a known phenomenon. This study isn't the first you've heard of it if you work anywhere near the field; it's just the first time field experimentation has extended to DNA. Elsewhere, Dror has found errors by fingerprint examiners as well.

I'm curious, what do you think are my "policy objectives" on this beyond reducing lab errors in order to prevent false convictions? I fail to see what agenda blinding lab techs would promote other than accuracy, but clearly you think you know otherwise, so please, better inform me. What is my ulterior motive that's got you so animated?

It's not personal; get over yourself.