The biggest difference?
Users can now input a range for the possible number of DNA contributors. In prior versions, they had to guess how many people's DNA was in a sample, even when they didn't, and couldn't, know.
For example, say an analyst is confronted with a "touch DNA" sample taken from a swab on a doorknob. They know for sure there is DNA there, but how many contributors are represented in the sample? Two, three, six, fourteen? How many people have touched the doorknob since it was last cleaned? And when it was cleaned, was old DNA wiped away or just damaged or deformed by the cleaning product?
Under the old method, analysts had to guess at the number of contributors - say, "5" - then the software spat out a probability based on that assumption. The software did not adjust for the possibility they guessed wrong. Now it does.
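For readers unfamiliar with the statistic at issue: a likelihood ratio compares how well two competing hypotheses explain the evidence, and both sides of that ratio depend on the assumed number of contributors. Here's a toy sketch in Python - my own illustration, not STRmix's actual model, with all numbers invented - of how a fixed guess versus a range of guesses can produce different answers:

```python
# Illustrative sketch only -- not STRmix's actual model; all numbers invented.
# A likelihood ratio (LR) compares how well two hypotheses explain the evidence:
#   LR = P(evidence | suspect contributed) / P(evidence | suspect did not)
# Both probabilities depend on the assumed number of contributors, n.

def likelihood_ratio(p_given_suspect, p_given_random):
    """LR > 1 favors the prosecution hypothesis; LR < 1 favors the defense."""
    return p_given_suspect / p_given_random

# Toy per-n likelihoods, keyed by assumed contributor count:
p_suspect = {2: 0.010, 3: 0.004, 4: 0.001}
p_random = {2: 0.00001, 3: 0.00004, 4: 0.0002}

# Old approach: the analyst fixes a single n and reports one LR.
lr_fixed = likelihood_ratio(p_suspect[3], p_random[3])  # analyst guessed n = 3

# One way to handle uncertainty in n: average the likelihoods over a range
# of plausible contributor counts (equal weights, purely for illustration).
ns = [2, 3, 4]
lr_ranged = likelihood_ratio(
    sum(p_suspect[n] for n in ns) / len(ns),
    sum(p_random[n] for n in ns) / len(ns),
)

print(lr_fixed)   # the single-guess LR
print(lr_ranged)  # the range-aware LR, which can differ substantially
```

Even in this made-up example, the number reported to a jury changes depending on whether the analyst's guess is baked in or uncertainty about it is acknowledged.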
Which raises the question: what happens when one re-runs old tests where analysts guessed the number of contributors? According to the press release, users can use the new software to "calculate multiple" likelihood ratios for old samples based on "multiple reference inputs." So which "likelihood ratio" should courts rely upon if there are multiple choices? How are judicial gatekeepers supposed to evaluate a case when the original likelihood ratio, testified to in court based on this software, is now either deemed wrong or just one of several numbers on offer?
It seems inevitable that the software in those earlier cases overstated the probability that any given DNA belonged to a defendant. In one notable example, STR-Mix's old software accused a defendant while another, competing black-box service, TrueAllele, excluded the defendant as a suspect. One wonders, if they re-ran the test with the new software, using a range of possible contributors to the mixture, might some of STR-Mix's "likelihood ratios" now agree with TrueAllele's exclusion?
Grits does not believe any black-box software whose inner workings are not publicly available for peer review by opposing experts - right down to the all-important coding language - should be used in court to interpret DNA mixtures. Too many well-meaning people keep getting the math wrong.
See related Grits posts:
- Courts punt on forensics surrounding DNA mixtures
- A reluctant scoop: Changing interpretations of DNA mixtures vex legal system
- Labs must correct wrong DNA mixture analyses, learn when to recognize 'crap'
- Resources on DNA mixtures
- The challenge of notifying defendants in large-scale forensic error cases
- DNA mixture SNAFU a mess but don't expect 'deluge' of innocence claims
"Which raises the question, what happens when one re-runs old tests where analysts guessed the number of contributors?"
This same question could be asked by ANY of the forensics disciplines, most notably hair comparison analysis, arson/fire science investigation, bite mark, tool mark, ballistics, fingerprint analysis, etc.
But more importantly, WHO is going to re-run old tests? WHO is going to pay for the efforts to reassess old data? Certainly not the lab analyst.
This is not just a DNA analysis problem. This is an ethical/moral dilemma within the forensics community. Far too many forensic analysts are of the mindset that their past work product is unquestionable. Once a lab report is completed, or once a court case is concluded, the "finality" of the case seals its infallibility. They will not revisit old evidence or dare suggest that their prior analysis has problems.

And a forensic analyst certainly can't question a previous analyst's conclusions, especially if they were both employed at the same lab. If they are both following the same protocols, or were trained by the same supervisor, then all results should be essentially identical, right? They don't worry about reproducibility. Any discrepancy can be dismissed by pointing to the huge number of variables that exist when analyzing evidence. Any questioning of their conclusions can be whitewashed with the plausible deniability of "nothing is perfect"...a statement they will not admit to in front of a defense attorney or jury before the case concludes.
[If you've noticed recently, the phrase "scientific certainty" has become verboten because attorneys have only recently discovered that it is essentially meaningless.]
The Black-Box description not only applies to the unscrutinized patented software, it also applies to the inner thought process of each individual forensic scientist. Who knows exactly what data the forensic analyst was using when the evidence was being tested? They will tell you that you'll just have to trust them that they got the right answer.
Grits, you're asking the right questions, but they are disappearing into the ether. Accountability faded long ago.
With modern forensics, truth is sacrificed for expedience. Lab tests will always confirm the "gut feeling" of the investigating officer or the lab won't be hired by the LEOs to test evidence any more. Truth be damned and have no regard for the unfortunate cretin who wrongfully stands accused, forensics serves to support the findings of the investigator, not discover the perpetrator.
I don't agree with that, Steven, but I do think that few analysts understand the math behind DNA mixture analysis and we aren't seeing "experts" testify about these STR-Mix results so much as stenographers. They run the sample and record the "result," but keep running into trouble trying to describe mathematically what it means.
@Alex, often it takes a while before a monied defendant gets accused and challenges this stuff, but eventually it happens. It won't be a systemic thing where the crime labs all change at once out of the goodness of their hearts, but one by one, the flawed forensics are getting challenged.
Virtually every analysis in science includes computer-based components. So I am wondering what makes the software code for the statistical calculations underlying DNA interpretation more special and of greater concern than, say, the code underlying calculation of retention times that are the basis of drug identification in HPLC analysis and gas chromatography, or the code used in calculators and spreadsheets to calculate the mean and standard deviation of a set of measurements. For that matter, there is extensive software code underlying the capture and rendering of the voluminous digital photograph and video files that are presented at trials, yet there is apparently no problem with the accuracy of those proprietary algorithms, and no problem with the fact that crime scene technicians have no understanding of what those algorithms do. Even accepting that DNA evidence is of special importance, it is not clear why the software for interpreting a DNA profile is more important than the software used to run the capillary electrophoresis instrument that generated the raw data file, or the genotyping software that interpreted the raw instrument data to produce the DNA profile, or the calibration software that was used to determine that the pipette used at the bench to extract and process the evidence sample was working properly.
@8:50, the answer is that the math underlying DNA-mixture evidence is in dispute and errors have been made, while math underlying drug identification in HPLC analysis and gas chromatography is well established. See the links at the end of the post for more detail.
So no one said the software is more important on DNA mixtures than on other types of forensics. However, the underlying science is much more in dispute, and the math is buried in code.
Grits for Breakfast makes a number of statements that deserve correction, but we focus only on the most obvious.
The new capabilities of the software allow an analyst to specify two possible numbers of contributors, say 3 or 4. Previously an analyst had to specify one number. However, it is unlikely that any number would change downwards markedly if past cases were rerun. Empirical data suggest that if an incorrect number of contributors was used, then the number is more conservative than necessary. Hence the number of contributors is either correct or an unnecessarily conservative statistic results.
The code for STRmix™ is available and has been inspected three times by the same analyst. The terms of disclosure are available here. http://strmix.esr.cri.nz/assets/Uploads/Defence-Access-to-STRmix-April-2016.pdf
All the algorithms for STRmix™ are published in the public domain. A list of publications appears here. https://johnbuckleton.files.wordpress.com/2018/08/peer-reviewed-publications-for-strmix-ii1.pdf
We are happy to answer inquiries prior to posting if that helps accuracy in future Grits posts.
John Buckleton
@Dr. Buckleton, that'd be fine to the extent your comments are corrections.
I quoted your press release saying analysts could previously use only one input and now they can use "multiple." Here you say that by "multiple," you meant you can now put in two inputs instead of one. Okay. But that's what I said, quoting your press release, in fact. You repeating it isn't a correction.
Then, you tell us "it is unlikely that any number would change downwards markedly if past cases were rerun." Define "unlikely." Because one could also read that sentence to say, "it is POSSIBLE that any number would change downwards markedly if past cases were rerun." You're just downplaying the possibility. People selling a product often say problems are "unlikely." (The guy who sold me a dishwasher said the same thing last year and it just conked out three months after the warranty expired.)
Please explain: If the data on number of contributors is "unnecessary," why would you change your system to include unnecessary data? Either this upgrade made meaningful improvements or it didn't. In your press release, you implied it did. But if the data is "unnecessary," why bother?
Regardless, I'm surprised to see you say that the assumption of how many contributors there were is "unnecessary." That doesn't make sense logically given problems with low-quality and damaged DNA, allele dropout, stacking, etc. that your method is trying to solve. I'm not a scientist but I've heard experts brought in by the Forensic Science Commission explain the sources of the problems with mixture analysis, and one of them is guessing at the number of contributors. Clearly it's something your company felt was important enough to address with an upgrade.
Glad to learn the algorithms are in the "public domain." But about those articles, can you tell me how many of the authors are employees, contractors, have their research funded by, or otherwise have benefited economically from STR-Mix, the company? I ask because I don't see the names I most associate with DNA mixture expertise on the list - Budowle, Butler, Coble, etc. So I'm wondering how much of that is in-house work?
Finally, which one of those articles explains the NY episode (story here) in which y'all came up with a different result from TrueAllele, which supposedly uses the same statistical method to generate its estimates? If different proprietary methods are using the same math and getting opposite answers, that raises more questions than you're acknowledging here.
Thanks for commenting, though, even if you didn't contradict much I've said. If you keep going, we can clarify the debate more than we have so far.
Grits, thanks for your reply. I have a permission requirement at this end which I will start on and get back to you. Do you have an email I can use? John
One more thing, Dr. Buckleton, from an article you co-authored: "While the computation of the statistical estimate, itself, does not require assumptions about the number of contributors, an assumption of the number of contributors is necessary to help inform decisions about whether allele drop-out is likely at particular loci in the evidentiary sample. For example, if only four allelic peaks appear at a locus in a profile assumed to be from two donors, then it is reasonable to assume that allele drop-out has not occurred at that locus."
From the same article: "An actual number of contributors, not a minimum number, is needed, as a different number of contributors for the same DNA mixture will result in more or less allele drop-out to explain the observed profile. Consider, for example, a mixture profile with exactly 4 alleles at every locus, under the assumption of a two-person mixture there is no evidence of allele drop-out. However, if the assumption is that there are five contributors for the same mixture profile, then probability of allele drop-out is extremely high."
So that tells me assumptions about the number of contributors are more important to the process than you are implying in your comments above.
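The allele-counting logic in those quoted passages is simple enough to sketch in code. Here's a toy Python illustration - my own, not STRmix's actual implementation: each contributor carries two alleles per locus, so n contributors can show at most 2n distinct alleles, and when the assumed n implies more allele slots than observed peaks, drop-out (or allele sharing) must make up the difference.

```python
# Toy illustration of the contributor-count / drop-out relationship
# described in the quoted article -- not STRmix's actual code.

def max_alleles(n_contributors):
    """Each contributor has two alleles per locus."""
    return 2 * n_contributors

def dropout_plausible(observed_alleles, n_contributors):
    """True if the assumed contributor count leaves allele slots unexplained,
    so drop-out (or allele sharing) must account for the gap."""
    return observed_alleles < max_alleles(n_contributors)

# 4 allelic peaks at a locus, assumed two-person mixture:
# all four slots accounted for, no evidence of drop-out.
print(dropout_plausible(4, 2))  # False

# Same 4 peaks, but assume five contributors: 10 slots, only 4 peaks,
# so the probability of drop-out becomes extremely high.
print(dropout_plausible(4, 5))  # True
```

Which is exactly why the contributor-count guess matters: change the assumed n and the same four peaks flip from "no drop-out" to "drop-out almost certain."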
The email is gritforbreakfast@gmail.com.
If we discredit DNA evidence, we can see the suspect set free.
Grits
Dr. Buckleton did not say that the number of contributors is an unnecessary parameter. His words were: "...unnecessarily conservative statistic results."
To me the meaning is that the statistics given will most probably be correct, or else more conservative than necessary.
Be that as it may, I also feel that the technology and its mathematical underpinnings are not yet at a level where the trier-of-fact should be relying on it as a major piece of evidence in a case if there aren't other factors also pointing to the same result.
Probabilistic genotyping is definitely preferable to manual deconvolution of complex mixtures. But there should be a well-understood and well-articulated explanation for any differences in stats that occur due to differing algorithms so that the Court can make an informed decision.