A fascinating development has emerged in the forensic testing of controlled substances. A "white box" study aimed at establishing error rates for this commonly practiced forensic discipline is currently underway. Below is my interview with the man who conceived the study, Jeremy Triplett.

Jeremy, you are a former President of the American Society of Crime Laboratory Directors (ASCLD), so you've played an important leadership role in forensic science.  Can you tell us a bit about the white box study recently announced by ASCLD?

First, I want to thank you, John, for inviting me to talk about the study. I’m very excited about it and I appreciate the opportunity to share what we’re doing with your readers.

The drug chemistry white box study is something that I’ve had bouncing around in my head for a couple of years now. Essentially, we want to get at the foundational question of error rates in drug chemistry with respect to different analytical techniques – both individual techniques on their own and various combinations of them.

So there’s a very foundational science question here and that is, “What are the error rates associated with performing individual analytical techniques and how do those error rates change when we perform techniques in particular combinations?” Are some combinations better than others? Do we get vast improvements with 2 or 3 different techniques, or are they marginal increases? What really is a sufficient amount and type of testing?
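Purely as an illustration of why that question matters (this is not part of the study's design), consider the simplest textbook model: if each technique's false-positive errors were independent of the others – a strong assumption that real analytical methods generally violate, and exactly what the study will test empirically – then requiring every technique to agree multiplies the individual rates. The per-technique rates below are hypothetical placeholders, not measured values:

```python
# Illustrative only: hypothetical per-technique false-positive rates.
# Assumes errors are independent across techniques, which real analytical
# methods generally are not -- measuring the true combined rates
# empirically is the point of the white box study.

def combined_false_positive(rates):
    """False-positive rate when an identification requires every
    technique in the sequence to (wrongly) agree, assuming independence."""
    result = 1.0
    for r in rates:
        result *= r
    return result

color_test = 0.05   # hypothetical rate for a simple color test
ftir = 0.01         # hypothetical rate for an FTIR screen
gc_ms = 0.001       # hypothetical rate for GC-MS

print(combined_false_positive([color_test]))
print(combined_false_positive([color_test, ftir]))
print(combined_false_positive([color_test, ftir, gc_ms]))
```

Under this naive model each added technique shrinks the combined rate multiplicatively, so improvements look dramatic; whether real techniques behave anything like this – or whether their errors are correlated enough that extra tests add only marginal gains – is the empirical question the study is designed to answer.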

That’s the foundational science aspect, but there’s a really nice applied aspect to the work, too. We want to go into the evaluation with no assumptions, but just for the sake of discussion, what if we find that there really aren’t significant error rate improvements beyond some number of techniques? And what if we find that some very simple tests give very reasonable error rates compared with very expensive and/or “advanced” instrumentation? I don’t know what we’re going to find, but whatever the case, there’s an operational component here aside from the principal foundational science question, where we can really put some empirical data behind purchasing and operational decisions in crime labs. We’re all trying to make the most of strained budgets. It would be great to have analytical data that informs laboratory administrators on what instrumentation is sufficient within an acceptable error bound.

How did you go about seeking volunteer laboratories to participate in the study?  How many laboratories are involved, and were there any special selection criteria used?

We have 87 laboratories signed up to participate, which I’m very excited about, though it doesn’t surprise me. The community has embraced continuous improvement for a long time. I’m really pleased that so many laboratories have seen value in the project and have been so gracious to commit their time and their resources. I can’t thank them all enough.

We essentially sent out a call for participants through any channel willing to distribute the information. We used the ASCLD Crime Lab Minute. AFQAM, CLIC, MAFS, and CACLD also assisted in getting the word out. We wanted laboratories of all sizes in all different types of jurisdictions. We ended up getting local municipal labs, state labs, private labs – everything you can imagine. We actually had several international labs that wanted to participate, but unfortunately the shipping of blinded controlled substances was too difficult to accomplish.

We didn’t want to put barriers in place to prevent participation. We wanted to get a sense of what was really happening in the field every day, so we did not apply any significant selection criteria. The only criteria were that labs have a valid DEA license to handle controlled substances, and we asked that participants be active, case-working analysts. It’s been an overwhelming response, so we are really pleased.

What other key partners are involved in the study and how is your work being funded?

The evaluation is funded by the National Institute of Justice through the RTI Forensic Technology Center of Excellence. I’m extremely grateful to NIJ for committing their resources to the project and I’m grateful to John Morgan and Jeri Ropero-Miller at RTI for listening to my pitch and believing in the project enough to commit the FTCoE to facilitate it and ensure the community was engaged and able to complete it.

I knew that for this study to matter we needed to make sure we got the project design right, so I also assembled a rock star advisory team that has been instrumental in the project. Megan Grabenauer at RTI is my co-PI. The advisory team also consists of Sandra Rodriguez-Cruz of the Drug Enforcement Administration, Jeff Salyards, retired Chief Scientist and Director of the Defense Forensic Science Center, Jeremy Morris of the Johnson County Sheriff’s Office, and Darryl Creel at RTI, who is our statistician. The team has been incredible to work with and I think I’d be hard pressed to find a better collection of talent.

Lastly, I want to take a quick moment and recognize the immense assistance of Jeff Comparin and Steve Toske at the DEA Special Testing Laboratory. DEA graciously offered to help us with synthesizing some of the materials for the study and we quite literally couldn’t have done this without them.

Looking back at the 2009 report published by the National Research Council, Strengthening Forensic Science in the United States: A Path Forward, it seems that your study could go a long way in addressing some of the issues raised in the report. Do you agree with this opinion?

I definitely agree. I think this study speaks directly to Recommendation 3, which called for studies on quantifiable measures of reliability and accuracy. I also think it will hopefully speak to some of the recommendations published by the President’s Council of Advisors on Science and Technology in 2016, which called for studies like this one.

I think it also responds to what our own forensic scientists have been asking for. The Seized Drugs subcommittee of the Organization of Scientific Area Committees and the National Institute of Justice Technical Working Group for Drug Chemistry have both published recommendations in the last several years calling for error rate studies in drug chemistry. Both of these groups are composed largely of forensic scientists who do the day-to-day work of identifying controlled substances in casework. So it’s really a call from outside as well as a call from within the forensic community that sparked my interest in exploring this idea.

If the study proceeds according to plan, what impact do you think it could have on forensic science laboratories in the United States?

If we get the project design right, my hope is that this study will be accepted inside and outside the forensic science community as reliable empirical data about the sufficiency of analytical testing for forensic drug chemistry cases. We are probably well beyond Daubert hearings in drug chemistry, but my hope is that publishing these results will give forensic drug chemists confidence in their work and their testimony. My ultimate desire would be that we know we’re getting it right when we go to work each day.

I also hope the study will help inform laboratory decision makers about the types of testing and capabilities they implement in the laboratory. Having solid analytical data to inform management decisions is so important. We are all working with limited resources and wanting to steward tax dollars well, all while ensuring we get the science right. Hopefully this study provides laboratory administrators with data on which to make purchasing decisions.

For forensic drug chemists who regularly testify as expert witnesses in court, do you see any potential changes to how expert opinions are reported, both in writing and in verbal court testimony?

To be honest, I’m not sure. I think the drug chemistry community has already made significant advances in the last ten years in how we report our findings and testify. Accreditation changes have advanced the way we report measurement uncertainties. With the proliferation of novel psychoactive substances and the regularly changing controlled status of new drugs, we are seeing laboratories be very specific in their reports regarding how a particular drug may or may not be controlled and whether that status has changed over time. We are also seeing laboratories regularly include, on the lab report itself, which analytical techniques were performed on the item in question.

In relation to this study, we are also seeing a change in how drug chemists express their analytical confidence, in both written reports and courtroom testimony. We have seen labs move away from language like “no uncertainty” or “zero error rate,” which is a positive change. It’s my hope that this study provides empirical data that can replace that language. The ultimate “win” for this project would be that analysts can report and testify with confidence about the error rates that have been empirically determined.

When the study is completed, how do you anticipate getting the results out into the community?  

We will disseminate the results widely. I anticipate that we will publish the results via the usual peer-reviewed journal mechanism. My hope is that there will be enough interest that we can present at multiple conferences like the ASCLD Symposium and as many others as will have us.

How satisfying is it to be part of a study like this?

It’s extremely satisfying to be working on this. As I said earlier, this is something that people I really respect inside and outside the forensic science community have been asking for. The team we’ve assembled is brilliant. The support from NIJ, RTI, and DEA has been incredible. And the excitement from the community, particularly those participating labs, has been really great. I’ve wanted to do this study for several years now, so seeing it all start to come together has been really gratifying. We still have a LOT of work to do, but I really think this is something the whole forensic drug chemistry community can be excited about and can look back on and be proud of once we’re finished.