A Q&A with the visionary "nerd" who's trying to make expert testimony more reliable
A family tragedy drove Peter Stout's quest to improve forensics. Will the criminal justice system let him?
Peter Stout runs the best crime lab in the country — or at least the most scientific one. After scandals rocked the Houston Police Department crime lab throughout the 1990s and early 2000s, the city created a new lab that would be independently funded, would report to its own board (as opposed to, say, police or prosecutors), and would operate on a commitment to accuracy and scientific principles. Stout was brought on in 2015 to head up the new project, called the Houston Forensic Science Center (HFSC). In the years since, he has emerged as a pioneer of forensics reform.
In addition to a finely honed intellect, Stout is affable and self-deprecating. His jovial demeanor stands in stark contrast to a dark and traumatic personal history. Growing up, Stout was repeatedly sexually abused by a neighbor. Another childhood neighbor was murdered in a mob hit. But the trauma that’s probably most relevant to who he is today was the murder of his grandfather in 1977, a case in which the state may have used faulty forensic analysis to obtain a wrongful conviction.
Stout is wonky. He has a corny, disarming sense of humor, and he wears his nerdiness like a science fair ribbon. His most important innovation — or at least the reform for which he’s best known — is incorporating blind testing into the day-to-day routines of the analysts he oversees. While many crime labs and forensic disciplines have been resistant to rigorous proficiency testing, Stout not only embraces the idea, he’s revolutionizing it.
Most proficiency tests are given outside the lab. This means the analysts are aware they’re being tested. We know that people tend to change their behavior when they know they’re being evaluated. External tests also lack the pressure analysts often feel to plow through casework quickly, or the subtle, often subconscious bias that can prod them to produce results favorable to the party that ordered the analysis. In state labs, this is almost always police or prosecutors. All of this means that even the most well-designed proficiency tests probably aren’t all that accurate a measure of how crime lab analysts perform day to day.
Stout’s solution to this problem was to assemble a quality control team that creates fake “cases,” which he then incorporates into his analysts’ daily work. He also added a smart, low-stakes gamified incentive structure that pits his quality control team’s ability to mask the fake cases against his analysts’ ability to sniff them out. The “blinds,” as Stout calls them, make up about 5 percent of analysts’ casework. The goal is to keep them unsure about which cases are real and which cases are testing them.
Because Stout knows the “ground truth” for the fake cases, he can use them to test his analysts’ accuracy. The data he has collected from his blind test cases are really the only figures we have on how accurate analysts are in their everyday work.
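A side note for the technically inclined: the mechanics Stout describes (seeding an analyst's queue so that blinds make up about 5 percent of casework, then scoring results against the known ground truth) can be sketched in a few lines of Python. This is a toy illustration under my own assumptions; the names and structure here are hypothetical, not HFSC's actual software.

```python
import random
from dataclasses import dataclass

BLIND_RATE = 0.05  # blinds are roughly 5 percent of casework, per the figure above

@dataclass
class Case:
    case_id: str
    is_blind: bool = False           # known only to the quality control team
    ground_truth: str | None = None  # e.g. "cocaine" or "no match"; set for blinds only

def build_queue(real_cases: list[Case], blind_pool: list[Case]) -> list[Case]:
    """Mix fabricated 'blind' cases into an analyst's queue at about BLIND_RATE."""
    n_blinds = max(1, round(len(real_cases) * BLIND_RATE / (1 - BLIND_RATE)))
    queue = real_cases + random.sample(blind_pool, min(n_blinds, len(blind_pool)))
    random.shuffle(queue)  # position in the queue shouldn't give the blinds away
    return queue

def score_blinds(reported: dict[str, str], queue: list[Case]) -> float:
    """Fraction of blind cases the analyst got right, judged against ground truth."""
    blinds = [c for c in queue if c.is_blind]
    hits = sum(reported.get(c.case_id) == c.ground_truth for c in blinds)
    return hits / len(blinds) if blinds else float("nan")
```

The sampling is the easy part; as Stout explains later in the interview, the hard engineering is making the fake cases indistinguishable from real ones.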
Stout also runs one of the most transparent labs in the country. The HFSC regularly posts data about backlogs, turnaround times, and wait times — despite the fact that doing so can open Stout and the lab to criticism.
Stout and I spoke by phone late last month.
(Fun side note: Because of a flight cancellation, I did this interview in a rental car, from the parking lot of a Buc-ee’s in Sevierville, Tennessee. Journalism is a glamorous job!)
How did you get into forensic analysis?
It’s funny you ask that. It’s taken me about 40 years to figure out why I ended up in this field. I have a pretty significant and lengthy abuse history — sexual abuse when I was a child. So I’m sure that’s played a role.
But also, when I was seven, my grandfather was murdered and my grandmother was beaten into a coma.
Wow.
Yeah, it gets worse. It was here in Texas, in a suburb of Austin called Westlake Hills. But the thing is, after the homicide, I am reasonably confident that my family participated in a wrongful conviction. A young woman named Cherrie Ann Porter was convicted for my grandfather’s murder. The only real evidence was a couple of lousy prints in the car and my grandmother’s eyewitness testimony.
Even on a good day, I’m not sure my grandmother could tell very many people apart. But she had some pretty severe brain damage after the attack. I think there was also some hair and fiber analysis that linked some things together. Porter’s conviction was overturned in 1982 due to lack of evidence. And here in Texas, at that time, that was a big deal.
So I think that’s probably something that really drove me into this — not knowing whether this big event in my family was a miscarriage of justice. It has definitely motivated me. I know the implications of what crime does to people — that the trauma can span across entire lifetimes and generations. And I also know the harm an injustice can inflict on people.
That’s an incredible story.
Victims often become victimizers. So you know, for the trauma history that I have, I really shouldn’t be where I am today. I should be a drug addict living under a bridge somewhere, or I should have been a suicide long ago. I’ve gotten lucky. I’ve had a few lucky breaks. But it’s very clear to me now how it’s all tied together. Being in forensic science now — I see now why I do this.
So was Porter tried again — or did they ever try someone else?
It isn’t clear. The records were so bad back then. I actually hired an investigator to go dig into stuff, and so now out of everybody in the family, I probably have the best view of it. I’ve got some of the old news articles, and I think you can find the appeals court ruling where her murder conviction was reversed. But there are a lot of holes in the timeline. We just don’t know. She could well have been part of that murder. I just don’t know.
So did you immediately want to go into forensics from childhood on?
Not necessarily, but I think it’s what made me end up here. I’ve got a bachelor's in biology, a master’s in engineering, and a Ph.D. in toxicology. At some point, my wife said, would you please get a real job?
So I showed her: I joined the Navy. I was in the Biomedical Service Corps, and I ran one of the Navy’s drug screening laboratories. So I was basically a narc for the Navy. I ran the lab in Jacksonville, Florida. We ran about a million samples a year. It was very similar to workplace drug testing in the civilian world — and actually, civilian workplace drug testing is an outgrowth from the military program.
We did analysis for Navy and Marine personnel east of the Mississippi, the Atlantic Fleet, and the European Theater. We had about 70 civil service federal employees who did urine-based testing. We had to provide testimony to support those results, because testing positive in the military is actually a felony.
After active duty, I went to a commercial laboratory in Nashville, Tennessee. It was a very small company. We did sports doping control, federally regulated workplace drug testing, and a little bit of seized drug analysis. I then went to RTI International, a big not-for-profit research institute in North Carolina, where I was part of the Center for Forensic Sciences. One of the longstanding programs at the center was the National Laboratory Certification Program, which is run under the Department of Health and Human Services.
That’s the program that regulates federally regulated workplace drug testing labs. We also had a number of different research projects there into all kinds of different stuff in forensics. I had a pretty successful research career there. I’d been there about 10 years when I saw this new project in Houston and for some insane reason, I decided to jump in. It seemed really unique. It’s been an adventure ever since.
Could you talk about what was happening in Houston in the 1990s and early 2000s — the events that led to the creation of the HFSC?
It’s such a long history. There have been several histories written about what happened here in Houston. Brandon Garrett’s book has quite a bit about Houston. Sandy Thompson’s book is all about Houston.
But basically in the late 1990s to early 2000s, it got really bad around here. If you can conceive of something that might go wrong in a crime lab, it was going wrong with the one in Houston. Most of it was due to underfunding. But there were also all kinds of poor processes and procedures. There were allegations of fraud and other wrongdoing.
There was an enormous backlog with almost everything. The DNA backlog is probably the one that got the most attention. They had a backlog of about 10,000 rape kits that ran back to the 1980s. So none of those cases were getting solved. But at the worst point, I think there was also a backlog of something like 45,000 seized drug cases. Just enormous backlogs.
The lab also had a leaky roof that destroyed and contaminated evidence. Literally, the roof leaked onto the floor when it rained. In a crime lab! I mean, there were just all kinds of problems. At one point, Adam Liptak wrote a New York Times article about it, and the headline was, “Is Houston the worst crime lab in the country?”* And it may have been.
[Note: The original headline for the Liptak article was “Worst Crime Lab in the Country — or Is Houston Typical?”]
After all of that, there were several different audits, culminating with the Bromwich report, an investigation led by Michael Bromwich, who was brought in by the city of Houston.
As far as I know, that audit of the lab is still the single largest, most expansive audit of a crime lab in U.S. history. They crawled through everything. It’s a very big report.
From there, the Houston Police Department went to work to improve things. Between 2003 and about 2012, they managed to get parts of the laboratory accredited. They made a lot of progress, but for most of that decade, the lab was constantly in the news. It was just this constant struggle with failure. They had to shut the lab down a couple times, and there was this cycle where accreditations would be revoked and then reinstated.
Finally, in about 2011, Matt Slinkard, who’s the current executive chief of HPD, wrote a white paper with Tim Oettmeier, a former chief who is now retired, that recommended the city do something different with the lab. They pointed to the [National Academy of Sciences] report recommendation that crime labs should be separate from police departments. So they said let’s do that. Let’s have a crime lab that is completely separate from law enforcement — put it under a separate entity.
So it was really HPD’s idea to start this. The first board was formed in 2012. They spent about two years negotiating all of the foundational documents and determining which components of HPD would get rolled into the new lab. All of that took up the first couple of years. And then in 2014 — April 3rd, I think it was — the switch was flipped, and the HFSC took management responsibility for forensic operations.
The HFSC was set up as a local government corporation, overseen by a nine-member board with full management responsibility for forensic operations.
Three parts of HPD were rolled into HFSC. There was what everybody referred to as the lab, which was DNA, firearms, seized drugs, and toxicology. Then there was what was called the ID unit, which was latent prints and multimedia evidence. And then there was crime scene.
I came on in February of 2015 as the first Chief Operating Officer. I saw an ad for the position and was drawn to it. As I said, it’s taken me quite a few years to understand why I was so drawn to it. But there was something that just spoke to me about somebody actually trying to do something different with a crime lab, to try to start a lab that was only beholden to getting at the truth of what happened.
“The number one way our analysts could tell we were testing them was that the handwriting was neat and everything was spelled right. They’d tell me, ‘There’s no way a cop wrote that.’”
What else about the operation made you think, okay, this is a crime lab that I'm excited to be a part of?
I think fundamentally it was the fact that it was such a different structure, that it was a separate board of directors that wasn’t just an advisory group. Being on the advisory board in North Carolina when that lab blew up, I could see just how toothless an advisory board can be. In Houston, you’d have a board that had actual fiduciary management responsibility and authority. I joke about this a lot, but I work with the chief of police, and I work with the DA and I work with the mayor. And I’m quite certain I piss all of them off regularly.
But this position does not work for or report to any of them. I work for the board and only the board. My employment contract is with the board of directors. That’s so different, and so important. I’m not sure that at the time I applied I could see all of the possibilities.
It also really wasn’t clear then if the whole thing would survive — if it would be able to withstand the various pressures that are put on a crime lab. And there have been pressures. We get it from all sides. If you check Google, it’s been all over the news here — just last Wednesday the police union demanded my resignation because of our backlog. So it’s been an entertaining week.
But our job is to be independent. And I think the structure we have in place has done a good job resisting those pressures so we can actually make a little progress. Next year it will be 10 years since HFSC was started.
How is the lab funded?
We’re a separate line item in the city of Houston’s budget, not part of HPD’s budget. The vast majority of our funding comes from the city. We have what’s called an inter-local agreement. You can think of it like we’re a contractor with the city of Houston to provide forensic services.
I want to talk more about the pressures on the lab, but I want to ask you more about the situation in North Carolina, because I remember writing about that scandal at the time. That lab was nuts.
Oh yeah. The board that advised the lab was called the Forensic Science Advisory Board. It was formed in response to the Swecker report, which exposed all the crazy stuff that was going on in the lab. I was one of the first appointees to that board.
If I’m remembering correctly, didn’t prosecutors do the annual performance reviews of the crime lab analysts? That’s just an absolutely twisted incentive system.
Yep. That was part of it. It got so tangled up with ASCLD [the American Society of Crime Lab Directors], and ASCLD Lab, and ASCLD Consulting. Bud Keaton had been heavily involved in ASCLD Lab and ASCLD Consulting, and then he was the director of the lab. It didn’t look good at all. There was a lot of conflation of accreditation, private consulting, and the crime laboratory.
So a lot of intermingling of public and private.
Yeah. There was a blog called Bullet Path that was all over the news in North Carolina for quite a while because it published a detailed account of this kind of unfortunate tangling of ASCLD, ASCLD Lab, and ASCLD Consulting. And they were all in this little strip mall, but only one of them had a mailbox, and the other two didn’t actually exist. They were all run out of the same mailbox.
One anecdote from that scandal has always stuck with me — they had security camera footage of these two blood spatter analysts. They kept doing this same experiment over and over again, because they weren’t getting the result the prosecution wanted. When they finally got it, they high-fived!
They never told the defense how many times they had to run the experiment to get the blood spatter pattern that supported the state’s theory of the crime.
Yep. Good old Duane Deaver. So here’s a story for you: When I was on the Forensic Science Advisory Board, I was the toxicologist. And one of the first things we were asked to do for the crime lab was review its Standard Operating Procedures.
So I said, give me the SOPs and gimme some data so I can compare. And they said, “We can’t give you any data.”
What, how am I supposed to review your procedures if you won’t give me any data? They wouldn’t give me anything. I carped about that all the way up through Roy Cooper, who was the attorney general at the time. Nope. They wouldn’t give me any data. So the defense attorney community finally started giving me some of the data they had received through discovery in their cases. So I finally had some data to look at. It wasn’t perfect because it was tough to match up with SOPs. But there were some issues. They weren’t serious issues, but there were some things that needed to be addressed.
So we had a meeting. Sat down with everybody. I went through all of my points and all of the red flags in their SOPs. And that was great. Or at least that’s what I thought.
At the next advisory board meeting, I walked in to find a lovely plaque that thanked me for my service.
They kicked you off the board?
Yep. To put it accurately, I think I got fired.
I want to ask you about accreditation. I suppose it goes without saying that accreditation is better than no accreditation. But does it really mean that much? Many of the big crime lab scandals we’ve seen over the years occurred at accredited labs.
As you say, accreditation is better than no accreditation. But here are a couple things I point out when people ask me about this: When I was at RTI, we ran the National Laboratory Certification Program, which accredited labs that do workplace urine drug testing. It’s relatively simple testing. It’s big volume, but it’s not terribly complex. Those laboratories have two onsite inspections a year and two record audits a year. Those labs have to provide a list of everything that they report, and the program selects which records the laboratory has to pull before auditors come on site to audit. They get four proficiency tests a year, every year.
So for this relatively simple, relatively small version of workplace drug testing, you have all of that structure, all of that testing for an accreditation program.
Now let’s talk about accreditation for crime labs. The accreditation cycle for crime labs is once every four years. And we’re supposed to do one proficiency test per year. In terms of rigor, it’s not even in the same ballpark.
That’s remarkable. With workplace drug testing, a false positive might mean someone gets fired. In a crime lab, a false positive can send someone to prison.
Exactly. I would say ours is one of the biggest invoices that ANAB [the national organization that accredits crime labs] cuts to an individual laboratory, because we accredit everything in the organization, even the stuff we don’t have to under Texas law. But it’s still just a tiny fraction of the cost of NLCP [National Laboratory Certification Program] accreditation for what I would view as a laboratory of comparable scale and complexity in workplace drug testing. I mean, it’s like a tenth of the cost per year. But this is the reality for crime labs. The entities that fund labs are unwilling to provide the resources for more extensive accreditation programs. So of course the accreditation will be less extensive.
Let’s talk about proficiency tests. You’re well known for the blind proficiency tests you give your analysts — you mix the proficiency tests into their day-to-day casework, and you mask the tests so they don’t know when they’re being tested. Most labs only do open testing — where analysts sit for sessions in which they know they’re being tested. Are open tests sufficient if we really want to know how proficient forensic analysts are at their jobs?
The short answer is no. And I don’t think anybody would be surprised that I’d give that answer, because I’m the weirdo who makes his analysts do blind proficiency tests.
I’ve yet to find any lab that even comes close to doing blind testing at the scale at which we do it. But I’m crazy and weird.
But blind proficiency testing isn’t the only solution. You need both blind and open proficiency testing. Open tests are going to be much more homogeneous between laboratories, so they’re good when you’re comparing the performance of the analysts in one lab to those in another. Blind testing lets you assess the performance of the entire system.
And nearly all labs only do open testing, right?
Right. And the problem there is that it’s not going to be quite like actual casework, because all you’re measuring is how well analysts do under these artificial circumstances in comparison to analysts in other analogous laboratories. You can’t stop there, because that’s not actually testing all of the places in the system where things can go wrong. You can get me on a soapbox really quick about this. Maybe it’s my engineering background. But how in the world do you know what you need to fix if you don’t deliberately go try to break things — to see where things are breaking? You have to test to the point of failure.
As you said, you’re one of the few labs in the country that does blind proficiency testing. What do your analysts think about it? Does it affect morale?
I have never felt much pushback on it. But people sure seem to think it will hurt morale. It’s one of the most common questions I get when I tell other laboratories what we’re doing: How does the staff take this? Do staff look at these tests like, ooh, we’re trying to sneak up on them and get them?
But it’s just not something that’s been an issue here, at least as far as I can tell. When we started the tests, I made an offhand bet with the staff: If you flag a case as a blind and it really is one, I’ve got a Starbucks card for you. If you flag a case that you think is a blind and it’s not, you owe me a buck.
So we tried to make it a bit of a game. They took me at my word. I’ve got all kinds of stories about what has tipped my analysts off to blinds.
Can you share a couple?
Oh, sure. First and foremost, they’ve said the number one way they could tell it was a blind was that the handwriting was neat and everything was spelled right. They’d tell me, “There’s no way a cop wrote that.”
But it can be more nuanced than that. The packaging was too clean. The material was too neat. It’s rarely the actual technical nature of the material. It’s almost always to do with the packaging, the documentation.
Sometimes it can be really, really subtle. For example, because everything comes from HPD, analysts get familiar with how an officer’s badge number relates to their employee number. Both numbers go on the documentation, so my analysts can spot a blind when the badge number and employee number we mocked up don’t align chronologically.
Or there are things like, we’d say this evidence was seized by a narcotics officer, but our analysts know a narcotics officer wouldn’t be caught dead submitting such a small amount of seized drugs — an amount that small could only come from a patrol officer. I have a paper coming out soon on all of this in PNAS that Tom Albright invited me to write.
So you’re constantly trying to mask these blind tests in a way that fools your analysts, because it’s important that they not know when they’re being tested.
Yeah, it becomes kind of a game. I really nerd out on this stuff.
We’ve seen two different types of scandals in crime labs over the years. The first is just outright fraud — dry-labbing, lying on the witness stand. The Annie Dookhan-type scandals. But more commonly, it’s well-intentioned analysts unknowingly affected by cognitive bias. One seems worse than the other, but both can result in disastrous wrongful convictions. How comfortable should we be with the quality of forensic testimony in criminal trials today?
What I think is very widespread — and what drives a lot of unfortunate behavior in laboratories — is exactly what I’m dealing with right now. Backlogs. Right now, yes, we have some backlogs. We have some capacity issues. We’ve had lots of staff turnover. But on Wednesday, the police officers’ union held a press conference criticizing our backlogs and our turnaround times, and demanding my resignation over that. So even with all of the structures that we have around us and the independence that we’ve created here, we still aren’t immune from this gigantic, overwhelming pressure from the rest of the system to just give them the answer now. They want the answer immediately. They often don’t even care what the answer is. They just want it quickly.
That sort of pressure on laboratories is corrosive. It probably isn’t causing intentional or malicious misconduct, but it is this massive pressure to just get an answer out the door, any way you can. And that’s when mistakes are made.
Most labs in the country face that sort of pressure daily. If we didn’t have the structural independence we have, I can guarantee you I’d have a boss saying to me, “Why in the world are you doing all those blinds? Quit spending money on all of that and just get the answer out the door.”
But to answer your question — how much should we trust testimony from labs — we’re almost 15 years past the NAS report now. I think there’s been an awful lot of progress in laboratories toward better procedures, and more thoughtful attempts at improving testimony. But another harsh reality for crime labs is that DA’s offices all over the country have hemorrhaged staff. The experience and competency of most prosecutors have gone way down. It has gotten really risky for analysts, because the rest of the system trying these cases doesn’t know how to handle scientific forensic evidence. So most testimony we give is basically: Please state your name for the record. Is this the report? Do you recognize that evidence? And what was the answer? Thank you very much.
And then defense gets up and says, no questions. That’s the testimony.
So there’s all kinds of stuff that could have gone poorly at the lab that nobody has any chance of seeing, because at trial no one bothered to ask. Meanwhile, the lab is having to over-function for the rest of the system’s challenges, and without the resources to do it.
One of the big questions in forensics right now is how much external information an analyst should be given before starting an analysis. There are studies showing, for example, that fingerprint analysts are more likely to find a match when they’re told that someone has confessed to the crime — information that shouldn’t affect their analysis at all. But I’ve also had analysts and medical examiners who I think are conscientious and well-meaning tell me that it’s critical for them to discuss a case with police or prosecutors first. How common is that sort of consultation, and do you think it’s appropriate?
Well, first I would say yes, it’s very common. And yes, I think you need to do everything you can to limit that interaction between investigator and analyst, and between a DA and an analyst. But there is some information an analyst needs to know, so we’re trying to figure out how to get that information to them in a way that won’t bias their conclusions. And it turns out that this is wickedly difficult.
The blind testing is part of what’s taught me this. When we put a test case together, we have to go to extraordinary lengths to emulate all kinds of information. And there’s no easy way, really, to separate out what information gets tied to an individual analyst. For example, there are off-the-shelf laboratory information management systems that any laboratory can use, and there are only like four manufacturers of these things in the world. None of those systems allow you to customize what information flows through to an individual analyst. It just doesn’t exist in the systems.
We’re also engaged in a constant fight to stop investigators and prosecutors from directly contacting our analysts. We ask them not to do it, and they’re determined to do it anyway. I'll give you an example that really happened: We had an analyst on vacation. The prosecutor found the analyst’s spouse’s personal cell number to call them on vacation and harangue them about a case.
And there’s really nothing you can do about something like that other than complain. We try to keep their cell numbers insulated. We try to route everything through our case management unit.
But sometimes it’s hard to avoid. For example, you need to be able to answer some questions to determine if a DNA sample is eligible for CODIS (the national database of DNA profiles). But there’s a lot of information in those questions that can be potentially contaminating, and bias an analyst’s conclusions. And to get those answers, the analyst would traditionally need to talk to the police investigator, which can also bias them.
But we have to have that information in order to determine if the profile is eligible to be put into CODIS. So we created a request portal with dropdown menus that gets that information up front and puts it in a place where my CODIS unit can see it. That way, the information doesn’t flow through to the analyst, and the analyst doesn’t have to call up the investigator and ask those questions, which could expose them to biasing information. But it’s taken us years to get to this point. And not enough crime labs are thinking about these things.
And after you go through all of that and get that system in place to stop those interactions so your analysts aren’t biased, then an ADA gets your analyst’s cell phone number and just straight calls them up. Happens all the time.
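Another side note for the technically inclined: what Stout describes is essentially an information firewall. Structured intake fields (dropdowns, not free text) carry the eligibility details to the CODIS unit, while the analyst sees only a scrubbed view. Here is a minimal sketch of that separation; the classes and field names are hypothetical, invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class SampleOrigin(Enum):
    # Dropdown-style choices rather than free text, so an investigator
    # can't volunteer extra, potentially biasing narrative.
    CRIME_SCENE = "crime scene"
    VICTIM_REFERENCE = "victim reference"
    SUSPECT_REFERENCE = "suspect reference"

@dataclass(frozen=True)
class CodisRequest:
    request_id: str
    origin: SampleOrigin    # needed to judge CODIS eligibility
    offense_category: str   # potentially biasing; routed to the CODIS unit only
    case_narrative: str     # potentially biasing; routed to the CODIS unit only

def codis_unit_view(req: CodisRequest) -> CodisRequest:
    """The CODIS unit sees the full record, so it can determine database eligibility."""
    return req

def analyst_view(req: CodisRequest) -> dict:
    """The analyst gets only what's needed to work the sample, and nothing
    that hints at what the 'right' answer might be."""
    return {"request_id": req.request_id}
```

The design choice worth noticing is that the filtering happens in the intake system itself, so no human has to remember to withhold anything.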
Are there other crime labs trying to implement what you’re doing in Houston?
There are actually quite a few laboratories trying to have some kind of case management system. The Kentucky State Police. Miami-Dade has done some good stuff. DFS in Virginia.
Here’s the thing: There’s all kinds of stuff that goes wrong in labs. There’s all kinds of wrongheaded things that labs do. But in my experience, for the most part, labs really are trying to do this stuff correctly. They want to do it right. It’s the way they interface with the rest of the system that causes problems. Prosecutors’ offices can apply just merciless, unrelenting pressure. The economic demands can be crushing. And so even though these labs want to do the stuff we’re doing, by and large they just can’t, because they don’t have the budget or the staff. When I go out and talk to other labs about what we’re doing in Houston, the reaction is overwhelmingly more “I wish we could do that” than “What the hell do you think you’re doing?”
That seems encouraging.
It is. But the discouraging part is that they just can’t implement any of these ideas because of the pressure from prosecutors. And that pressure can be just mind-boggling.
But just to be clear, you’re saying that pressure is more often about just getting results quickly, and not necessarily pressure for results that aid prosecutions?
It’s “Gimme the result.” Yeah. When I first started in Texas, I’d always hear this very Texas way of putting it from police. It was usually someone like a sergeant, and he’d say, “Son, I don’t care how the sausage is made, just get me my dinner on time.”
I used to try to point out the many problems with how that metaphor frames the situation. Now I just say to them: Look, when the sausage eventually poisons you, we’re both going to care a lot about how it got made.
I can see the problems with quality control when you’re being pushed to get results more quickly than your analysts are comfortable. But to push back a bit on this, we’ve both seen cases where analysts have been pressured to get a result favorable to prosecutors — that’s certainly what was happening in North Carolina. In Houston, you currently have a pretty progressive police chief and a relatively progressive DA, at least in comparison to previous administrations. Do you think outcome-driven pressure on analysts might be more common in other parts of the country?
Sure, that may be. What I try to point out to people is that I run a laboratory that compared to just about every other lab is pretty well-funded. That does not mean we are adequately funded. But relative to a lot of labs, we’re pretty well-funded. And you’re right. We really have a pretty progressive police department here. A pretty progressive prosecutor in the grand scheme of things. There’s a pretty functional public defender’s office here. There’s a strong defense bar here. I’ve got a board, I’ve got independence. And it is still this stupid hard for us. So I can only imagine how hard it is for most labs in the country.
So you run these blind tests on your analysts. What happens then? For example, when I interviewed you for an article a couple years ago, you told me that in the blind tests your forensic firearms analysts were able to correctly state when two bullets had been fired by the same gun 76 percent of the time. And when asked to determine if two bullets had been fired by different guns, they were right just 34 percent of the time. So how do those results factor into how they testify in court?
So yeah, this is going to be another really harsh, cold reality about the limits of what we can do. What we would like to do and what is realistic are two different things. I think my folks do a good job testifying about these things. But it can be really difficult to find language that expresses what you’ve found in a way that will accurately represent those findings to a jury.
So are we doing enough to ensure we’re using language that even the lowest common denominator isn’t going to hear as an absolute? [Which is to say, that two bullets are a “match.”] I don’t know. We try to be as overt and smack people in the head as we can about it — that this is not an absolute answer.
When an analyst does think it’s a true match, the likelihood of a false association is relatively low, and the likelihood of a false negative is probably a little bit higher, but still fairly low. It’s the false inconclusives that I worry about.
Yeah, I want to ask you about that. The research shows that it’s rare — but not unheard of — for firearms analysts to falsely implicate someone. But they’re quite a bit more likely to say “inconclusive” when the bullets are not a match. So they seem hesitant to exonerate.
That’s right. The data from our lab certainly points at this. That’s worrying.
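One last side note for the technically inclined: the taxonomy this exchange relies on can be pinned down precisely. Every blind firearms exam has a known ground truth (same gun or different guns) and one of three reported outcomes, and the worrying cell is an inconclusive on a true non-match. A minimal sketch, with labels of my own choosing rather than any lab's official terms:

```python
def classify(ground_truth: str, reported: str) -> str:
    """Label one blind firearms result.
    ground_truth is 'same' or 'different' (guns); reported is
    'identification', 'elimination', or 'inconclusive'."""
    if reported == "inconclusive":
        # An inconclusive on a true non-match is the "false inconclusive"
        # Stout worries about: a masked exclusion, not a neutral answer.
        return "false inconclusive" if ground_truth == "different" else "missed identification"
    correct = (reported == "identification") == (ground_truth == "same")
    return ("correct " if correct else "false ") + reported

# e.g. classify("different", "inconclusive") -> "false inconclusive"
# e.g. classify("same", "identification")    -> "correct identification"
```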
When I was researching a long piece about firearms analysis, I was really surprised to learn that many labs, including prestigious labs like the FBI, refuse to exonerate as a matter of policy. They’re happy to testify that one gun and only one gun could have fired a particular bullet, but they won’t say that a particular bullet could not have been fired from a particular gun. The best they’ll do is say the results are “inconclusive.”
That seems crazy to me. How can you claim to be objective, scientific analysts when you’re willing to use this analysis to implicate suspects, but not to exonerate them?
The logic certainly seems off, doesn’t it? At our lab, we will exclude. But I should add that I’m a toxicologist, not a firearms examiner, and there’s always a risk I’m going to get out over my skis on this. So I’d have to talk to our firearms folks to discuss the nuance of that if you want a more complete answer.
I will say I’m hopeful that these 3-D scopes become the way things are practiced, and that we can narrow up that “inconclusive” gap.
There’s probably always going to be some version of an “inconclusive” determination out there, though, because we get crap evidence all the time. Some of this stuff just isn’t usable, and so then what do you do? They still want you to testify. So how do we communicate to the rest of the world that the evidence is useless so that they understand what it means — and how do we make sure prosecutors and jurors interpret those reports and results correctly?
I’ve always pointed to the Lydell Grant case. Are you familiar with that one?
I don’t think so.
I’ve backed off of using it because unfortunately, after he was exonerated, about nine months ago, he got in a car accident, got road rage, and shot and killed somebody.
But he was innocent of that first crime. The victim was stabbed outside of a club called the Blur, which seems appropriate because he was convicted mostly on eyewitness testimony. But there were some fingernail scrapings from the victim that were sent to us for DNA testing.
This was just after the PopStats problems had come to light. Everyone was scratching their heads about what to do with the revelations that the probability calculations used to interpret DNA data, particularly mixtures, had been wrong.
This case was a mixture, and we could not get a complete profile with the evidence we had. At that time, we required the same level of data to include or exclude, and we just weren’t comfortable including or excluding Grant based on the evidence. So we reported it as “inconclusive.”
Now, should we have been able to determine at that point that it was an exclusion? Maybe. And there’s been a lot of internal discussion about that. But we felt we were on more solid ground just saying we didn’t know.
So then at Grant’s trial, if you read through the transcript, you’ll see that the prosecution asks the DNA expert, “You reported the result as inconclusive, which means you can’t exclude Grant as the source of the DNA, is that correct?”
When an expert says “I can’t exclude the defendant,” the jury hears something inculpatory. It basically turns an inconclusive result into an inculpatory piece of evidence. So you attach that to the eyewitnesses who all say they saw Grant kill this guy, and it’s an easy conviction.
The defense tried to clean it up, and got the expert to say that yes, “inconclusive” means you can’t include the suspect either. But at that point, the cat’s out of the bag, and he gets convicted.
So then, years later, using probabilistic genotyping, they were able to isolate a single profile. There are lots of twists and turns, but that profile is eventually entered into CODIS. A single hit comes back. It’s a match to another guy who lived here in Houston and had just been let out of prison for a remarkably similar crime. HPD chased him down and he confessed to the crime. Not only that, but the probabilistic analysis excluded Lydell Grant from the profile.
So the way a true inclusion or a true exclusion can be masked in an “inconclusive” determination can be really problematic.
“I’m probably not going to find people who will let me beat them with various objects to see what kind of pattern their blood leaves on the wall.”
I know Brandon Garrett has written about this. There’s a long and sordid history — including in Houston — of crime lab analysts just saying “inconclusive” when their results should have exonerated a suspect. But they knew that wasn’t what police and prosecutors wanted to hear. So instead of the forensic analysis clearing someone as it should have, it just becomes a neutral piece of evidence, and the state proceeds with its case on, say, eyewitness or jailhouse snitch testimony instead. And outside of DNA testing, many jurisdictions have never really reckoned with that.
Yes. And unquestionably, that happens. But it’s not always that simple. A lot of times, we analysts are in this circumstance: I’ve got a piece of evidence, and I can’t even tell you what this thing is. But I have to tell you something, because everyone is expecting me to tell you something.
What do you make of the recent rulings in Chicago and Maryland, in which courts have expressed doubt about the reliability of forensic firearms analysis?
Again, I’m a toxicologist, so there’s a lot of nuance here that’s beyond my skill set. But I’ve had a lot of conversations with the folks who have been critical of firearms analysis and, how do I say this — their criticisms aren’t wrong, but they also aren’t entirely correct.
And again, I think it goes back to this really sticky circumstance where the evidence that we get at the crime lab is so often so imperfect, sometimes owing to the fundamental nature of what the evidence is — you’ve got a mangled bullet that’s bounced off several objects. But we’re still backed into a position of having to do something with it.
So how do you do that? This is a big concern in firearms. There’s definitely stuff out there that, looking at it with my science nerd eyes, makes me go, hmm, that’s not real comfortable, scientifically.
But I also get that firearms analysis is essential evidence in so many cases — serious cases. And without it, I’m not sure things would go any better. Because those cases are going to get prosecuted, and they’re going to get prosecuted on even less perfect evidence. And does that really do anybody any good — to bar firearms analysts from testifying while we work to figure out how to do this stuff better? I don’t know.
Just to be sure I’m understanding you, you’re not defending analysts who overstate their conclusions. You’re saying it’s better to let a cautious, careful analyst properly describe why, say, it’s impossible to say if a particular bullet came from a particular gun, than to leave prosecutors and defense attorneys to spin the evidence for the jury with no input from an expert. Correct?
Right.
The academic research on the discrepancy between what an analyst tries to convey to a jury and what the jury actually hears is fascinating. There’s some really weird psychological stuff going on in the courtroom.
Yeah. When an analyst says the evidence is inconclusive, the prosecution wants to get them to say, “I can’t exclude the defendant,” and the defense wants them to say, “I can’t include the defendant.” Both are true, but only in the sense that the evidence doesn’t tell us anything. But those statements push juries in opposite directions, and juries tend to hear them in very different ways.
You talked about the importance of independence — of not putting crime labs under the auspices of law enforcement. Over the years, I’ve asked analysts — especially medical examiners — if they’d ever be comfortable unilaterally reaching out to defense attorneys if they thought the state was perpetrating an injustice. Nearly all of them have told me they probably wouldn’t. They’d certainly tell the truth and testify if the defense reached out to them. But if the defense didn’t reach out, or if defense attorneys were never told that the analyst’s conclusions were exculpatory, they wouldn’t feel comfortable contacting the defense on their own.
Why do you think that is? And would you be okay with your analysts reaching out to the defense in that sort of scenario?
That really wouldn’t be an issue for us here in Texas because of the Michael Morton Act. That law really changes the dynamic, more than I’ve seen elsewhere in the country. Because of that law, the state now has an obligation to disclose anything that could be mitigating, impeaching, or potentially exculpatory. It’s a very broad standard, and it basically means that the state has a positive obligation to disclose everything.
So first, a defense attorney wouldn’t have to ask me for any exculpatory results. I’m obligated to disclose them from the start. And if I don’t, all you have to do is ask me. You wouldn’t need a court order. So because of that law, relative to other places in the country, there’s very little resistance from prosecutors here about sending stuff to defense. Our lab actually puts everything on an external website. And now there’s another new law that will create a statewide discovery portal. Once it’s built, all laboratory documentation will be available through this portal, and the portal will be accessible to all the attorneys.
That seems way too sensible.
Right? It’s been unique to Texas, though I think things are changing now. I think maybe in New York they’re trying to figure out a new universal disclosure policy, too. But the movement now is definitely toward full disclosure for crime labs — toward everything we do being out there. The lab’s information is just the lab’s information — here you go, have at it. There’s no reason it needs to be filtered through the prosecution first. That’s where things are headed, I think. And that’s a good thing.
Over the last 15 years, we’ve seen numerous scientific bodies critique a number of forensic disciplines, including some used pretty regularly in criminal trials like blood spatter and partial fingerprint analysis. There seems to be this broad spectrum of credibility — you have single-source DNA analysis, which just about everyone agrees is sound science, and you have fields like bitemark analysis, which most people agree is quackery. But there’s a huge range in between. As a crime lab director, how do you decide which disciplines are scientific and reliable enough to be included in the services you offer, and which aren’t?
We’ve explored these questions a lot here. We don’t do trace evidence, pretty much in any form. We don’t do questioned documents. We do document the blood patterns that we find at crime scenes, but we don’t do interpretation of blood pattern evidence. We don’t do bite marks.
But we do do firearms and latent prints. So how do I decide what we will or won’t do? My answer is that if I can conceive of and implement a control, then it’s a discipline we might do. I can do controls, both positive and negative, for latent prints and firearms.
I don’t quite see how you make a control for blood pattern evidence. I don’t quite see how you make a control for questioned documents. I can see how you might do it for trace evidence, but to do a control for trace evidence the way I would want to do it would be very, very expensive. So we don’t do it.
Just for readers who may not be familiar with the terminology, when you say a “control,” what you mean is that you can give your analysts blind tests — tests where you know the ground truth — in order to gauge their accuracy. Is that correct?
Exactly. So take blood pattern analysis. How do you make a control for that? How would I send my analysts test cases where I know exactly how the blood pattern was created? Well, I would need to find a six-foot volunteer who was willing to let me smack him in the head with a baseball bat.
I’d also need to be able to control things like the humidity in the air and the roughness of the wall. Those are subtler things, but they can be critically important to an analyst’s conclusions.
But mostly, I’m just probably not going to find people who will let me beat them with various objects to see what kind of pattern their blood leaves on the wall.
If you could correct one misconception about crime labs, what would it be?
Crime labs sit in a very precarious place. Largely we have very little ability to defend ourselves and our needs within the larger system. What keeps labs working are the nerds that quietly take a beating day after day from all the other components of the criminal justice system, but still keep coming back, desperately trying to do the right thing. It is far from perfect. But amazingly, they keep making progress, in spite of underfunding, endless criticism, and constant, merciless pressure.
"But mostly, I’m just probably not going to find people who will let me beat them with various objects to see what kind of pattern their blood leaves on the wall."
I mean, has he tried Craigslist?