A little over a month ago now, I got into a discussion about doing another post for the True Crime Times, this time about modifying some old school scientific reasoning tools like the Bradford Hill criteria to apply to true crime type stories and evidence assessments for better thinking. Amusingly, they appear to lock posts after a certain period of time so I now can’t go back and see what exactly sparked the discussion, but I liked the idea and wrote up a draft. While I enjoyed the heck out of actually writing the whole thing and it clarified a lot of stuff I had been thinking about, I ultimately wasn’t entirely convinced it worked all that well. First, it got incredibly long. The Bradford Hill criteria are pretty lengthy, and explaining the background took a while, then it took even longer to explain each criterion, then even longer to explain why I thought they applied. All told I think it ended up at like 3000 words, which on this blog I would have probably split up over at least two posts and also made some snarky commentary to lessen the blow of that many words. Writing more formally, even I felt like it was a slog by the end.
It occurred to me that this is why I’ve always liked having a blog like this, even as blogs have fallen out of fashion, because they really are a place to work out some long form ideas without having to feel like you’re trying to get subscribers or condensing your thinking into little snippets. It’s how I process stuff. I’ve actually taken a lot of what I’ve written here over the years and polished it up to use elsewhere, and it’s somewhat rare I’ve been able to publish something in a different outlet without working it out here first. So I realized I need to come back here and work out a few things before I try to write anything up.
One of the reasons I like writing here so much is that in a very real way, anyone who sticks around here for any length of time tends to be, on some level, one of my type of people. When I named this site Graph Paper Diaries, I was serious. I tend to think in numbers, and I like drawing lines around things. I count things when I get bored. My first question when I hear a statistic is “hold up, where did that come from”. And most importantly “is that true?”. In other words, I like quantification over feelings, I like definitions, I like numbers, I like sources, and I like to know if I have my facts straight. It was always my goal with this site to de-emphasize debates on particular hot button topics, and instead focus on the underlying data to see if we could at least get agreement there to help inform bigger discussions. It was (and still is) my belief that agreeing on baseline facts and standards of truthfulness and certainty was a way of fostering respectful debate around important topics. I’m never going to get everyone to agree with me on everything, but I can certainly try to help create a world where I enjoy the process of disagreeing with people more.
While I get some drive-by comments from people who don’t understand any of this, I think anyone who sticks around here for more than a post or two generally gets the value of at least some of this stuff. You may at times question how well I actually execute any of my goals, but I don’t think most of you question the aim. That makes you a lot of fun to hang out with.
What gets a little tougher is trying to jump into a different subculture and translate all of that stuff. I have fun here because I started with a group of people who were interested in the rather number-based place called “Graph Paper Diaries”, but how do I translate that to a group of people interested in the incredibly narrative-driven world of true crime?
That’s what got me thinking about Sir Austin Bradford Hill. He was a British epidemiologist who helped prove smoking caused lung cancer and subsequently came up with nine “viewpoints” from which he thought all evidence should be assessed before assuming it proved that one thing caused another thing. Epidemiology seems like a pretty uniquely good analogy for true crime since epidemiology is by definition the study of disease in messy population-based conditions. Unlike lab-based science where you get to control your experiments, epidemiologists are often just expected to work with what they have, and there are no do-overs if they get things wrong. I think you can see why the analogies to crime investigation jumped out at me. While it would be great if you had unlimited time and resources, and if the problem only hit perfect victims in a more ideal location at a better time of year, in both cases you have to go where the problem is and work with what you have.
Because in both cases, the stakes are actually pretty high. Never figuring out how to stop a disease outbreak has consequences, as does never solving a crime. It’s extremely easy to get annoyed that people don’t have better evidence, but we have to accept that in life some problems are just going to have messy evidence. If we don’t accept messy evidence, we’re going to settle for no evidence. And I don’t think any of us want that.
So how do we muddle through this? Well, first we obviously gather as much evidence as possible. But after that, what do we do with it? As I mentioned last week, all the data in the world can’t save us if we don’t have a good question, so what questions should we be asking as we look at the information we have? This is where Bradford Hill comes in. He asked people to take a look at the data they had from nine different viewpoints to evaluate evidence. I’ve gone over these before in a strict public health context, but I’ve adapted them for true crime stories.
- Strength: If this person were innocent, how weird would this evidence be? When researchers looked at heavy smokers, the likelihood of lung cancer wasn’t just a little bit higher, it was 20-30 times higher. That’s a compelling piece of evidence. Similarly in true crime, some pieces of evidence are more compelling than others. One piece of strong evidence trumps 10 small coincidences.
- Consistency: Does the same story show up when the evidence comes from different places? The smoking/lung cancer connection shows up in lots of different populations in different locations. Similarly, in crime investigations, digital data agreeing with witness testimony agreeing with physical evidence is a pretty strong story.
- Specificity: Does this evidence actually point to one person and one version of events? Yeah, I know “they” did it. “They” are responsible for everything. But let’s narrow that down just a bit.
- Temporality: Did things happen in this order, based on what people knew at the time (not what we know now)? When you learn all the evidence during a one hour podcast, it can be incredibly hard to remember the events actually unfolded over the course of several months and that people could only react to what they knew at the time. Keeping the actual timeline in mind is important.
- Evidence Gradient: As more evidence is added, does the story get clearer or more complicated? When hearing new evidence that contradicts something they already believe, a lot of people start to overcomplicate their theories without even realizing it. “Sure that evidence looks bad, but maybe it was planted.” Okay, but you just traded one problem for another. You explained away the contrary evidence at the price of now needing to explain how someone planted it. That’s not a clearer theory, that’s just shuffling your problems around.
- Plausibility: How much would have to go exactly right for this story to be true? Ocean’s 11 is a fun movie, but rarely in life are things that perfectly timed.
- Coherence: Does this explanation fit with the physical evidence, the timeline, and how people usually behave? Much as with plausibility, if you take a step back, does a full picture start to emerge or does it get murkier?
- Experiment: Is there any part of this that could be checked or tested instead of argued about? This isn’t the most common situation but can certainly clear some points up pretty quickly if it’s possible.
- Analogy: Am I convinced by the facts of this case, or because it reminds me of another one? I used to read advice columns a lot, and I would always be interested to see how much people would read into situations based on what were clearly issues from their personal lives. I know women like this. Men like that will always act like this. While analogies can be useful in suggesting questions to ask, they can also lead you to make assumptions about people that aren’t true.
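For the numbers people in the room, the Strength question above can be restated as a likelihood ratio: how much more probable is this evidence if the person is guilty than if they’re innocent? Here’s a toy sketch of why one strong piece of evidence can trump ten small coincidences. To be clear, Bradford Hill never phrased his criteria this way, and every number below is made up purely for illustration:

```python
# Toy illustration of the "Strength" question as a likelihood ratio.
# All numbers are invented for demonstration; this is not how Bradford Hill
# stated his criteria, just one way to make the intuition concrete.

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Update the prior odds of guilt by multiplying in each piece of
    evidence's likelihood ratio: P(evidence | guilty) / P(evidence | innocent)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Start with hypothetical prior odds of 0.1 (1-to-10 against guilt).

# One strong piece of evidence: 20x more likely if guilty than if innocent.
strong = posterior_odds(prior_odds=0.1, likelihood_ratios=[20.0])

# Ten weak coincidences, each only 1.2x more likely if guilty.
weak = posterior_odds(prior_odds=0.1, likelihood_ratios=[1.2] * 10)

print(strong)  # 2.0 -- now 2-to-1 in favor of guilt
print(weak)    # about 0.62 -- still odds-against, despite ten coincidences
```

The single 20x piece of evidence moves the needle far more than ten marginal ones stacked together, which is the whole point of asking "how weird would this be if they were innocent?" before counting coincidences.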
So there they are, nine questions to help people think through messy evidence when that’s the only option. While this was never supposed to be an explicit checklist that would prevent every error, it was supposed to help you look at things from enough angles that you reduced your chances of missing something or getting hung up on a pet theory as evidence mounted pointing in other directions. Because that’s a key thing with messy evidence: it’s not an easy thing to wade through, and it’s easy to get stuck on one or two pieces and start missing the big picture.
But I suspect you already know that’s a good idea. I think this way of thinking is solid, it’s worked on some of our most important public health problems after all. I’m still workshopping the delivery.
If you have thoughts on how to introduce a framework like this to a true-crime audience, I’d love to hear them. What would you lead with, what would you lose, or what would make you actually want to keep reading? I’ll keep working on the piece in the next few days, so I’m open to any ideas! I’ll probably publish whatever I come up with here at the very least, even if I don’t find another spot for it. This is just my favorite problem to noodle on at the moment.