Ok folks, we’re nearing the end of our Wikipedia list of issues, and I’m at the point where I don’t know what to call this one. We have a bunch of random issues I’ll run through in order. Ready? Let’s go!
Context Sensitivity
In scientific study, context sensitivity refers to the idea that the same study performed under two different sets of circumstances might yield different results in ways people didn’t expect. This seems somewhat obvious when you say it directly, but it often isn’t actually on people’s minds when they are reading a study. I have actually covered this a LOT on my blog over the years, as people will often make huge claims about how men or women (in general) view marriage, and you’ll find out the whole study was done on a group of 18-year-old psychology students who are almost certainly not married or getting married any time soon. Zooming out, there’s a big criticism that most psychological research is done on “WEIRD” people, meaning Western, Educated, Industrialized, Rich, and Democratic. What we consider settled science around human behavior may not be so settled if you include people from wildly different countries and contexts.
So how does this apply to true crime? Well, just like the first thing I do when I look up a paper is go to the methods section to understand the context in which the data was collected, I think the most important thing in a true crime story is to understand the big picture of where and how things happened. As I mentioned previously, true crime cases are often really unusual cases, so it’s important to flag that any abnormalities will be heightened substantially. A few questions: how much crime is in the area in general? Were there any unusual events that might have changed people’s behavior? True crime often goes over this stuff, but I’ve noticed some cases breeze through the contextualizing or fail to acknowledge that unusual circumstances might change people’s behavior.
The other odd context thing is that a lot of people seem to think that because a case became well known later, the initial investigators should have been thinking from the get-go about how things would look on Dateline. Unfortunately, most investigators/witnesses/defendants don’t have the luxury of knowing in the first 24 hours that people will be reviewing their actions for decades to come. If the case is OJ Simpson? Well yes, you should be prepared for that. If the case is JonBenét Ramsey? You should give them some grace for not predicting the firestorm. Context matters.
Bayesian Explanation
This is similar to some of the statistical concerns I mentioned last week, but basically, if you have a “surprising” result and a low-powered study, Bayes’ theorem suggests you will have a high failure-to-replicate rate. Bayesian statistics can be a powerful way to think through this, because they force you to consider how likely you thought something was before you ran your study, which can help you put your subsequent results in context.
So what’s the true crime equivalent? Well, I think it’s actually a good reminder to put all the evidence in context. Here’s an example: imagine a police department (or podcaster) believes a suspect is guilty mainly because they failed a polygraph. The polygraph has a low ability to detect real guilt (low power), many innocent people fail it (a high false-positive rate), and the prior likelihood that this particular person committed the crime is low. Even though the polygraph result says “guilty,” that doesn’t mean there’s a 95% chance they did it, no matter what accuracy the test claims. Just like a weak psychological study, a “positive” polygraph doesn’t reliably tell you whether the hypothesis is true or whether the result will replicate.
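To make that concrete, here’s a minimal sketch of the Bayes calculation. All the numbers are hypothetical, chosen just for illustration: a 5% prior that this particular suspect is guilty, a polygraph that catches 80% of guilty people, and a 15% false-positive rate.

```python
# All numbers are hypothetical, for illustration only.
prior_guilty = 0.05        # prior probability this suspect is guilty
p_fail_if_guilty = 0.80    # polygraph "power": share of guilty people who fail it
p_fail_if_innocent = 0.15  # false-positive rate: share of innocent people who fail it

# Bayes' theorem: P(guilty | failed) = P(failed | guilty) * P(guilty) / P(failed)
p_failed = (p_fail_if_guilty * prior_guilty
            + p_fail_if_innocent * (1 - prior_guilty))
posterior_guilty = p_fail_if_guilty * prior_guilty / p_failed

print(f"P(guilty | failed polygraph) = {posterior_guilty:.2f}")  # ~0.22
```

Even after a failed polygraph, the posterior probability of guilt comes out around 22%, still well below a coin flip. Swap in your own priors and error rates and you’ll see how much the answer depends on where you started.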
This can be reapplied to all sorts of evidence, and should be, particularly when you have one piece of evidence that flies in the face of the rest of them. We even have a legal standard for this: circumstantial evidence, which can only be let in under certain circumstances. However, in true crime reporting, a lot of circumstantial evidence is treated as extremely weighty, regardless of how discordant it is with everything else. You have to be honest about the prior probability, or all your subsequent calculations are going to be skewed.
The Problem With Null Hypothesis Testing
This is a somewhat interesting theory, based on the idea that null hypothesis testing may not be appropriate for every field. For example, if you are testing whether or not a new drug helps cure cancer, you want to know if it has an effect or not. Pretty simple. But in a field like social psychology, human behavior may be too nuanced for a true yes-or-no question. Running statistical tests that suggest there is a clear yes/no might end up producing unreliable results because the whole setup was inappropriate for the question being asked.
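A toy simulation shows how this plays out. This is just an illustrative sketch with made-up numbers: with a big enough sample, a trivially small difference comes back “significant” from a standard t-test, so the yes/no answer is technically right but practically misleading.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups with a real but tiny difference (0.05 standard deviations).
n = 20_000
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.05, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p-value: {p_value:.6f}")  # almost certainly < 0.05: a "significant" yes

# But the effect size shows the yes/no answer is nearly meaningless here.
cohens_d = (group_b.mean() - group_a.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
print(f"Cohen's d: {cohens_d:.3f}")  # ~0.05, a negligible difference
```

The test dutifully answers “yes, there’s an effect,” but the effect size is so small that the binary answer tells you almost nothing useful about the underlying question.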
In true crime, this reminds me of people using legal standards as though they are moral standards or everyday standards we might use. For example, a person accused of rape may not be convicted under a reasonable-doubt standard, but that doesn’t mean you’d be ok with them dating your daughter/sister/friend. In murder cases, even when the police get things wrong, they often had a good reason to start believing people were guilty. Drug or alcohol use can make people look suspicious, lying up front to the police can make you look suspicious, prior similar convictions can make you look suspicious, and so on. I’ve seen a strong tendency for people to decide that whoever they favor is blameless (null hypothesis = absolutely nothing wrong), but as we covered last week, a lot of people mixed up in legal trouble have something working against them.
Base Rate Fallacy
I’ve written about the base rate fallacy before, and it can be a tricky thing to overcome. In short, the base rate fallacy happens when something is extremely uncommon and you use an imperfect method to try to find it. For example, if you test a thousand random people in the US for HIV, we know that only 3-4 of them are likely to have it. If you are using a test that is 99% accurate but has a 1% false-positive rate, that actually means more people (about 10) will get a false positive result than a true positive result. When the frequency of something is low, false positives become a much bigger problem. In publishing, the theory is that genuinely new phenomena are getting rarer, so surprising findings are increasingly likely to be false positives.
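Here’s the arithmetic from that HIV example as a quick sketch, using roughly the same figures as above (about 3-4 true cases per thousand, 99% sensitivity, and a 1% false-positive rate):

```python
# Figures from the example above: ~3-4 true cases per 1,000 people.
n_people = 1000
prevalence = 0.0035        # ~3.5 per 1,000 actually have the condition
sensitivity = 0.99         # the test catches 99% of true cases
false_positive_rate = 0.01 # 1% of healthy people test positive anyway

true_positives = n_people * prevalence * sensitivity
false_positives = n_people * (1 - prevalence) * false_positive_rate

print(f"Expected true positives:  {true_positives:.1f}")   # ~3.5
print(f"Expected false positives: {false_positives:.1f}")  # ~10.0

p_true_given_positive = true_positives / (true_positives + false_positives)
print(f"P(actually positive | tested positive) = {p_true_given_positive:.2f}")  # ~0.26
```

Roughly ten false positives against three or four true ones: when the base rate is that low, most of the “positives” you find aren’t real, even with a 99%-accurate test.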
So how does this apply to true crime? Well, it’s a little hard to make a clear comparison, because so many crimes have unusual things happening by default. To take OJ Simpson as an example, it’s unusual for a celebrity of his stature to be accused of a crime. However, it’s also pretty unusual for a celebrity’s ex-wife to end up dead the way his did. Our base rate doesn’t totally work because we actually know something weird has happened. This is where we have to get back to judging people by evidence, not statistics.
However, in the broader scheme of true crime content, I think it’s good to note that the demand for new cases currently exceeds the supply. As we’ve continued to cover, people want attractive, articulate defendants with “interesting” cases, and we just don’t have that many of them. This creates a vacuum where people are strongly incentivized to make their cases “interesting” enough for true crime podcasters to pick up on. The supply problem is compounded by the fact that the murder rate in the US is down substantially from the 80s and 90s, so we have fewer current cases to draw from.
Alright, that’s all I have for this week. I’ll be looking to wrap up next week with a few lessons learned and thoughts. Thanks all!
To go to part 8, click here.