Proof: Using Facts to Deceive (Part 7)

Note: This is part 7 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 6 here.

Okay, now we come to the part of the talk that is unbelievably hard to get through quickly. This is really a whole class, and I will probably end up putting some appendices on this series just to make myself feel better. If the only thing I ever do in life is teach as many people as possible the base rate fallacy, I'll be content. This part is tough because I at least attempt to go through a few statistical tricks that actually require some explaining. This could be my whole talk, but I've decided against that in favor of some of the softer stuff. Anyway, this part is called:

Crazy Stats Tricks: False Positives, Failure to Replicate, Correlations, Etc

Okay, so what’s the problem here?

Shenanigans, chicanery, and folks otherwise not understanding statistics and numbers. I've made reference to some of these so far, but here's a (non-comprehensive) list:

  1. Changing the metric (i.e., using growth rates vs. absolute rates, saying something "doubled" while hiding the initial value, etc.)
  2. Correlation and causation confusion
  3. Failure to Replicate
  4. False Positives/False Negatives

They each have their own issues. Number 1 deceives by confusing people, Number 2 makes people jump to conclusions, Number 3 presents splashy new conclusions that no one can make happen again, and Number 4 involves too much math for most people but yields some surprising results.

Okay, so what kind of things should we be looking out for?

Well, each one is a little different. I touched on 1 and 2 a bit previously with graphs and anecdotes. For failure to replicate, it's important to remember that you really need multiple papers to confirm a finding; one study saying something doesn't necessarily mean subsequent studies will say the same thing. The quick overview is that many published findings don't hold up when other researchers try to repeat the work. It's important to realize that any shiny new study (especially in psychology or social science) could turn out not to be reproducible, making the initial conclusions invalid. This warning is given as a boilerplate "more research is needed" at the end of articles, but it's meant literally.

False positives/negatives are a different beast that I wish more people understood.  While this applies to a lot of medical research, it’s perhaps clearest to explain in law enforcement.  An example:

In 2012, a couple (both former CIA employees) were at home getting their kids ready for school when they were raided by a SWAT team. They were accused of being large-scale marijuana growers, and their home was searched. Nothing was found. So why did they come under investigation? Well, it turns out they had been seen buying gardening equipment frequently used by marijuana growers, and the police had then tested their trash for drug residue. Two tests came back positive, and the police raided the house.

Now if I had heard this reported in a news story, I would have thought that was all very reasonable. However, the couple eventually discovered that the drug test used on their trash had a 70% false positive rate. Even if their trash had been perfectly fine, there was still nearly a 50% chance (0.7 × 0.7 = 0.49) they'd get two positive tests in a row (and that assumes nothing in their trash was triggering the test). So on a street with ZERO drug users, you could find "evidence" to raid about half the houses. The worst part of this is that the courts ruled that the police themselves were not liable for not knowing the test was that inaccurate, so their assumptions and treatment of the couple were okay. Whether that's okay is a matter for legal experts, but we should all feel a little uneasy that we're more focused on how often our tests get things right than on how often they're wrong.
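If you want to see how that plays out, here's a quick simulation sketch in Python. To be clear, the only number taken from the story is the 70% false positive rate; the street of 1,000 innocent houses is something I made up for illustration:

```python
import random

random.seed(42)  # so the result is repeatable

FALSE_POSITIVE_RATE = 0.70  # from the story; everything else is invented
HOUSES = 1000               # a hypothetical street of entirely innocent houses
TESTS_PER_HOUSE = 2

raided = 0
for _ in range(HOUSES):
    # Every house is innocent, so any positive result is a false positive.
    positives = sum(random.random() < FALSE_POSITIVE_RATE
                    for _ in range(TESTS_PER_HOUSE))
    if positives == TESTS_PER_HOUSE:  # two positives in a row = "evidence"
        raided += 1

print(f"Houses 'raided': {raided} out of {HOUSES}")
print(f"Expected: {FALSE_POSITIVE_RATE ** TESTS_PER_HOUSE:.0%}")  # 49%
```

Run it and you'll get something close to 490 houses, which is just the 0.7 × 0.7 arithmetic above wearing a costume.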

Why do we fall for this stuff?

Well, some of this is just a misunderstanding or lack of familiarity with how things work, but the false positive/false negative issue is a very specific type of confirmation bias. Essentially, we often don't realize that there is more than one way to be wrong, and in avoiding one inaccuracy, we increase our chances of a different one. The police departments using the inaccurate tests likely wanted something that would always detect drugs when they were present. They focused on making sure they'd never get a false negative (i.e., a test that said there were no drugs when there were). This is great, until you realize that they traded it for lots of innocent people potentially being searched. In fact, since far more people don't use drugs than do, the chance that someone with a positive test doesn't have drugs is actually higher than the chance that they do. That's the base rate fallacy I was talking about earlier.
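Here's the same idea as a Bayes' theorem sketch. Fair warning: the 5% base rate and the 95% detection rate below are numbers I picked just to make the example concrete; only the 70% false positive rate comes from the story.

```python
base_rate = 0.05            # assumed: 5% of people tested actually have drugs
sensitivity = 0.95          # assumed: the test catches 95% of real cases
false_positive_rate = 0.70  # from the story

# Total chance of a positive test, counting both groups
p_positive = (sensitivity * base_rate
              + false_positive_rate * (1 - base_rate))

# Bayes' theorem: chance you actually have drugs GIVEN a positive test
p_drugs_given_positive = sensitivity * base_rate / p_positive

print(f"P(positive test)       = {p_positive:.2f}")                   # 0.71
print(f"P(drugs | positive)    = {p_drugs_given_positive:.2f}")       # 0.07
print(f"P(no drugs | positive) = {1 - p_drugs_given_positive:.2f}")   # 0.93
```

With those numbers, a positive test means a 93% chance you're innocent. That's the base rate fallacy in one print statement.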

To further prove this point, there's an interesting experiment called the Wason selection task that shows that when it comes to numbers in particular, we're especially vulnerable to checking for error in only one direction. You're shown four cards and a rule (in the classic version, "if a card has a vowel on one side, it has an even number on the other") and asked which cards you must flip to test it. In fact, 90% of people fail this task because they only look at one way of being wrong: they pick the cards that could confirm the rule and skip the ones that could break it.

Are you confused by this? That’s pretty normal. So normal in fact that the thing we use to keep it all straight is literally called a confusion matrix and it looks like this:
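(Layouts vary a little from source to source; this is the standard 2×2 version.)

```
                       Actually positive     Actually negative
Test says positive     true positive         false positive
Test says negative     false negative        true negative
```

The two "false" boxes are the two different ways to be wrong, and as we just saw, guarding against one tends to feed the other.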

If you want to do any learning about stats, learn about this guy, because it comes up all the time. Very few people can do this math well, and that includes the majority of doctors. Yup, the same people most likely to tell you “your test came back positive” frequently can’t accurately calculate how worried you should really be.

So what can we do about it?

Well, learn a little math! Like I said, I'm thinking I need a follow-up post just on this topic so I have a reference for it. However, if you're really not mathy, just remember this: there's more than one way to be wrong. Any time you reduce your chances of being wrong in one direction, you probably increase them in another. In criminal justice, if we make sure we never miss a guilty person, we might also increase the number of innocent people we falsely accuse. The reverse is also true. Tests, screenings, and judgment calls aren't perfect, and we shouldn't fool ourselves into thinking they are.
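If you want to watch that trade-off happen, here's one last toy sketch. Every number in it is invented; the point is just the pattern: lower the bar for accusing people and you miss fewer guilty people, but you accuse more innocent ones.

```python
import random

random.seed(0)  # repeatable

# Invented "suspicion scores": guilty people tend to score higher,
# but the two groups overlap, so no cutoff is perfect.
innocent = [random.gauss(0.3, 0.15) for _ in range(900)]
guilty = [random.gauss(0.7, 0.15) for _ in range(100)]

for cutoff in (0.6, 0.5, 0.4):
    missed = sum(score < cutoff for score in guilty)      # false negatives
    accused = sum(score >= cutoff for score in innocent)  # false positives
    print(f"cutoff {cutoff}: missed {missed}/100 guilty, "
          f"falsely accused {accused}/900 innocent")
```

As the cutoff drops, the first number shrinks while the second one grows. There's no setting where both hit zero.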

Alright, on that happy note, I’ll bid you adieu for now. See ya next week!

Read Part 8 here.
