Number Blindness

“When the facts are on your side, pound the facts. When the law is on your side, pound the law. When neither is on your side, pound the table.” – old legal adage of unclear origin

Recently I’ve been finding it rather hard to go on Facebook. It seems like every time I log in, someone I know has chosen that moment to start a political debate that is going poorly. It’s not that I mind politics or have a problem with strong political opinions, but what bugs me is how often suspect numbers get thrown out to support various points of view. Knowing that I’m a “numbers person”, I have occasionally had people reach out asking me to either support or refute whatever number is being used, or to use one of my posts to support/refute what is being said. While some of these are perfectly reasonable requests for explanations, I’ve gotten a few recently that were rather targeted “come up with a reason why my opponent is wrong” type things, with a heavy tone of “if my opponent endorses these numbers, they simply cannot be correct”. This of course put me in a very meta mood, and got me thinking about how we argue about numbers. As a result, I decided to coin a new term for a logical fallacy I was seeing: Number Blindness.

Number Blindness: The phenomenon of becoming so consumed by an issue that you cease to see numbers as independent entities and view them only as props whose rightness or wrongness is determined solely by how well they fit your argument

Now I want to make one thing very clear up front: the phenomenon I’m talking about is not simply criticizing or doubting numbers or statistics. A tremendous amount of my blogging time is spent writing about why you actually should doubt many of the numbers that are flashed before your eyes. Criticism of numbers is a thing I fully support, no matter whose “side” you’re on.

I am also not referring to people who say that numbers are “irrelevant” to the particular discussion or that I missed the point. I actually like it when people say that, because it clears the way to have a purely moral/intellectual/philosophical discussion. If you don’t really need numbers for a particular discussion, go ahead and leave them out of it.

The phenomenon I’m talking about is when people want to involve numbers in order to bolster their argument, but take any discussion of those numbers as offensive to their main point. It’s a terrible bait and switch and it degrades the integrity of facts. If the numbers you’re talking about were important enough to be included in your argument, then they are important enough to be held up for debates about their accuracy. If you’re pounding the table, at least be willing to admit that’s what you’re doing.

Now of course all of this got inspired by some particular issues, but I want to be very clear: everyone does this. We all want to believe that every objective fact points in the direction of the conclusion that we want. While most people are acutely aware of this tendency in whichever political party they disagree with, it is much harder to see it in yourself or in your friends. Writing on the internet has taught me to think carefully about how I handle criticism, but it’s also taught me a lot about how to handle praise. Just like there are many people who only criticize you because you are disagreeing with them, there are an equal number who only praise you because you’re saying something they want to hear. I’ve written before about the idea of “motivated numeracy” (here and for the Sojourners blog here), but studies do show that the ability to do math rises and falls depending on how much you like the conclusions that math provides….and that phenomenon gets worse the more intelligent you are. As I said in my piece for Sojourners, “Your intellectual capacity does NOT make you less likely to make an error — it simply makes you more likely to be a hypocrite about your errors.”

Now in the interest of full disclosure, I should admit that I know number blindness so well in part because I still fall prey to it. It creeps up every time I get worked up about a political or social issue I really care about, and it can slip out before I even have a chance to think through what I’m saying. One of the biggest benefits of doing the type of blogging I do is that almost no one lets me get away with it, but the impulse still lurks around. Part of why I make up these fallacies is to remind myself that guarding against bias and selective interpretation requires constant vigilance.

Good luck out there!

Stats in the News: February 2017

I’ve had a couple interesting stats related news articles forwarded to me recently, both of which are worth a look for those interested in the way data and stats shape our lives.

First they came for the guys with the data

This one comes from the confusing world of European economics, and is accompanied by the rather alarming headline “Greece’s Response to Its Resurgent Debt Crisis: Prosecute the Statistician” (note: WSJ articles are behind a paywall, Google the first sentence of the article to access it for free). The article covers the rather concerning story of how Greece attempted to clean up its (notoriously wrong) debt estimates, only to turn around and prosecute the statistician it hired to do so. Unsurprisingly, things soured when his calculations showed the debt was even worse than previously reported and were used to justify austerity measures. He’s been tried 4 times with no mathematical errors found, and it appears that he adhered to general EU accounting conventions in all cases. Unfortunately he still has multiple cases pending, and in at least one he’s up for life in prison.

Now I am not particularly a fan of economic data. Partially that’s because I’m not trained in that area, and partially because it appears to be some of the most easily manipulated data there is. The idea that someone could come up with a calculation standard that was unfair or favored one country over others is not crazy. There’s a million ways of saying “this assumption here is minor and reasonable but that assumption there is crazy and you’re deceptive for making it”. There’s nothing that guarantees the EU recommended way of doing things was fair or reasonable, other than the EU’s own say-so. Greece could have been screwed by German recommendations for debt calculations; I don’t know. However, prosecuting the person who did the calculations, as opposed to vigorously protesting the accounting standards themselves, is NOT the way to make your point….especially when he was literally hired to clean up known accounting tricks you never prosecuted anyone for.

Again, no idea who’s right here, but I do tend to believe (with all due respect to Popehat) that vagueness in data complaints is the hallmark of meritless thuggery. If your biggest complaint about a statistic is its outcome, then I begin to suspect your complaint is not actually a statistical one.

Safety and efficacy in Phase 1 clinical trials

The second article I got forwarded was an editorial from Nature, and is a call for an increased focus on efficacy in Phase 1 clinical trials. For those of you not familiar with the drug development world, Phase 1 trials currently only look at drug safety, without having to consider whether or not the drug works. About half of all drugs that proceed to Phase 2 or Phase 3 end up failing to demonstrate ANY efficacy.

The Nature editorial was spurred by a safety trial that went terribly wrong and ended up damaging almost all of the previously healthy volunteers. Given that there are a limited number of people willing to sign up as safety test subjects, this is a big issue. Previously the general consensus had been to let companies decide what was and was not worth proceeding with, on the belief that market forces would get companies to screen the drugs they were testing. However, some recent safety failures, along with recent publications showing how often statistical manipulation is used to push drugs along, have called this into question. As we saw in our “Does Popularity Influence Reliability” series, this effect will likely be worse the more widely studied the topic is.

It should be noted that major safety failures and/or damage from experimental drugs are fairly rare, so much of this is really a resource or ethics debate. Statistically though, it also speaks to increasing the pre-study odds we talked about in the “Why Most Published Research Findings are False” series. If we know that low pre-study odds are likely to lead to many false positives, then raising the bar for pre-study odds seems pretty reasonable. At the very least the companies should have to submit a calculation, along with the rationale. I still maintain this should be a public function of professional associations.
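To make the pre-study odds point concrete, here’s a minimal sketch in Python of the standard positive predictive value calculation from that series: the share of “statistically significant” findings that are actually true, given the pre-study odds that a tested hypothesis is true. The alpha, power, and odds values below are illustrative numbers I picked, not figures from the Nature editorial.

```python
def positive_predictive_value(pre_study_odds, alpha=0.05, power=0.80):
    """Share of significant findings that are true.

    pre_study_odds: ratio of true to false hypotheses being tested (R)
    alpha: false positive rate; power: 1 - false negative rate
    PPV = (power * R) / (power * R + alpha)
    """
    true_positives = power * pre_study_odds
    false_positives = alpha  # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# As the pre-study odds drop, significant results become less trustworthy
for odds in (1.0, 0.25, 0.05):
    print(f"pre-study odds {odds}: PPV = {positive_predictive_value(odds):.2f}")
```

With even odds a significant result is true about 94% of the time, but at 1-in-20 odds that falls below 50% — which is exactly why requiring a pre-study odds calculation up front seems reasonable.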