On Predictions and Definitions (After the Fact)

Twice recently I’ve seen minor characters on both sides of the political spectrum claim that they foresaw/predicted some recent event with “eerie precision”, when their predictions had actually seemed (to me at least) only loosely connected to what ended up happening.

While I was annoyed by these people, I was more annoyed by the fans of theirs who rushed to agree that it was clear that they had amazing foresight in making their calls. While obviously some of that is just in-group defensiveness, some of them really seem to believe that this person had done something amazing. While none of those fans are people who read my blog, I figured I’d blow off some steam by reminding everyone of two things:

  1. Redefining words makes improbable results quite common. In the John Ioannidis paper “Why Most Published Research Findings Are False”, one of his six corollaries for published research is “Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.” This is true in research, and even more true in political pontificating and prediction-making. Allowing yourself any latitude at all in redefining words will give you nearly unlimited predictive power after the fact. This is not a “minor quibble”; it’s literally the whole trick.
  2. Making lots of predictions has no consequences for a pundit’s career. Making predictions as a pundit is like playing roulette with the house’s money: you can’t actually lose. In “The Signal and the Noise” (a book which I once owned and have now lost to lending), Nate Silver reviews how often some pundits known for their “amazing predictions” actually make predictions. Answer: a lot. Most of them are wrong. However, the career of the pundit takes almost no hit for those wrong calls, yet skyrockets every time they are right. Thus they have no motivation to stop making crazy predictions, as they lose nothing for a wrong one and gain everything for a right one. When I first read TSATN, I made this matrix to illustrate:

So yes, next time you see someone make an “amazing prediction”, take a deep breath and ask yourself how much redefining is going on and how many predictions they had to throw out to get to the one that hit. Well, probably not you specifically, dear reader; you’re probably fine. This is almost certainly one of those “if you’re thinking about it enough to read this post, you’re probably not the problem” things. Regardless, thanks for letting me get that off my chest.
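If you want to see the house’s-money dynamic in action, here’s a toy simulation (entirely hypothetical numbers of my choosing: 200 pundits, 20 long-shot predictions each, a 10% chance any one call comes true):

```python
import random

random.seed(0)  # reproducible run

# Hypothetical setup: 200 pundits each make 20 independent long-shot
# predictions, each with only a 10% chance of coming true.
PUNDITS, CALLS, HIT_RATE = 200, 20, 0.10

hits = [sum(random.random() < HIT_RATE for _ in range(CALLS))
        for _ in range(PUNDITS)]

# Someone always ends up looking prescient, purely by chance...
print("Best pundit's record:", max(hits), "correct calls out of", CALLS)
# ...while the wrong calls quietly pile up at no career cost.
print("Total wrong calls across all pundits:", PUNDITS * CALLS - sum(hits))
```

With the asymmetric payoff described above, only that best record gets remembered; the thousands of misses are forgotten.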

In Praise of Confidence Intervals

This past week I was having a discussion with my high-school-teacher brother about an experiment his class was running and appropriate statistical methods for analyzing the data. We were discussing using the chi-square statistic to compare data from an in-class calorimetry experiment to the expected/published values (this is the point where the rest of my family wandered off), and he asked what other statistical analyses his kids could do that might help them understand their results. I mentioned that I was a big fan of confidence intervals for understanding data like this, and started to rattle off my reasons. While testing that produces a p-value is more commonly used in scientific work, I think for most people confidence intervals are more intuitive and should be worked into the mix. Since we were talking about all this at around 7am (prior to both the second cup of coffee AND a trip out to move the cows at his farm), I figured I’d use my blog post today to more thoroughly lay out a few reasons confidence intervals should be more widely used (particularly by teachers) and provide a few helpful links.

The foundation of my argument comes from a paper published a few years ago called “Why the P-value culture is bad and confidence intervals a better alternative”, which gets into the weeds on the stats, but makes a good overall case for moving away from a reliance on p-values and toward a focus on confidence intervals. The major reasons are:

  1. Confidence intervals use values you already have. P-values and confidence intervals are mathematically similar and take the same basic variables into account: the number of observations (n) and the variation of those observations (standard error, or SE). More observations and lower variability within those observations are generally considered good things. If you can get a p-value, you can get a confidence interval.
  2. Confidence intervals give you more information than the p-value. Where a p-value tells you just the statistical significance of the difference, the confidence interval tells you something about the magnitude of the difference. It gives an upper and lower limit, so you get a sense of what you’re really seeing. For kids learning about variation, I also think this can give them a better sense of how each of their experimental values affects the overall outcome. For example, if you’re doing a calorimetry experiment and know that the expected outcome is 100, and your class of 30 kids gets an average of 98 with a standard deviation of 5, you would tell them that p=.0366 and thus that this is a significant difference. Using the confidence interval, however, you would give them the range 96.2 to 99.8. This gives a better sense of how different the difference really is, as opposed to just accepting or rejecting a binary “is there a difference” question.
  3. Confidence intervals are more visual. The paper I mentioned above has a great figure that illustrates what I’m talking about. On this graph you can draw lines to show not just “is it different” but also “when do we really care”. I think this is easier to show kids than a p-value by itself, as there’s no equivalent visual for p-values.
  4. It’s easier to see the effect of sample size with a confidence interval. For the calorimetry experiment mentioned above, let’s show what happens if the class is different sizes, all with the same result of 98 with a standard deviation of 5:
    n    95% confidence interval    p-value
    10   94.9–101.1                 .2377
    15   95.5–100.5                 .1436
    20   95.8–100.2                 .0896
    25   96.0–100.0                 .0569
    30   96.2–99.8                  .0366

    I think watching the range shrink is clearer than watching a p-value drop, and again, this can easily be converted into a graph. If you’re running the experiment with multiple classes, comparing their results can also help show kids a wider range of what the variation can look like.

  5. Confidence intervals reiterate that some variation is to be expected. One of the harder statistical concepts for people to grasp is how predictable a bit of unpredictability really is. For some things we totally get this (like our average commute times), but for other things we seem to stumble (like the success of medical treatments) and let outliers color our thinking. In the calorimetry experiment, if one kid gets a value of 105, a confidence interval makes it much easier to see how that one outlier fits into the bigger picture than a single p-value does.

So there you go. Confidence intervals are a superior way of presenting both effect size and the significance of a finding, and they are easy to visualize for those who have trouble with written numbers. While they don’t do away with all of the pitfalls of p-values, they really don’t add any new pitfalls to the mix, and they confer some serious benefits for classroom learning. I used GraphPad to quickly calculate the confidence intervals I used here, and it has options for both summary and individual data.
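For anyone who wants to reproduce the table above without a stats package, here’s a minimal Python sketch of my own using the normal (z) approximation. One caveat: the intervals below match the table, but the p-values I originally got from GraphPad came from a t-test, so for small n the z-based p-values printed here will run a bit smaller (the t distribution is the more accurate choice for small samples).

```python
from statistics import NormalDist

def z_confidence_interval(mean, sd, n, level=0.95):
    """CI for a sample mean via the normal approximation.
    (A t-based interval is slightly wider for small n.)"""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for 95%
    se = sd / n ** 0.5                         # standard error of the mean
    return mean - z * se, mean + z * se

def z_p_value(mean, sd, n, expected):
    """Two-sided p-value for the difference from an expected value
    (z approximation; a t-test gives slightly larger values)."""
    z = abs(mean - expected) / (sd / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(z))

# Calorimetry example from the post: class mean 98, SD 5, expected value 100
for n in (10, 15, 20, 25, 30):
    lo, hi = z_confidence_interval(98, 5, n)
    print(f"n={n:2d}  95% CI: {lo:.1f}-{hi:.1f}  p={z_p_value(98, 5, n, 100):.4f}")
```

Watching the interval tighten around the true difference as n grows is, to my mind, a much better classroom demonstration than watching a lone p-value shrink.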

How to Make Friends and Influence Doctors

I got a little behind in my reading list this year, but I’m just finishing up Ben Goldacre’s Bad Pharma and it’s really good. Highly recommended if you want to know all sorts of excruciating detail about how we get the drugs we do, and lose faith in most everything.

The book introduced me to a paper from 2007 called “Following the Script: How Drug Reps Make Friends and Influence Doctors”, where a former pharma salesman lays out the different categories of doctors he encountered and how he worked to sell to each of them. It includes a whole table of doctor categories, including “the friendly and outgoing doctor” and “the aloof and skeptical doctor”, along with the techniques used to sell to each.

Since Goldacre is the absolute epitome of “aloof and skeptical”, he added his own explanation of the tactic reps use on evidence-based doctors:

“If they think you’re a crack, evidence-based medicine geek, they’ll only come to you when they have a strong case, and won’t bother promoting their weaker drugs at you. As a result, in the minds of bookish, sceptical evidence geeks, that rep will be remembered as a faithful witness to strong evidence; so when their friends ask about something the rep has said, they’re more likely to reply, ‘Well, to be fair, the reps from that company have always seemed pretty sound whenever they’ve brought new evidence to me…’ If, on the other hand, they think you’re a soft touch, then this too will be noted.”

Maybe it’s just because I’ve never been in sales, but it really had not occurred to me that this was a sales technique a person could use. Sneaky.

Of course, I then realized I’ve seen similar things in other situations. While most people know better than to come to me with shoddy stats during political debates, I’ve definitely seen people who told me they personally agree certain numbers are shoddy later use those same numbers in Facebook arguments with others who aren’t quite as grouchy. It’s an offshoot of the old “be pleasant while the boss/parent is in the room, show your true colors later” thing. Like a data Eddie Haskell. I may have a new bias name here. Gotta work on this one.

What I’m Reading: September 2017

You may have seen talk about the PURE study, which was recently reported under headlines like “Huge new study casts doubt on conventional wisdom about fat and carbs”. The study found that those with low-fat diets were more likely to die by the end of the study than those with higher-fat diets. However, Carbsane took a look and noticed some interesting things. First, the US wasn’t included, so we may need to be careful about generalizing the results there. The study also included some countries that were suffering other health crises at the time, like Zimbabwe. Finally, the group they looked at was adults aged 35 to 70, but they excluded anyone who had any pre-existing heart problems. This was the only disease they excluded, and it makes some of the “no correlation with heart disease” conclusions a little harder to generalize. To draw an analogy, it’s like trying to figure out if smoking leads to lung cancer while excluding everyone in your sample who already has lung problems. What you really want to see is both groups, together and separately.

For my language-oriented friends: this article about how cultures without words for numbers get by was really interesting. It makes the assumption that counting distinct quantities is an inherently unnatural thing to do, but I have to wonder about that. Some people do seem more numbers-oriented than others, so what happens to those folks? Do people who are good at numbers and quantities just get really depressed in these cultures? Do they find another outlet? As someone who starts counting things to deal with all kinds of emotions (boredom, stress, etc.), I feel like not having words for numbers would have a serious impact on my well-being.

There are a lot of herbs and supplements out there being marketed with dubious health claims, but exactly how those claims are worded depends on who you are. This article on how the same products are marketed on InfoWars and Goop is an interesting read, and a good reminder of how much the information we get can be colored by marketing spin.

On a political note, this Economist article about the concept of anti-trust laws in the data age was food for thought. 

Finally, I decided to do my capstone project for my degree on a topic I’ve become a little bit obsessed with: dietary variability. Specifically, I’m looking at those who identify as food-insecure (defined as not having the resources to obtain enough food to eat in the last 30 days), and comparing their health habits to those of people who have enough. While I already have the data set, I’ve been looking for interesting context articles like this one, which explores the “food-insecurity paradox”: apparently in the US, women who are food-insecure are actually more likely to be obese than those who aren’t. Interesting stuff.

Measuring Weather Events: A Few Options

Like most people in the US this past week, I was glued to the news watching the terrible devastation Hurricane Harvey was inflicting on Houston. I have quite a few close friends and work collaborators in the area, all of whom thankfully are okay and appear to have suffered minimal property damage themselves. Of course all are shaken, and they have been recommending good charities in the area to donate to. As someone who’s lived through a home flood/property destruction/disaster area/FEMA process at least once in my adult life, I know how helpless you feel watching your home get swallowed by water and how unbelievably long the recovery can be. Of course, when I went through it I had a huge advantage: the damage was limited to a small number of people and we had easy access to lots of undamaged places. I can’t imagine going through this when the whole city around you has suffered the same thing, or how you begin to triage a situation like this.

However, as the total cost of Hurricane Harvey continues to be assessed, I want to comment on some numbers I’m seeing tossed around. It’s important to remember in the aftermath of a weather event of any kind that there are actually a few different ways of measuring how “big” it is:

  1. Disaster Measurement:
    1. Lives lost
    2. Total financial cost
    3. Long-term impacts on other infrastructure (e.g. ability to rebuild, gas pipelines, etc.)
  2. Storm measurement
    1. Size of the weather event (e.g. magnitude of hurricane, volume of rainfall)
    2. Secondary outcomes of the weather event (e.g. flooding)

538 has a good breakdown of where Harvey ranks (so far) on all of these scales here.

Now, none of this matters much to those living through the storm’s aftermath, but in terms of overall patterns these can be important distinctions to keep in mind. For example, many people have been wondering how Harvey compares to Katrina. Because of the large loss of life from Katrina (almost 2,000 deaths) and the high cost ($160 billion), it’s clear that Katrina is the benchmark for disastrous storms. However, in terms of wind speed and power, Katrina was actually pretty similar to Hurricane Andrew in 1992, which resulted in 61 deaths and had a quarter of the cost. So why did Katrina have such a massive death toll? Well, as 538 explains:

Katrina’s devastation was a result of the failure of government flood protection systems, violent storm surges, a chaotic evacuation plan and an ill-prepared city government.

So the most disastrous storms are not always the biggest, and the biggest storms are not always the most disastrous. This is important because the number of weather events causing massive damage (as in > $1 billion) is going up:

Source of graph.
Source of disaster data.

However, the number of hurricanes hitting the US has not gone up in that time (more aggregate data ending in 2004 here, storm level data through 2016 here). This graph does not include 2017 or Hurricane Harvey.

Now all of these methods of measuring events are valid, depending on what you’re using them for. However, that doesn’t mean the measures are totally interchangeable. As with all data, the way you intend to use it matters. If you’re making a case about weather patterns and climate change, costly storm data doesn’t prove the weather itself is worsening. However if you’re trying to make a case for infrastructure spending, cost data may be exactly what you need.

Stay safe everyone, and if you can spare a bit of cash, our friends in Houston can use it.

Baby, You Can’t Drive My Car

This week I was rather surprised to find out that an early-20s acquaintance of mine who lives in a rather rural area did not have a driver’s license. I know this person well enough to know there is no medical reason for this, and she is now pursuing getting one. Knowing the area she lives in, I was pretty surprised to hear this, particularly considering she has a job and a small child.

Now, living around a large city with lots of public transportation options, I do know people who are medically unable to get a license (and thus stick close to public transit) and people who simply dislike the thought of driving (because they grew up around public transit), but I hadn’t met many people in the (non-big-city) area I grew up in who didn’t get a license.

As often happens when I’m surprised by something, I decided to go ahead and look up some numbers to see how warranted my surprise was. It turns out I was right to be surprised, but there may be a trend developing here.

According to the Federal Highway Administration, in 2009 87% of the over-16 population had a driver’s license. There’s a lot of state to state variation, but the states I’ve lived in do trend high when it comes to the number of drivers per 1000 people:

Note: that map shows licenses per 1000 people of all ages, so some states with particularly young populations may be skewed.

This appeared to confirm my suspicions that not having a license was relatively unusual, particularly in the New England region. Most of the places that have lower rates of licenses are those that have big cities, where presumably people can still get around even if they don’t know how to drive. I am a little curious about what’s driving the lowish rates in Utah, Texas and Oklahoma, so if anyone from there wants to weigh in I’d appreciate it.

I thought that was the end of the story, until I did some more Googling and found this story from the Atlantic, about the decreasing number of people who are getting driver’s licenses. The data I cited above is from 2009, and apparently the number of licensed drivers has been falling ever since then. For example, this paper found that in 2008, 82% of 20-24 year olds had driver’s licenses, but by 2014 76.7% did. In contrast, in 1983 that number was 91.8%.

So what’s the reason for the decline? According to this survey, the top 5 reasons for those aged 18 to 39 are “(1) too busy or not enough time to get a driver’s license (37%), (2) owning and maintaining a vehicle is too expensive (32%), (3) able to get transportation from others (31%), (4) prefer to bike or walk (22%), (5) prefer to use public transportation (17%)”.

Like most surveys, though, I don’t think this tells the whole story. For example, the top reason for not having a license is that people are “too busy” to get one, but the study authors noted that those without licenses are less likely to be employed and have less education than those with licenses. This suggests that it is not an excess of outside commitments that is preventing people from getting licenses. Additionally, anyone who has ever lost the use of their car knows it can take a lot more time to get a ride from someone else than it does to just hop in your own vehicle.

My personal theory is that getting a driver’s license is something that requires a bit of activation energy. Over the last decade or two, state legislatures have progressively enacted laws that put more restrictions on teen drivers, so the excitement of “got my license this morning and now I’m taking all my friends out for a ride tonight” no longer exists in many states. For example, in Massachusetts drivers under 18 can’t drive anyone else under 18 (except siblings) for the first six months. This is probably a good practice, but it almost certainly decreases the motivation of some kids to go through all the work of getting their license. After all, this is an age group pretty notorious for being driven by instant gratification.

Additionally, with high costs for insurance and vehicles, many parents may not be as excited for their kids to get licenses. Without parental support, it can be really hard to navigate the whole process, and if a parent starts to think it may be easier to keep driving you around than to pay for insurance, this could further decrease the numbers. With younger generations spending more time living at home, parental support is an increasing factor. Anyone attempting to get a license relies to some degree on the willingness of others to teach them to drive and help them practice, so the “too busy” reason may actually be driven just as much by the busyness of those around you as by your own. You can be unemployed and have plenty of time to practice driving, but if no one around you has time to take you out, it won’t help.

Finally, there may be a small influence from new technology. With things like Uber making rides available more quickly and Amazon delivering more things to your door, it may actually be easier to function without a license than it was 10 years ago. Even the general shift from “have to go out to do anything fun” to “can stay home and entertain myself online” may account for a bit of the decreased enthusiasm for getting licensed to drive. For any one person it’s doubtful that’s the whole reason, but for a few it may be enough to tip the scales.

It will be interesting to see if this corrects itself at some point. Will those not getting their license at 18 now get it at 28 instead, or will they forgo it entirely? In the initial survey, most (about two-thirds) said they still plan on pursuing one at some point, but whether or not that happens remains to be seen.

The Real Dunning-Kruger Graph

I’m off camping this weekend, so you’re getting a short but important PSA.

If you’ve hung out on the internet for any length of time, or in circles that talk about psych/cognitive biases a lot, you’ve likely heard of the Dunning-Kruger effect. Defined by Wikipedia as “a cognitive bias wherein persons of low ability suffer from illusory superiority, mistakenly assessing their cognitive ability as greater than it is”, it’s often cited to explain why people who know very little about something get so confident in their ignorance.

Recently, I’ve seen a few references to it accompanied by a graph that looks like this (one example here):

While that chart is rather funny, you should keep in mind that it doesn’t really reflect the graphs Dunning and Kruger actually obtained in their study. There were four graphs in that study (each from a slightly different version of the experiment), and they looked like this:


Logic and reasoning (first of two):


And one more logic and reasoning (performed under different conditions):

So based on the actual graphs, Dunning and Kruger did not find that the lowest quartile thought they did better than the highest quartile; they found that the lowest quartile just thought they were closer to average than they actually were. Additionally, it appears the 3rd quartile (above average, but not quite the top) is the group most likely to be clear-sighted about its own performance.

Also, in terms of generalizability, it should be noted that the participants in this study were all Cornell undergrads being ranked against each other. Those bottom-quartile kids on the grammar graph are almost certainly not bottom quartile in comparison to the general population, so their overconfidence likely has at least some basis. It’s a little like asking readers of this blog to rank their math skills against other readers of this blog: even the bottom of the pack is probably above average. When you’re in a self-selected group like that, your ranking mistakes may be due more to misjudging those around you than to simple overconfidence in yourself.
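In fact, you can get a pattern a lot like the published graphs out of a toy model of my own devising (this is an illustration of the noisy-self-assessment idea, not Dunning and Kruger’s actual data): give everyone a true skill level, let each person estimate it with some error, and then compare actual and perceived percentiles by quartile.

```python
import random
from statistics import mean

random.seed(42)
N = 10_000

# Toy model: true skill, plus a noisy self-estimate of that skill.
skill = [random.gauss(0, 1) for _ in range(N)]
estimate = [s + random.gauss(0, 1) for s in skill]

def percentile_ranks(values):
    """Percentile rank (0-100) of each value within the group."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = 100 * r / (len(values) - 1)
    return ranks

actual = percentile_ranks(skill)
perceived = percentile_ranks(estimate)

# Average actual vs. perceived percentile within each actual-skill quartile:
# the bottom quartile's noisy estimates regress toward the middle.
pairs = sorted(zip(actual, perceived))
for q in range(4):
    chunk = pairs[q * N // 4:(q + 1) * N // 4]
    a = mean(p[0] for p in chunk)
    p_ = mean(p[1] for p in chunk)
    print(f"Quartile {q + 1}: actual {a:4.1f}, perceived {p_:4.1f}")
```

Purely from the noise, the bottom quartile “sees” itself as closer to average and the top quartile underrates itself, which is exactly the “more average than they actually were” shape of the real graphs.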

I don’t mean to suggest the phenomenon isn’t real (follow-up studies suggest it is), but it’s worth keeping in mind that the effect is more “subpar people thinking they’re middle of the pack” than “ignorant people thinking they’re experts”. For more interesting analysis, see here, and remember that graphs drawn in MS Paint rarely reflect actual published work.