Probability Paper and Polling Corrections

This is another post from my grandfather’s newsletter (intro to that here). When I first wrote about the newsletter, I mentioned that he manufactured probability paper for people who needed to do advanced calculations in the days before computers. I found some cool examples while looking through the 1975 issues recently, so I thought I’d show them off here. First up was this paper, used to determine what the “true” polling percentage is when you have a lot of undecided voters. He was using an equation he called Seder’s method to adjust the pollsters’ predictions:

[Image: probability paper for polling corrections]

To use it, you find the percentage of people who responded to the survey with a definite answer on the x-axis, then look to the right to find the percentage of people who made a particular choice. Once you have that data point, you draw a line to the left (to the traditional y-axis) to find out how many people will probably end up going with a particular choice once they have to make one.

I decided to try it on a recent Quinnipiac presidential election poll (from June 29th, 2016). This has Clinton polling at 39%, Trump at 37%, Johnson at 8% and Stein at 4%, with 12% answering some combination of Unknown/Undecided/Maybe Won’t Vote/Maybe Someone Else. Here’s what this would look like filled out:

[Image: the probability paper filled out with the Quinnipiac numbers]

As you can see, it adjusts everyone a little upward, with a little more going toward those polling at low numbers. Whether or not this is the correct adjustment is up for debate, but it’s a fun little tool for those who don’t like equations.
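For comparison, here’s the simplest possible adjustment in code: allocating the undecideds proportionally. To be clear, this is a baseline and not Seder’s method….proportional allocation gives relatively more to the front-runners, which is exactly where the paper differs by boosting the low-polling candidates a bit more.

```python
# A simple baseline (NOT Seder's method): spread the 12% undecided
# proportionally across the candidates who gave a definite answer.
poll = {"Clinton": 39, "Trump": 37, "Johnson": 8, "Stein": 4}
decided = sum(poll.values())  # 88% gave a definite answer

adjusted = {name: 100 * pct / decided for name, pct in poll.items()}
for name, pct in adjusted.items():
    print(f"{name}: {pct:.1f}%")
# Clinton: 44.3%, Trump: 42.0%, Johnson: 9.1%, Stein: 4.5%
```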

This particular one was actually one of his easy ones. Here’s the paper for getting confidence intervals for Bernoulli probabilities:

[Image: probability paper for Bernoulli confidence intervals]

It looks complicated, but compared to doing it by hand, this was MUCH easier. To show how much time we have on our hands now that computers do the complicated stuff, check out my take on the Bernoulli distribution here. That’s what I do while SAS is importing my files. Ah, technology.
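Speaking of which, here’s roughly what the computer is doing in the meantime. I don’t know which interval construction grandpa’s paper was built on, so this sketch uses the Wilson score interval, one standard choice for a Bernoulli proportion:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a Bernoulli proportion.

    z = 1.96 corresponds to roughly 95% confidence.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)
    )
    return center - halfwidth, center + halfwidth

# e.g., 39 "yes" answers out of 100 surveyed
print(wilson_interval(39, 100))  # roughly (0.30, 0.49)
```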

The Signal and the Noise: Chapter 5

Apparently we’re terrible at predicting earthquakes.

That’s what Chapter 5 is about, and it makes sense. Predicting rare events (Black Swans, as Taleb would call them) is terribly difficult because you may only be working with a theoretical possibility and a limited data set. Even though we can get a general sense of where earthquakes may hit, we still don’t get much data on the major ones. This map from Wired shows some interesting regional information:

So with limited data points, the tendency is to take every data point seriously and risk overfitting the model. The other problem is not going far enough back with the data. In Japan prior to the Fukushima disaster, evidence that major earthquakes had hit thousands of years ago was left off the risk assessment.
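To see why sparse data invites overfitting, here’s a minimal sketch with made-up numbers (not earthquake records): fit eight noisy points with a sensible line and with a polynomial flexible enough to hit every point exactly, then extrapolate both.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)                # only eight observations
y = 2 * x + rng.normal(0, 0.3, size=8)  # the true signal is a line

line = np.polyfit(x, y, 1)    # simple model
wiggle = np.polyfit(x, y, 7)  # degree-7 polynomial: hits every point

x_new = 1.2                       # extrapolate just beyond the data
print(np.polyval(line, x_new))    # stays near the true value of 2.4
print(np.polyval(wiggle, x_new))  # typically lands far off the mark
```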

[Image: Signal and Noise Chapter 5 contingency matrix]

My most memorable earthquake experience was actually a few weeks after my son was born. I was feeding him, and I thought a large truck had gone by. Something felt off though, and he seemed surprisingly confused by it. When I went downstairs again, I checked the news and realized that “truck” had been an earthquake.

5 Things You Should Know About the Body Mass Index

This post comes from a reader question asking for my opinion on the Body Mass Index (BMI). Quick intro for the unfamiliar: the BMI is a calculated value that relates your height and weight. It takes your weight (in kilograms) and divides it by your height (in meters) squared. For those of you in the US, that’s weight (in pounds) times 703, divided by height (in inches) squared. Automatic calculator here. A BMI of less than 18.5 is considered underweight, 18.5-24.9 is normal, 25-29.9 is overweight, and 30 or above is obese. So what’s the deal with this thing?
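In code, the whole calculation is a one-liner in either unit system:

```python
def bmi_metric(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_us(weight_lb, height_in):
    return weight_lb * 703 / height_in ** 2

def bmi_category(bmi):
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

# e.g., 150 pounds at 5'5" (65 inches)
print(bmi_us(150, 65), bmi_category(bmi_us(150, 65)))  # ~24.96 "normal"
```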

  1. It was developed for use in population health, and it’s been around longer than you might think. The BMI was invented by Adolphe Quetelet between 1830 and 1850. He was a statistician who needed an easy way of comparing population weights that actually took height into account. This makes a lot of sense….height is more strongly correlated with weight than any other variable. In fact, as a species we’re about 4 inches taller than we were when the BMI was invented. Anyway, it was given the name “Body Mass Index” by Ancel Keys in 1972. Keys was conducting research on the relative obesity of different populations throughout the world, and was sorting through the various equations that related height and weight to see how they correlated with measured body fat percentage. He determined this one was the best, though his comparisons did not include women, children, those over 65, or non-Caucasians.
  2. Being outside the normal range means more than being inside of it. So if Keys was looking for something that correlated with body fat percentage, how does the BMI do? Well, a 2010 study found that the correlation is about r = .66 for men and r = .84 for women. However, the researchers also looked at its usefulness as a screening test….how often did it accurately sort people into “high body fat” or “not-high body fat”? For those with BMIs greater than 30, the positive predictive value is better than the negative predictive value. So basically, if you know you have a BMI over 30, you are also likely to have excess body fat (87% of men, 99% of women). However, among those with a BMI under 30, about 40% of men and 46% of women still had excess body fat. If you move the line down to a BMI of 25, some gender differences show up: 69% of men with BMIs over 25 actually have excess body fat, compared to 90% of women. This means a full 30% of “overweight” males are actually fine. About 20% of both genders with BMIs under 25 actually have excess body fat. So basically, if you’re above 30 you almost certainly have excess body fat, but being below that line doesn’t necessarily let you off the hook.
  3. It doesn’t always take population demographics into account. One possible reason for the gender discrepancy above is height….BMI is actually weaker the further you fall outside the 5’5”-5’9” range. I would love to see the data from #2 rerun not by gender but by height, to see if the discrepancy holds. In terms of health predictions, BMI cutoffs also show variability by race. For example, a white person with a BMI of 30 carries the same diabetes risk as a South Asian with a BMI of 22 or a Chinese person with a BMI of 24. That’s a huge difference, and it is not always accounted for in worldwide obesity tables.
  4. Overall, it’s pretty well correlated with early mortality. So with all the inaccuracies, why do we use it? Well, this is why: [Image: BMI vs. all-cause mortality hazard ratio curve] That graph is from this 2010 paper that looked at 1.46 million white adults in the US. The hazard ratio is for their all-cause mortality at the ten-year mark (median start age was 58). Particularly for the higher numbers, that’s a pretty big difference. To note: some other observational studies have had a slightly different shaped curve, especially at the lower end (25-30 BMI), that suggested an “obesity paradox”. More recent studies haven’t found this, and there’s some controversy about how to correctly interpret these studies. The short version is that correlation isn’t causation, and we don’t know if losing weight helps with these numbers.
  5. For individuals on the borderline, you need another metric. Back to individuals though….should you take your BMI seriously? Well, maybe. It’s pretty clear that if you’re getting a number over 30 you probably should. There’s always the “super muscled athlete” exception, but you would pretty much know if that were you. If you need another quick metric to assess disease risk, it looks like using a combination of waist circumference and BMI may yield a better picture of health than BMI alone, especially for men. Here’s the suggested action range from that paper: [Image: suggested action levels by waist circumference and BMI] While waist circumference is obviously not something that most people know off the top of their head, it should be easy enough for doctors to take in an office visit.

Overall, it’s important to remember that metrics like BMI or waist circumference are really just screening tests, and you get what you pay for. While we hope they catch most people who are at high risk, there will always be false positives and false negatives. While in population studies these may balance each other out, for any individual it’s important to take a look at all the various factors that go into health. So, um, talk to your doctor and avoid over-interpretation.
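Since the whole post boils down to screening-test logic, here’s a minimal sketch of where predictive values like the ones in point #2 come from. The sensitivity, specificity, and prevalence below are made-up illustration numbers, not figures from the 2010 study:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule applied to a screening test."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)  # P(condition | positive test)
    npv = tn / (tn + fn)  # P(no condition | negative test)
    return ppv, npv

# Hypothetical test: catches 60% of cases, rules out 90% of non-cases,
# in a population where half actually have the condition
print(predictive_values(0.60, 0.90, 0.50))  # ~(0.86, 0.69)
```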

The Tim Tebow Fallacy

I’ve blogged a lot about various cognitive biases and logical fallacies here over the years, but today I want to talk about one I just kinda made up: The Tim Tebow Fallacy. Yeah, that’s right, this guy:

[Image: Tim Tebow]

I initially mentioned the premise in this post, but for those of you who missed it, here’s the background: Tim Tebow is a Heisman trophy winning quarterback who played in the NFL from 2010-2012. Despite his short career, in 2011 he was all anyone could talk about. Everyone had an opinion about him and he was unbelievably polarizing, despite being a quite pleasant individual and a good-but-not-great player. It was all a little baffling, and writer Chuck Klosterman took a crack at explaining the issue here. In trying to work through the controversy, he made this observation:

On one pole, you have people who hate him because he’s too much of an in-your-face good person, which makes very little sense; at the other pole, you have people who love him because he succeeds at his job while being uniquely unskilled at its traditional requirements, which seems almost as weird. Equally bizarre is the way both groups perceive themselves as the oppressed minority who are fighting against dominant public opinion, although I suppose that has become the way most Americans go through life.

Ever since I read that, I’ve been watching political conversations and am stunned by how often this type of thinking happens. It seems some people not only want to have a belief and defend it, but also get some sort of cachet from having an unacknowledged or rare belief. It’s like a reverse bandwagon effect (where someone likes something more because it’s not popular) combined with a kind of majority illusion (where people inaccurately assess how many people actually hold a particular opinion). So as a fallacy, I’d say it’s when you find a belief more attractive and more correct because it runs counter to what you believe popular perception to be. To put it more technically:

Tim Tebow Fallacy: The tendency to increase the strength of a belief based on an incorrect perception that your viewpoint is underrepresented in the public discourse

Need an example? Take a group conversation at a party: Person A mentions they like the Lord of the Rings movies. Person B pipes up that they actually really didn’t like them. Person C agrees with Person B, and the two bond a bit over finding out they share this unusual opinion. After a minute or two, Person A is getting a little frustrated and is now an even BIGGER fan of the movies. If it stopped there it wouldn’t be a Tebow Fallacy, just regular old defensiveness. What kicks it over the edge is when Person A starts claiming “no one ever talks about how good the cinematography was in those movies!”, “no one really appreciates how innovative those were!”, or “no one ever gives geeky stuff any credit!”. They walk away irritated, believing that saying you like the Lord of the Rings movies is sort of a subversive act and that general defensiveness is called for.

Now of course this is all kind of poppycock. The Lord of the Rings movies are some of the most highly regarded movies of all time, and set records for critical acclaim and box office draw. The viewpoint Person A was defending is the dominant one in nearly every circle except the one they happened to wander into that night, yet they’re defensive and feel they need to continue to prove their point. It’s the Tim Tebow Fallacy.

Now I think there are a couple of reasons this sort of thing happens, and I suspect many of them are getting worse because of the internet:

  1. We’re terrible at figuring out how widespread opinions are. In my Lord of the Rings example, Person A extrapolated small group dynamics to the general population, likely without even realizing it. This is pretty understandable when it happens in person, but it gets really hard to sort through when you’re reading things online. Online you can read pages and pages of criticism of even the most well-loved stuff, and come away believing many more people think a certain way than actually do. Even if 99.9% of Americans love something, that still leaves about 325,000 who don’t. If those people have blogs or show up in comments sections, it can leave you with the impression that their opinions are more widely held than they are. And make no mistake, this influences us. It’s why Popular Science shut off their comments section.
  2. We feed off each other and headlines. The internet being what it is, let’s imagine Person A goes home and vents their frustrations with Persons B and C online. What started as an issue at one party now turns into an anecdote that can be spread. The effect of this should not be underestimated. A few months ago someone sent me this story, about a writer with a large-ish Twitter following who had Tweeted a single picture of a lipstick named “Underage Red” with the caption “Just went shopping for some makeup. How is this a lipstick color?”. The whole story is here, but by the end of it her single Tweet had made it all the way to Time Magazine as proof of a “major controversy” and was being cited as an example of “outrage culture”. She was inundated with people calling her out (including the lipstick creator, Kat Von D) for her opinion, all seemingly believing they were fighting the good fight against a dominant narrative. A narrative that was comprised of a single, rather reasonable Tweet about a lipstick. I don’t blame those people, by the way….I blame the media that creates an “outrage” story out of a single Tweet, then follows up with think pieces about “PC culture” and “oversensitivity”. The point is that in 2016 a single anecdote going viral is really common, but I’m not sure we’ve all adjusted our reactions to account for the whole “wait, how many people actually think this way?” piece. It’s even worse when you consider how often people lie about stuff. Throw in a few fabricated/exaggerated/one-sided retellings, and suddenly you can have viral anecdotes that never even happened.
  3. It’s human nature to strive for optimal distinctiveness. While going with the crowd gets a lot of attention, I think it’s worth noting that humans are actually a little ambivalent about this. The theory of optimal distinctiveness argues that we are all simultaneously striving to differentiate ourselves just enough to be recognized while not going so far as to be ostracized. Basically, we want to be in the middle of this graph:

    [Image: optimal distinctiveness curve] From Brewer, M.B. (1991). “The social self: On being the same and different at the same time”. Personality and Social Psychology Bulletin, 17, 475-482.

    By positioning our arguments against the dominant narrative, we can both defend something we really believe in AND differentiate ourselves from the group. I think that makes this type of fallacy uniquely attractive for many people.

  4. We’re selecting our own media sources, then judging them. One of my favorite humor websites of all time is Hyperbole and a Half by Allie Brosh. After going viral, Brosh put together an FAQ that includes this question and answer:

    Question: I don’t think you’re funny and I’m frustrated that other people do Answer: It’s okay.  Try not to be too upset about it.  Humor is simply your brain being surprised by an unexpected variation in a pattern that it recognizes.  If your brain doesn’t recognize the pattern or the pattern is already too familiar to your brain, you won’t find something humorous.

    With the internet (and cable news, Facebook, Twitter, etc) we now have the ability to see hundreds or thousands of different opinions in a single week. What we often fail to recognize is that we actually select for most of these….who we follow or friend, webpages we visit etc.  Everyone else does too. We are all constructing very individualized patterns of information intake, and it’s hard to know how usual or unusual our own pattern is. Instead of just “those who loved Lord of the Rings movies” and “those who didn’t” there’s also “those who hated it because they are huge book fans”, “those who don’t like any movie with magic in it”, “those who hated it because they hate Elijah Wood”, “those who prefer Meet the Feebles”, “those who didn’t get that whole ring thing”,  “those who thought that was the one with the lion”, “those who liked it until they heard all the hype then wished everyone would calm down”, etc etc etc. Point is, we are often selecting for certain opinions, then reacting to the opinions we selected ourselves and taking it out on others who may legitimately never have encountered the opinion we’re talking about. If you combine this with point #3 above, you can see where we end up positioning ourselves against a narrative others may not be hearing.

  5. It’s a common mistake for great thinkers. Steve Jobs is widely considered one of the most innovative and intuitive businessmen of all time. His success largely came from his uncanny ability to identify gaps in the tech marketplace that were invisible to everyone else, and then fill them better than anyone. What often gets lost in the quick blurbs about him, though, is how often he misfired. He initially thought Pixar should be a hardware company. He whiffed on multiple computer designs and had a whole confusing company called NeXT…..and he’s still considered one of the best in the world at this. Great thinkers are always trying to find what others are missing, and even the best screw this up pretty frequently. As with many fallacies, it’s important to remember that IQ points offer limited protection. What makes your mind great can also be your downfall.
  6. It gets us out of dealing with uncomfortable truths. One of the first brutal call-outs I ever got on the internet was over a glib and stupid comment I made about the Iraq War. It was back in 2004 or so, and I got in an argument with someone I respected over whether or not we should have gone in. I was against the war, and at some point in the debate I got mad and said that my issue was that George Bush “hadn’t allowed any debate”. I was immediately jumped on, and it was pointed out to me in no uncertain terms that I was just flat out making that up. There was endless debate; I just didn’t like the outcome. That stung like hell, but it was true. Regardless of what anyone believes about the Iraq War then or now, we did debate it. It was easier for me to believe that my preferred viewpoint had been systematically squashed than that others had listened to it and not found it compelling. This works the other way too. I’m sure in 2004 I could have found someone claiming vociferously that we’d over-debated the Iraq War, based mostly on the fact that they’d made up their mind early. Most recently I saw this happen with Pokemon Go, where the Facebook statuses talking about how great it was showed up about 30 minutes before the “I’m totally sick of this, can we all stop talking about it” statuses. I’m not saying there’s a “right” level of public discourse, but I am saying that it’s hard to judge a “wrong” level without being pretty arbitrary. We all want the game to end when our team’s ahead, but it really just doesn’t work that way.

So that’s my grand theory right there. As I hope point #6 showed, I don’t think I’m exempt from this. A huge amount of this blog and my real-life political discussions are based on the premise of “not enough people get this important fact I’m about to share”. I get it. I’m guilty.

On the other hand, I think in the age of the internet, when we can be exposed to so many different viewpoints, we should be careful about how we let the existence of those viewpoints influence our own feelings. If the forcefulness of our arguments is always indexed to what we think the forcefulness of our opposition is, that will leave us progressively more open to infection by the Toxoplasma of Rage. As the opportunities for creating our own “popular narrative” increase, we have to be even more careful to reality-check it at times. Check the numbers in opinion polls. Read media that opposes you. Make friends outside your demographic. Consider criticism. Don’t play for the Jets. You know, all the usual fallacy stuff.

Tebow Image Credit

The Signal and the Noise: Chapter 4

I’ve been going through the book The Signal and the Noise and pulling out some of the anecdotes into contingency matrices. Chapter 4 covers weather forecasts.

Chapter 4 was pretty interesting, as it covered weather predictions from various sources and presented data showing how accurate each source’s predictions were. Essentially, the graphs plotted the prediction (i.e. “20% chance of rain”) against the frequency of rain actually occurring after the prediction. They found that the National Weather Service is the most accurate, then the Weather Channel, then local TV stations.
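If you have a pile of forecasts and outcomes, checking calibration like this takes only a few lines. The numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical history: stated rain probabilities and what actually happened
forecasts = np.array([0.2, 0.2, 0.2, 0.5, 0.5, 0.8, 0.8, 0.8, 1.0, 1.0])
rained    = np.array([0,   0,   1,   0,   1,   1,   1,   0,   1,   1  ])

# A well-calibrated forecaster's "20%" days should see rain ~20% of the time
for p in np.unique(forecasts):
    observed = rained[forecasts == p].mean()
    print(f"forecast {p:.0%} -> rained {observed:.0%} of the time")
```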

While that was interesting in and of itself, what really intrigued me was the discussion of whether an accurate forecast was actually a good forecast. People watching the local news for their weather are almost invariably going to make decisions based on that forecast, so meteorologists actually have a lot of incentives to exaggerate bad weather a bit. After all, people are much less likely to be annoyed by the time they brought an umbrella and didn’t need it than the time they got soaked by a storm they didn’t expect. The National Weather Service on the other hand is taxpayer funded to be as accurate as possible, and may end up seeing their track record put in front of Congress at some point. Different incentives mean different choices.

[Image: Signal and Noise Chapter 4 contingency matrix]

To give you an idea of the comparison, when the National Weather Service says the chance of rain is 100%, it’s about 98%. When the Weather Channel says it, it’s about 92%. When a local station says it, it’s about 68%. When Aaron Justus says it….well, this happens:

Rewind: Politics and Polling in 1975

I’ve mentioned before on this blog that my grandfather was a statistician who ran his own company producing probability chart paper. For those of you under the age of 40 (50? 60?) who weren’t raised around such things, this was basically graphing software before there were computers. Probability chart paper manipulated the axes of charts and allowed you to graph fancy distributions without actually having to calculate every value out by hand. Kind of like a slide rule, but for graphing. Not helping the under-40 crowd with that analogy, I’m sure.
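For a sense of what the paper was doing, here’s a sketch of the trick behind normal probability paper: warp the cumulative-percentage axis with the inverse normal CDF, and normally distributed data falls on a straight line (the same idea works for other distributions with a different axis transform):

```python
import numpy as np
from scipy import stats

# Sorted data and its cumulative plotting positions
data = np.sort(stats.norm.rvs(loc=50, scale=10, size=30, random_state=1))
cum_frac = (np.arange(1, len(data) + 1) - 0.5) / len(data)

# The "probability paper" axis: inverse normal CDF of the cumulative
# fraction. Plot data vs. y and normal data lines up straight; the slope
# and intercept read off the standard deviation and mean, no hand
# calculation required.
y = stats.norm.ppf(cum_frac)
print(np.polyfit(data, y, 1))  # slope ~1/10, intercept ~-5 for this sample
```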

ANYWAY, what I don’t think I’ve mentioned here is that my grandfather also happened to be a stats blogger before computers existed. From 1974 to 1985 he produced a quarterly newsletter teaching people how to use statistics more effectively. I found out a few months ago that my father had actually saved a copy of all of these newsletters, and I’ve made it my goal this summer to read and digitize every issue. While a lot of the newsletters are teaching people how to do hand calculations (shudder), I may be pulling out a few snippets here and there and posting them. Today I was reading the issue from Late Winter (January and February) of 1975, and stumbled across this gem I thought people would appreciate:

[Image: excerpt from the 1975 newsletter]

I still don’t know how he typed all those equations with a typewriter.

Gee, glad things have improved so much.

Fun (possibly exaggerated) family legend: my grandfather was a Democrat for most of his life, but he hated Ted Kennedy so much that he maintained a Massachusetts address for almost a year after he moved to New Hampshire just so he could continue voting against him.

Medical Marijuana and Painkiller Overdoses: Does One Reduce the Other?

I’ve talked before here about the issues with confusing correlation and causation, and more recently I’ve also talked about the steps needed to establish a causal link between two things.

Thus I was interested to see this article in the Washington Post recently about attempts to establish a causal link between access to medical marijuana and a decrease in painkiller-related deaths. Studies suggesting that access to medical marijuana was associated with lower rates of overdose deaths have been around since this JAMA paper was published in 2014, and those findings were repeated and broadened in 2015 with this paper. Both papers found that states with access to medical marijuana had painkiller-related death rates up to 25% lower than states with no such access. This showed at least some promise of moving toward a causal link, as it established a reproducible, consistent association.

This was not without its critics. When the Washington Post covered the 2015 paper, they interviewed a skeptical researcher who pointed out that painkiller users who also use medical marijuana are at higher risk for overdose. Proponents of medical marijuana pointed out that this only covered those who were already prescribed painkillers. If it could be established that access to medical marijuana reduced the number of painkiller prescriptions being written in the first place, then you could actually start to establish a plausible and coherent theory….two more links in the chain of causality.

Long story short, that’s what this new paper did. They took a look at how many prescriptions your average physician wrote in states with legal medical marijuana vs those without, and found this:

As a balance, they also looked at other drugs that had nothing to do with medical marijuana (like antibiotics or blood thinners) and discovered there was no difference in those prescription rates.

While the numbers for anxiety and depression medication are interesting, they may only translate into a handful of patients per year. That pain medication number, on the other hand, is pretty damn impressive. 1,826 fewer doses of painkillers could translate into at least half a dozen patients per physician (if you’re assuming daily use for a year), or more if you’re assuming less frequent use. This gives some pretty hefty evidence that medical marijuana could be lowering overdose rates by lowering the number of patients getting a different painkiller prescription to begin with.
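The back-of-the-envelope math on that 1,826 number, under the stated assumptions:

```python
doses_fewer_per_year = 1826  # fewer painkiller doses per physician per year
days_per_year = 365

# If each patient took one dose a day, all year:
print(doses_fewer_per_year / days_per_year)        # ~5 patients per physician

# If patients used painkillers less often, say half the year:
print(doses_fewer_per_year / (days_per_year / 2))  # ~10 patients per physician
```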

I’d be interested to see if there’s a dose-response relationship here….within the states that have legal medical marijuana, do states with looser laws/more access see even lower death rates? And do those states with lower overdose death rates see an increase in any other death rates, like motor vehicle accidents?

Interesting data to ponder, especially since full legalization is on my state ballot this November. Regardless of the politics however, it’s a great example of how to slowly but surely make a case for causality.

Type IV Errors: When Being Right is Not Enough

Okay, after discussing Type I and Type II errors a few weeks ago and Type III errors last week, it’s only natural that this week we’d move on to Type IV errors. This is another error type that doesn’t have a formal definition, but is important to remember because it’s actually been kind of a problem in some studies. Basically, a Type IV error is an incorrect interpretation of a correct result.

For example, let’s say you go to the doctor because you think you tore your ACL:

A Type I error would occur if the doctor told you that your ACL was torn when it wasn’t. (False Positive)

A Type II error would occur if the doctor told you that you just bruised it, but you had really torn your ACL. (False Negative)

A Type III error would be if the doctor said you didn’t tear your ACL, and you hadn’t, but she sent you home and missed that you had a tumor on your hip causing the knee pain. (Wrong problem)

A Type IV error would be if you were correctly diagnosed with an ACL tear, then told to put crystals on it every day until it healed. Alternatively, the doctor refers you for surgery and the surgery makes the problem worse. (Wrong follow-up)

When you put it like that, it’s decently easy to spot, but a tremendous number of studies can end up with some form of this problem. Several papers have found that when using ANOVA tables, as many as 70% of authors will end up doing incorrect or irrelevant follow-up statistical testing. Sometimes these affect the primary conclusion and sometimes not, but it should be concerning to anyone that this could happen.
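For the curious, here’s a minimal sketch of one defensible sequence, with made-up data: an omnibus one-way ANOVA followed by Tukey’s HSD for the pairwise comparisons. It’s not the only correct follow-up, just an example of follow-up testing that matches the question asked:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 20)  # three made-up treatment groups
b = rng.normal(0.8, 1.0, 20)
c = rng.normal(0.1, 1.0, 20)

f_stat, p_value = stats.f_oneway(a, b, c)  # omnibus test first
if p_value < 0.05:
    values = np.concatenate([a, b, c])
    labels = ["a"] * 20 + ["b"] * 20 + ["c"] * 20
    print(pairwise_tukeyhsd(values, labels))  # then pairwise follow-up
```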

Other types of Type IV errors:

  1. Drawing a conclusion about an overly broad group because you got results for a small group. This is the often-heard “WEIRD” complaint, when psychological studies use populations from Western, Educated, Industrialized, Rich, and Democratic countries (especially college students!) and then claim that the results are true of humans in general. The results may be perfectly accurate for the group being studied, but not generalizable.
  2. Running the wrong test or running the test on the wrong data. A recent example was the retraction that had to be made when it turned out the authors of a paper linking conservatism and psychotic traits had switched the coding for conservatives and liberals. This meant all of their conclusions were exactly reversed, and they had actually linked liberalism and psychotic traits. They correctly rejected the null hypothesis, but were still wrong about the conclusion. (A sketch after this list shows how completely a flipped code reverses a result.)
  3. Pre-existing beliefs and confirmation bias. There’s interesting data out there suggesting that people who write down their justifications for decisions are more hesitant to walk those decisions back when it looks like they’re wrong. It’s hard for people to walk back on things once they’ve said them. This was the issue with a recent “Pants on Fire” ranking PolitiFact gave a Donald Trump claim. Trump had claimed that “crime was rising”. PolitiFact said he was lying. When it was pointed out to them that preliminary 2015 and 2016 data suggests that violent crime is rising, they said preliminary data doesn’t count and stood by the ranking. The Volokh Conspiracy has the whole breakdown here, but it struck them (and me) that it’s hard to call someone a full-blown liar if they have preliminary data on their side. It’s not that his claim is clearly true, but there’s a credible suggestion it may not be false either. Someone remind me to check when those numbers finalize.
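Here’s the sketch promised in #2, with simulated data: swap the group labels and the estimated effect comes out exactly reversed….the difference is the same size, just pointing the wrong way.

```python
import numpy as np

rng = np.random.default_rng(7)
coded = rng.integers(0, 2, size=200)        # 1 = group A, 0 = group B
trait = 1.5 * coded + rng.normal(size=200)  # group A truly scores higher

diff = trait[coded == 1].mean() - trait[coded == 0].mean()

flipped = coded ^ 1  # the coding error: every label swapped
diff_flipped = trait[flipped == 1].mean() - trait[flipped == 0].mean()

print(diff, diff_flipped)  # same magnitude, opposite sign
```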

In conclusion: even when you’re right, you can still be wrong.

 

The Signal and the Noise: Chapter 3

This is a series of posts featuring anecdotes from the book The Signal and the Noise by Nate Silver. Read all the Signal and the Noise posts here, or go back to the Chapter 2 post here.

Baseball talk. The stats guys vs. scouts debates are my favorite.

[Image: Signal and Noise Chapter 3 contingency matrix]

Two Ways to Be Wrong: I Swallowed Batman

I’ve often repeated on this blog that there are really two ways to be wrong. I bring it up so often because it’s important to remember that being right does not always mean preventing error, but at times requires us to consider how we would prefer to err.

I bring all this up because I had to make a very tough decision this past Saturday, and it all started with Batman.

It was 3 am or so when I heard my 4-year-old son crying. This wasn’t terribly unusual…between nightmares and other middle-of-the-night issues, this happens just about every other week. I went out in the hall to see what was happening, and found him crying hysterically. I picked him up and asked him what was wrong, noticing that he seemed particularly upset and very red. “Mama, I swallowed Batman and he’s stuck in my throat and I can’t get him out,” he wailed. My heart shot to my throat. He had taken a small Batman action figure to bed with him. I had thought it was too big to swallow, and he was a little old for swallowing toys….but in his sleep I had no idea what he could have done. Before I could even look in his mouth he started making a horrible coughing/choking sound I’d never heard before, gasping for air through the tears. I looked in his mouth and saw nothing, but thought I felt something.

I woke my husband up, and we briefly debated what to do. Our son was still breathing, but he sounded horrible. I was unsure what, if anything, was in his throat. I had never called 911 to my own house before, and I ran down the other options. Call the pediatrician? They could take an hour to call back. Drive to the ER? What if something happened in the middle of the highway? Call my mother? She couldn’t do much over the phone. Google? Seriously? Does “Google hypochondriac” have an antonym that means “person Googling something that’s way too important for Google”?

Realizing I had no way of getting a better read on the situation, and with my son still horrifically coughing and gasping in the background, I took a deep breath and thought about being wrong. Would I rather risk calling 911 unnecessarily, or risk my child starting to fully choke on an object that might be an odd shape and tough to get out with the Heimlich maneuver? Phrased that way, the answer was immediately clear. I made the call. The whole train of thought, plus discussion with my husband, took less than two minutes.

The police and EMTs arrived a few minutes later. My son had started to calm down, and they were great with him. They examined his mouth and throat and were relatively sure there was nothing in the airway. They found the Batman toy still in his bed. Knowing that his breathing was safe, we drove to the ER ourselves to make sure he hadn’t swallowed anything that was now in his stomach, and that his throat hadn’t gotten irritated or reactive. He still had the horrible-sounding cough. He brought Batman with him.

In the end, there was nothing in his stomach. He had spasmodic croup (the first time he’s had croup at all), and the doctor thinks his “I swallowed Batman” statement was his way of trying to explain that he woke up with either a spasm or a painful mucus blockage in his throat. The crying had made it worse, which was why he sounded so bad when I went to him. While we were there he picked up Batman, pointed to the tiny cloth cape and said “see, that’s what was in my throat!”. We got some steroids to calm his throat down and were on our way home. We all went back to bed.

In the end, I was wrong. I didn’t really need to call 911; we could have just driven to the hospital ourselves. We needed the stomach x-ray for reassurance and the steroids so he could get some sleep, but there was no emergency. I tell this whole story because this is where examining your preferred way of being wrong up front comes in handy: I had already acknowledged that being wrong in this way was something I could live with. My decision-making rested in part on being wrong in the right direction. I can live with an unnecessary call. I couldn’t have lived with the alternative way of being wrong.

[Image: contingency matrix for the 911 decision]

Written out here, this seems so simplistic. However, in a (potential) emergency, the choices that go into each box can vary the calculation wildly.

  1. Can you get more information to increase your chances of being right? (I couldn’t, it was 3 in the morning)
  2. How soon will the consequences occur if you’re wrong? (Choking is a minutes and seconds issue)
  3. How prepared are you to deal with the worst outcome? (I know the Heimlich, but have never done it on a child and was worried that an oddly shaped object might make it difficult)
  4. How severe are the consequences? (Don’t even want to think about this one)

That’s a lot to think about in the middle of the night, but I was glad I had the general mental model on hand. I think it helped save some extra panic, and if I had it to do over again I’d make the same decision.

As for my son, the next morning he informed me that from now on “I’m going to keep my coughs in my mouth. They scare mama.” Someone clearly needs his own contingency matrix.