The Bullshit Two-Step

I’ve been thinking a lot about bullshit recently, and I’ve started to notice a pattern in the way bullshit gets relayed on social media. These days, it seems like bullshit is turning into a multi-step process that goes a little something like this: someone posts or publishes something with lots of nuances and caveats. Someone else translates that thing for more popular consumption, and loses quite a bit of the nuance. This happens with every share until the finished product is almost completely unrecognizable. Finally the story encounters someone who doesn’t agree with it, who points out that there should be more caveats. The sharers/popularizers promptly point at the original creator, and the creator throws their hands up and says “but I clarified those points in the original!!!!” In other words:

The Bullshit Two-Step: A dance in which a story or piece of research with nuanced points and specific parameters is shared via social media. With each share some of the nuance or specificity is eroded, finally resulting in a story that is almost total bullshit but that no one individually feels responsible for.

Think of this as the science social media equivalent of the game of telephone.

This is a particularly challenging problem for people who care about truth and accuracy, because so often the erosion happens one word at a time. Here’s an example of this happening with a Census Bureau statistic I highlighted a few years ago. Steps 1 and 2 are where the statistic started, step 4 is how it ended up in the press:

  1. The Census Bureau reports that half of all custodial (single) parents have court ordered child support.
  2. The Census Bureau also states (when talking about just the half mentioned in #1) that “In 2009, 41.2 percent of custodial parents received the full amount of child support owed them, down from 46.8 percent in 2007, according to a report released today by the U.S. Census Bureau. The proportion of these parents who were owed child support payments and who received any amount at all — either full or partial — declined from 76.3 percent to 70.8 percent over the period.”
  3. That got published in the New York Times as “In 2009, the latest year for which data are available, only about 41 percent of custodial parents (predominantly women) received the child support they were owed. Some biological dads were deadbeats.” No mention that this only covered half of custodial parents.
  4. This ended up in Slate (citing the Times) as “…. in a substantial number of cases, the men just quit their families. That’s why only 41 percent of custodial parents receive child support.” The “full amount” part got lost, along with all those with no court mandate who may or may not be getting money.

As you can see, very little changed between each piece, but a lot changed by the end. We went from “Half of all custodial parents receive court ordered child support. Of that half, only 41% have received the full amount this year.” to “only 41% of custodial parents receive child support at all”. We didn’t get there all at once, but we got there.  No one’s fully responsible, but no one’s innocent either. It’s the bullshit two-step.
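To see just how much meaning was lost, it helps to make the arithmetic explicit. A quick sketch using the Census figures quoted above (and taking “half” to mean exactly 50%):

```python
# The 41.2% figure applies only to custodial parents WITH a court order --
# roughly half of all custodial parents, per the Census Bureau.
court_ordered_share = 0.50   # share of custodial parents with court-ordered support
full_payment_share = 0.412   # share of THOSE who received the full amount (2009)

# Share of ALL custodial parents who received the full amount owed:
overall = court_ordered_share * full_payment_share
print(f"{overall:.1%}")  # 20.6%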

I doubt there’s any one real source for this….sometimes I think these are legitimate errors in interpretation, sometimes people were just reading quickly and missed the caveat, sometimes people are just being sloppy. Regardless, I think it’s interesting to track the pathway and see how easy it is to lose meaning one or two words at a time. It’s also a good case for only citing primary sources for statistics, as it makes it harder to carry over someone else’s error.


Number Blindness

“When the facts are on your side, pound the facts. When the law is on your side, pound the law. When neither is on your side, pound the table.” – old legal adage of unclear origin

Recently I’ve been finding it rather hard to go on Facebook. It seems like every time I log in, someone I know has chosen that moment to start a political debate that is going poorly. It’s not that I mind politics or have a problem with strong political opinions, but what bugs me is how often suspect numbers get thrown out to support various points of view. Knowing that I’m a “numbers person”, I have occasionally had people reach out asking me to either support or refute whatever number is being used, or to use one of my posts to support/refute what is being said. While some of these are perfectly reasonable requests for explanations, I’ve gotten a few recently that were rather targeted “Come up with a reason why my opponent is wrong” type things, with a heavy tone of “if my opponent endorses these numbers, they simply cannot be correct”. This of course put me in a very meta mood, and got me thinking about how we argue about numbers. As a result, I decided to coin a new term for a logical fallacy I was seeing: Number Blindness.

Number Blindness: The phenomenon of becoming so consumed by an issue that you cease to see numbers as independent entities and view them only as props whose rightness or wrongness is determined solely by how well they fit your argument.

Now I want to make one thing very clear up front: the phenomenon I’m talking about is not simply criticizing or doubting numbers or statistics. A tremendous amount of my blogging time is spent writing about why you actually should doubt many of the numbers that are flashed before your eyes. Criticism of numbers is a thing I fully support, no matter whose “side” you’re on.

I am also not referring to people who say that numbers are “irrelevant” to the particular discussion, or who say that I missed the point. I actually like it when people say that, because it clears the way to have a purely moral/intellectual/philosophical discussion. If you don’t really need numbers for a particular discussion, go ahead and leave them out of it.

The phenomenon I’m talking about is when people want to involve numbers in order to buffer their argument, but take any discussion of those numbers as an offense to their main point. It’s a terrible bait and switch, and it degrades the integrity of facts. If the numbers you’re talking about were important enough to be included in your argument, then they are important enough to be held up for debate about their accuracy. If you’re pounding the table, at least be willing to admit that’s what you’re doing.

Now of course all of this was inspired by some particular issues, but I want to be very clear: everyone does this. We all want to believe that every objective fact points in the direction of the conclusion we want. While most people are acutely aware of this tendency in whichever political party they disagree with, it is much harder to see it in yourself or in your friends. Writing on the internet has taught me to think carefully about how I handle criticism, but it’s also taught me a lot about how to handle praise. Just as there are many people who only criticize you because you are disagreeing with them, there are an equal number who only praise you because you’re saying something they want to hear. I’ve written before about the idea of “motivated numeracy” (here and for the Sojourners blog here), but studies do show that the ability to do math rises and falls depending on how much you like the conclusions that math provides….and that phenomenon gets worse the more intelligent you are. As I said in my piece for Sojourners, “Your intellectual capacity does NOT make you less likely to make an error — it simply makes you more likely to be a hypocrite about your errors.”

Now in the interest of full disclosure, I should admit that I know number blindness so well in part because I still fall prey to it. It creeps up every time I get worked up about a political or social issue I really care about, and it can slip out before I even have a chance to think through what I’m saying. One of the biggest benefits of doing the type of blogging I do is that almost no one lets me get away with it, but the impulse still lurks around. Part of why I make up these fallacies is to remind myself that guarding against bias and selective interpretation requires constant vigilance.

Good luck out there!

The White Collar Paradox

A few weeks back I blogged about what I am now calling “The Perfect Metric Fallacy”. If you missed it, here’s the definition:

The Perfect Metric Fallacy: the belief that if one simply finds the most relevant or accurate set of numbers possible, all bias will be removed, all stress will be negated, and the answer to complicated problems will become simple, clear and completely uncontroversial.

As I was writing that post, I realized that there was an element I wasn’t paying enough attention to. I thought about adding it in, but upon further consideration, I realized that it was big enough that it deserved its own post. I’m calling it “The White Collar Paradox”. Here’s my definition:

The White Collar Paradox: Requiring that numbers and statistics be used to guide all decisions due to their ability to quantify truth and overcome bias, while simultaneously only giving attention to those numbers created to cater to one’s social class, spot in the workplace hierarchy, education level, or general sense of superiority.

Now of course I don’t mean to pick on just white collar folks here, though almost all offenders are white collar somehow. This could just as easily have been called the “executive paradox” or the “PhD paradox” or lots of other things. I want to be clear who this is aimed at, because plenty of white collar workers have been on the receiving end of this phenomenon as well, in the form of their boss writing checks to expensive consulting firms just to have those folks tell them the same stuff their employees did, only on prettier paper and using more buzzwords. Essentially, anyone who prioritizes numbers that make sense to them out of their own sense of ego, despite having the education to know better, is a potential perpetrator of this fallacy.

Now of course wanting to understand the problem is not a bad thing, and quite frequently busy people do not have the time to sort through endless data points. Showing your work gets you lots of credit in class, but in front of the C-suite it loses everyone’s attention in less than 10 seconds (ask me how I know this). There is value in learning how to get your message to match the interests of your audience. However, if the audience really wants to understand the problem, sometimes they will have to get a little uncomfortable. Sometimes the problem arises precisely because they overlooked something that’s not very understandable to them, and preferring explanations that cater to what you already know is just using numbers to pad the walls of your echo chamber.

A couple other variations I’ve seen:

  1. The novel metric preference: As in “my predecessor didn’t use this metric, therefore it has value”.
  2. The trendy metric: “Prestigious institution X has promoted this metric, therefore we also need this metric.”
  3. The “tell me what I want to hear” metric: Otherwise known as the drunk with a lamp post…using data for support, not illumination.
  4. The emperor has no clothes metric: The one that is totally unintelligible but stated with confidence, and that no one questions.

That last one is the easiest to compensate for. For every data set I run, I always run it by someone actually involved in the work. The number of data problems that can be spotted by almost any employee if you show them your numbers and say “hey, does this match what you see every day?” is enormous. Even if there are no problems with your data, those employees can almost always tell you where your balance metrics should be, though normally that comes in the form of “you’re missing the point!” (again, ask me how I know this).

For anyone who runs workplace metrics, I think it’s important to note that every person in the organization is going to see the numbers differently, and that’s incredibly valuable. Just like high level execs specialize in forming long term visions that day to day workers might not see, those day to day workers specialize in details the higher ups miss. Getting numbers that are reality-checked by both groups isn’t easy, but your data integrity will improve dramatically and the decisions you make will ultimately improve.

The Perfect Metric Fallacy

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.” – Daniel Yankelovich

“Andy Grove had the answer: For every metric, there should be another ‘paired’ metric that addresses the adverse consequences of the first metric” -Marc Andreessen

“I didn’t feel the ranking system you created adequately captured my feelings about the vendors we’re looking at, so instead I assigned each of them a member of the Breakfast Club. Here, I made a poster.” -me

I have a confession to make: I don’t always like metrics. There. I said it. Now most people wouldn’t hesitate to make a declaration like that, but for someone who spends a good chunk of her professional and leisure time playing around with numbers it’s kind of a sad thing to have to say. Some metrics are totally fine of course, and super useful. On the other hand, there are times when it seems like the numbers subsume the actual goal and become front and center themselves. This is bad. In statistics, numbers are a means to an end, not the end. I need a name for this flip flop, so from here on out I’m calling it “The Perfect Metric Fallacy”.

The Perfect Metric Fallacy: The belief that if one simply finds the most relevant or accurate set of numbers possible, all bias will be removed, all stress will be negated, and the answer to complicated problems will become simple, clear and completely uncontroversial.

As someone who tends to blog about numbers and such, I see this one a lot.  On the one hand, data and numbers are wonderful because they help us identify reality, improve our ability to compare things, spot trends, and overcome our own biases. On the other hand, picking the wrong metric out of convenience or bias and relying too heavily on it can make everything I just named worse plus piss everyone around you off.


While I have a decent number of my own stories about this, what frustrates me is how many I hear from others. When I tell people these days that I’m into stats and data, almost a third of them respond with some sort of horror story about how data or metrics are making their professional lives miserable. When I talk to teachers, this number goes up to 100%.

This really bums me out.

It seems that after years of disconnected individuals going with their guts and kind of screwing everything up, people decided that we should put numbers on those grand ideas to prove that they were going to work. When these ideas fail, people either blame the numbers (if you’re the person who made the decision) or the people who like the numbers (if you’re everybody else). So why do we let this happen? Almost everyone knows up front that numbers are really just there to guide decision making, so why do we get so obsessed with them?

  1. Math class teaches us that if you play with numbers long enough, there will be a right answer. There are a lot of times in life when your numbers have to be perfect. Math class. Your tax return. You know the drill. Endless calculations, significant figures, etc, etc. In statistics, that’s not true. It’s a phenomenon known as “false precision”, where you present data in a way that makes it look more accurate than it really can be. My favorite example of this is a clinic I worked with at one point. They reported weight to two decimal places (as in 130.45 lbs), but didn’t have a standard around whether or not people had to take their coats off before they were weighed. In the beginning of the post, I put a blurb about me converting a ranking system into a Breakfast Club poster. This came up after I was presented with a 100 point scale to rank 7 vendors against each other in something like 16 categories. When you have 3 days to read through over 1000 pages of documentation and assign scores, your eyes start to blur a little and you start getting a little existential about the whole thing. Are these 16 categories really the right categories? Do they cover everything I’m getting out of this? Do I really feel 5 points better about this vendor than that other one, and are both of them really 10 points better than that 3rd one? Or did I just start increasing the strictness of my rankings as I went along, or did I get nicer as I had to go faster, or what? It wasn’t a bad ranking system, but the problem was me. If I can’t promise I stayed consistent in my rankings over 3 days, how can I attest to my numbers at the end?
  2. We want numbers to take the hit for unpleasant truths. A few years ago someone sent me a comic strip that I have since sent along to nearly everyone who complains to me about bad metrics in the workplace. It almost always gets a laugh, and most people then admit that it’s not the numbers they have a problem with, it’s the way they’re being used. There’s a lot of unpleasant news to deliver in this world, and people love throwing up numbers to absorb the pain. See, I would totally give you a raise or more time to get things done, but the numbers say I can’t. When people know you’re doing exactly what you were going to do to begin with, they don’t trust any number you put up. This gets even worse in political situations. So please, for the love of God, if the numbers you run sincerely match your pre-existing expectations, let people look over your methodology, or show where you really tried to prove yourself wrong. Failing to do this gives all numbers a bad rap.
  3. Good data is hard to find. One of the reasons statistician continues to be a profession is that good data is really, really, really hard to find, and good methods for analysis actually require a lot of leg work. Over the course of trying to find a “perfect metric”, many people end up believing that part of being “perfect” is being easily obtainable. As my first quote mentions, this is ridiculous. It’s also called the McNamara Fallacy, and it warns us that the easiest things to quantify are not always the most important.
  4. Our social problems are complicated. The power of numbers is strong. Unfortunately, the power of some social problems is even stronger. Most of our worst problems are multifaceted, which of course is why they haven’t been solved yet. When I decided to use metrics to address my personal weight problem, I came up with 10 distinct categories to track for one primary outcome measure. That’s 3,650 data points a year, and that’s just for me. Scaling that up is immensely complicated, and introduces all sorts of issues of variability among individuals that don’t exist when you’re looking at just one person. Even if you do luck out and find a perfect metric, in a constantly shifting system there is a good chance that improving that metric will cause a problem somewhere else. Social structures are like Jenga towers, and knocking one piece out of place can have unforeseen consequences. Proceed with caution, and don’t underestimate the value of small successes.
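The false-precision example in #1 can be sketched in a few lines. The specific readings and coat weight here are made up for illustration:

```python
# Hypothetical clinic data: weight recorded to 0.01 lb, but no rule about
# whether the patient's ~2.5 lb winter coat comes off first (both assumed).
jan_reading = 130.45  # coat on
feb_reading = 128.12  # coat off
coat_lbs = 2.5        # assumed coat weight

apparent_loss = jan_reading - feb_reading
print(round(apparent_loss, 2))   # 2.33
print(apparent_loss < coat_lbs)  # True: the entire "loss" fits inside coat noise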

Now again, I do believe metrics are incredibly valuable and, used properly, can generate good insights. However, in order to prevent your perfect metric from turning into a numerical bludgeon, you have to keep an eye on what your goal really is. Are you trying to set kids up for success in life or get them to score well on a test? Are you trying to maximize employee productivity or keep employees over the long term? Are you looking for a number or a fall guy? Can you measure what you’re looking to find out with any sort of accuracy? Things to ponder.


The Forrest Gump Fallacy

Back in July, I took my first crack at making up my own logical fallacy. I enjoyed the process, so today I’m going to try it again. With election season hanging over us, I’ve seen a lot of Facebook-status-turned-thinkpieces, and I’ve seen this fallacy pop up more and more frequently. I’m calling it “The Forrest Gump Fallacy”. Yup, like this guy:

For those of you not prone to watching movies or too young to have watched this one, here’s some background: Forrest Gump is a movie from 1994 about a slow-witted but lovable character who manages to get involved in a huge number of political and culturally defining moments over the course of his life from 1944 to 1982. Over the course of the film he meets almost every US president for that time period, inadvertently exposes Watergate, serves in Vietnam and speaks at anti-war rallies, and starts the smiley face craze. It has heaps of nostalgia and an awesome soundtrack.

So how does this relate to Facebook and politics? Well, as I’ve been watching people attempt to explain their own political leanings recently, I’ve been noticing that many of them seem to assume that the trajectory of their own life and beliefs mirrors the trajectory of the country as a whole. To put it more technically:

Forrest Gump Fallacy: the belief that your own personal cultural and political development and experiences are generalizable to the country as a whole.

There are a lot of subsets of this obviously….particularly things like “this debate around this issue didn’t start until I was old enough to understand it” and “my immediate surroundings are nationally representative”. Fundamentally this is sort of a hasty generalization fallacy, where you draw conclusions from a very limited sample size. Want an example? Okay, let me throw myself under the bus.

If you had asked me a few years ago to describe how conservative vs liberal the US was in various decades that I’d lived through, I probably would have told you the following: the 1980s were pretty conservative, the 1990s also had a strong conservative influence, mostly pushing back against Clinton. Things really liberalized more around the year 2000, when people started pushing back against George W Bush. I was pretty sure this was true, and I was also not particularly right. Here is party affiliation data from that time:

Republican affiliation actually dropped during the 90s and rose again after 2000. Now, I could make some arguments about underdogs and the strength of cultural pushback, but here’s what really happened: I went to a conservative private Baptist school up through 1999, then went to a large secular university for college in the early 2000s. The country didn’t liberalize in the year 2000, my surroundings did. This change wasn’t horribly profound, after all engineering profs are not particularly known for their liberalism, but it still shifted the needle. I could come up with all the justifications in the world for my biased knee-jerk reaction, but I’d just be self-justifying. In superimposing the change in my surroundings and personal development over the US as a whole, I committed the Forrest Gump Fallacy.

So why did I do this? Why do others do this? I think there’s a few reasons:

  1. We really are affected by the events that surround us. Most fallacies start with a grain of truth, and this one does too. In many ways, we are affected by watching the events that surround us, and we really do observe the country change around us. For example, most people can quite accurately describe how their own feelings and the feelings of the country changed after September 11th, 2001. I don’t think this fallacy arises around big events, but rather when we’re discussing subtle shifts on more divisive issues.
  2. Good cultural metrics are hard to come by. A few paragraphs ago, I used party affiliation as a proxy for “how liberal” or “how conservative” the country was during certain decades. While I don’t think that metric is half bad, it’s not perfect. Specifically, it tells us very little about what’s going on with that “independent” group…and they tend to have the largest numbers. Additionally, it’s totally possible that the meaning of “conservative” or “liberal” will change over time and on certain issues. Positions on social issues don’t always move in lockstep with positions on fiscal issues and vice versa. Liberalizing on one social issue doesn’t mean you liberalize on all of them either. In my lifetime, many people have changed their opinion on gay marriage but not on abortion. When it’s complicated to get a good picture of public opinion, we rely on our own perceptions more heavily. This sets us up for bias.
  3. Opinions are not evenly spread around. This is perhaps the biggest driver of this fallacy, and it’s no one’s fault really. As divided as things can get, the specifics of the divisions can vary widely in your personal life, your city and your state. While the New Hampshire I grew up in generally leaned conservative, it was still a swing state. My school however was strongly conservative and almost everyone was a Republican, certainly almost all of the staff. Even with only 25% of people identifying themselves as Republican, there are certainly many places where someone could be the only Democrat and vice versa. Ann Althouse (a law professor blogger who voted for Obama in 2008) frequently notes that her law professor colleagues consider her “the conservative faculty member”. She’s not conservative compared to the rest of the country, but compared to her coworkers she very much is. If you don’t keep a good handle on the influence of your environment, you could walk away with a pretty confused perception of “normal”.

So what do we do about something like this? I’m not really sure. The obvious answer is to try to mix with people who don’t think like you, aren’t your age and have a different perspective from you, but that’s easier said than done. There’s some evidence that conservatives and liberals legitimately enjoy living in different types of places and that the polarization of our daily lives is getting worse. Sad news. On the other hand, the internet does make it easier than ever to seek out opinions different from your own and to get feedback on what you might be missing. Will any of it help? Not sure. That’s why I’m sticking with just giving it a name.

The Tim Tebow Fallacy

I’ve blogged a lot about various cognitive biases and logical fallacies here over the years, but today I want to talk about one I just kinda made up: The Tim Tebow Fallacy. Yeah, that’s right, this guy:


I initially mentioned the premise in this post, but for those of you who missed it here’s the background: Tim Tebow is a Heisman trophy winning quarterback who played in the NFL from 2010-2012. Despite his short career, in 2011 he was all anyone could talk about. Everyone had an opinion about him and he was unbelievably polarizing, despite being a quite pleasant individual and a good-but-not-great player. It was all a little baffling, and writer Chuck Klosterman took a crack at explaining the issue here. In trying to work through the controversy, he made this observation:

On one pole, you have people who hate him because he’s too much of an in-your-face good person, which makes very little sense; at the other pole, you have people who love him because he succeeds at his job while being uniquely unskilled at its traditional requirements, which seems almost as weird. Equally bizarre is the way both groups perceive themselves as the oppressed minority who are fighting against dominant public opinion, although I suppose that has become the way most Americans go through life.

Ever since I read that, I’ve been watching political conversations and am stunned by how often this type of thinking happens. It seems some people not only want to have a belief and defend it, but also to get some sort of cachet from having an unacknowledged or rare belief. It’s like a combination of a reverse bandwagon effect (where someone likes something more because it’s not popular) and a type of majority illusion (where people inaccurately assess how many people actually hold a particular opinion). So as a fallacy, I’d say it’s when you find a belief more attractive and more correct because it runs counter to what you believe popular perception is. To put it more technically:

Tim Tebow Fallacy: The tendency to increase the strength of a belief based on an incorrect perception that your viewpoint is underrepresented in the public discourse

Need an example? Take a group conversation at a party: Person A mentions they like the Lord of the Rings movies. Person B pipes up that they actually really didn’t like them. Person C agrees with Person B, and the two bond a bit over finding out they share this unusual opinion. After a minute or two, Person A is getting a little frustrated and is now an even BIGGER fan of the movies. If it stopped there it wouldn’t be a Tebow fallacy, just regular old defensiveness. What kicks it over the edge is when Person A starts claiming “no one ever talks about how good the cinematography was in those movies!” or “no one really appreciates how innovative those were!” or “no one ever gives geeky stuff any credit!”. They walk away irritated, believing that saying you like the Lord of the Rings movies is sort of a subversive act, and that general defensiveness is called for.

Now of course this is all kind of poppycock. The Lord of the Rings movies are some of the most highly regarded movies of all time, and set records for critical acclaim and box office draw. The viewpoint Person A was defending is the dominant one in nearly every circle except the one they happened to wander into that night, yet they’re defensive and feel they need to continue to prove their point. It’s the Tim Tebow Fallacy.

Now I think there’s a couple reasons this sort of thing happens, and I suspect many of them are getting worse because of  the internet:

  1. We’re terrible at figuring out how widespread opinions are. In my Lord of the Rings example, Person A extrapolated small group dynamics to the general population, likely without even realizing it. Now this is pretty understandable when it happens in person, but it gets really hard to sort through when you’re reading stuff online. Online you can read pages and pages of criticism of even the most well-loved stuff, and come away believing many more people think a certain way than actually do. Even if 99.9% of Americans love something, that still leaves 325,000 who don’t. If those people have blogs or show up in comments sections, it can leave you with the impression that their opinions are more widely held than they are. And make no mistake, this influences us. It’s why Popular Science shut off their comments section.
  2. We feed off each other and headlines. The internet being what it is, let’s imagine Person A goes home and vents their frustrations with Persons B and C online. What started as an issue at one party now turns into an anecdote that can be spread. The effect of this should not be underestimated. A few months ago someone sent me this story, about a writer with a large-ish Twitter following who had Tweeted a single picture of a lipstick named “Underage Red” with the caption “Just went shopping for some makeup. How is this a lipstick color?”. The whole story is here, but by the end of it her single Tweet had made it all the way to Time Magazine as proof of a “major controversy” and was being cited as an example of “outrage culture”. She was inundated with people calling her out (including the lipstick creator, Kat Von D) for her opinion, all seemingly believing they were fighting the good fight against a dominant narrative. A narrative that consisted of a single rather reasonable Tweet about a lipstick. I don’t blame those people by the way….I blame the media that creates an “outrage” story out of a single Tweet, then follows up with think pieces about “PC culture” and “oversensitivity”. The point is that in 2016, a single anecdote going viral is really common, but I’m not sure we’ve all adjusted our reactions to account for the whole “wait, how many people actually think this way?” piece. It’s even worse when you consider how often people lie about stuff. Throw in a few fabricated/exaggerated/one-sided retellings, and suddenly you can have viral anecdotes that never even happened.
  3. It’s human nature to strive for optimal distinctiveness. While going with the crowd gets a lot of attention, I think it’s worth noting that humans actually are a little ambivalent about this. The theory of optimal distinctiveness argues that we actually are all simultaneously striving to differentiate ourselves just enough to be recognized while not going so far as to be ostracized. Basically we want to be in the middle of this graph:

    From Brewer, M.B. (1991). “The social self: On being the same and different at the same time”. Personality and Social Psychology Bulletin, 17, 475-482.

    By positioning our arguments against the dominant narrative, we can both defend something we really believe in AND differentiate ourselves from the group. I think that makes this type of fallacy uniquely attractive for many people.

  4. We’re selecting our own media sources, then judging them. One of my favorite humor websites of all time is Hyperbole and a Half by Allie Brosh. After going viral, Brosh put together an FAQ that includes this question/answer:

    Question: I don’t think you’re funny and I’m frustrated that other people do.

    Answer: It’s okay. Try not to be too upset about it. Humor is simply your brain being surprised by an unexpected variation in a pattern that it recognizes. If your brain doesn’t recognize the pattern or the pattern is already too familiar to your brain, you won’t find something humorous.

    With the internet (and cable news, Facebook, Twitter, etc.) we now have the ability to see hundreds or thousands of different opinions in a single week. What we often fail to recognize is that we actually select for most of these….who we follow or friend, webpages we visit, etc. Everyone else does too. We are all constructing very individualized patterns of information intake, and it’s hard to know how usual or unusual our own pattern is. Instead of just “those who loved the Lord of the Rings movies” and “those who didn’t”, there’s also “those who hated it because they are huge book fans”, “those who don’t like any movie with magic in it”, “those who hated it because they hate Elijah Wood”, “those who prefer Meet the Feebles”, “those who didn’t get that whole ring thing”, “those who thought that was the one with the lion”, “those who liked it until they heard all the hype and then wished everyone would calm down”, etc. The point is, we are often selecting for certain opinions, then reacting to the opinions we ourselves selected and taking it out on others who may legitimately never have encountered the opinion we’re talking about. If you combine this with point #3 above, you can see how we end up positioning ourselves against a narrative others may not be hearing.

  5. It’s a common mistake for great thinkers. Steve Jobs is widely considered one of the most innovative and intuitive businessmen of all time. His success largely came from his uncanny ability to identify gaps in the tech marketplace that were invisible to everyone else and then fill them better than anyone. What often gets lost in the quick blurbs about him, though, is how often he misfired. He initially thought Pixar should be a hardware company. He whiffed on multiple computer designs and had a whole confusing company called NeXT…..and he’s still considered one of the best in the world at this. Great thinkers are always trying to find what others are missing, and even the best screw this up pretty frequently. As with many fallacies, it’s important to remember that IQ points offer limited protection. What makes your mind great can also be your downfall.
  6. It gets us out of dealing with uncomfortable truths. One of the first brutal callouts I ever got on the internet was over a glib and stupid comment I made about the Iraq War. It was back in 2004 or so, and I got in an argument with someone I respected over whether or not we should have gone in. I was against the war, and at some point in the debate I got mad and said that my issue was that George Bush “hadn’t allowed any debate”. I was immediately jumped on, and it was pointed out to me in no uncertain terms that I was just flat out making that up. There was endless debate; I just didn’t like the outcome. That stung like hell, but it was true. Regardless of what anyone believes about the Iraq War then or now, we did debate it. It was easier for me to believe that my preferred viewpoint had been systematically squashed than that others had listened to it and not found it compelling. This works the other way too. I’m sure in 2004 I could have found someone claiming vociferously that we’d over-debated the Iraq War, based mostly on the fact that they’d made up their mind early. Most recently I saw this happen with Pokemon Go, where the Facebook statuses talking about how great it was showed up about 30 minutes before the “I’m totally sick of this, can we all stop talking about it” statuses did. I’m not saying there’s a “right” level of public discourse, but I am saying that it’s hard to judge a “wrong” level without being pretty arbitrary. We all want the game to end when our team’s ahead, but it really just doesn’t work that way.

So that’s my grand theory right there. As I hope point #6 showed, I don’t think I’m exempt from this. A huge amount of this blog and my real life political discussion are based on the premise of “not enough people get this important fact I’m about to share”. I get it. I’m guilty.

On the other hand, I think in an age when the internet can expose us to so many different viewpoints, we should be careful about how we let the existence of those viewpoints influence our own feelings. If the forcefulness of our arguments is always indexed to what we think the forcefulness of our opposition is, that will leave us progressively more open to infection by the Toxoplasma of Rage. As the opportunities for creating our own “popular narrative” increase, we have to be even more careful to reality-check it at times. Check the numbers in opinion polls. Read media that opposes you. Make friends outside your demographic. Consider criticism. Don’t play for the Jets. You know, all the usual fallacy stuff.

Tebow Image Credit