Misreprecitation

A few weeks ago, I wrote a post about a phenomenon I had started seeing that I ended up dubbing premature expostulation. I defined it as “The act of claiming definitively that a person, group or media outlet has not reported on, responded to or commented on an event or topic, without first establishing whether or not this is true.” Since writing that post, I have been seeing mention of a related phenomenon that I felt was distinct enough to merit its own term. In this version, you actually have checked to see what various sources say, enough that you cite them directly, but you misrepresent what they actually say anyway. More formally, we have:

Misreprecitation: The act of directly citing a piece of work to support your argument, when even a cursory reading of the original work shows it does not actually support your argument.

Now this does not necessarily have to be done with nefarious motives, but it is hard to think of a scenario in which it isn’t incredibly sketchy. Where premature expostulation is mostly due to knee-jerk reactions, vagueness and a failure to do basic fact checking, misreprecitation requires a bit more thought and planning. In some cases it appears to be a pretty direct attempt to mislead; in others it may be due to copying someone else’s interpretation without checking it yourself. Either way, it’s never good for your argument.

Need some examples? Let’s go!

The example that actually made me think of this was the recent kerfuffle over Nancy MacLean’s book “Democracy in Chains”. Initially met with praise as a leftist takedown of right wing economic thought, the book quickly got embroiled in controversy when (as far as I can tell) actual right wing thinkers started reading it. At that point, several of them who were familiar with the source material noted that quotes had been chopped up in ways that dramatically changed their meaning, along with other contextual problems. You can read a pretty comprehensive list of issues here, an overview of the problems and links to all the various responses here, and Vox’s (none too flattering) take here. None of it makes MacLean look particularly good, most specifically because this was supposed to be a scholarly work. When your citations are your strong point, your citations had better be correct.

I’ve also seen this happen quite a bit with books that endorse popular diets. Carbsane put together a list of issues in the citations of the low carb book “Big Fat Surprise”, and others have found issues with vegan-promoting books. While some of these seem to be differences in interpretation of evidence, some are a little sketchier. Now, as with premature expostulation, some of these issues don’t change the fundamental point…but some do. Overall, a citation avalanche is no good if it turns out you had to tweak the truth to get there.

I think there are three things that create a particularly fertile breeding ground for misreprecitation: 1) an audience that is sympathetic to your conclusions, 2) an audience that is unlikely to be familiar with the source documents, and 3) difficulty accessing the source documents. That last point may be why books are particularly prone to this error, since you’d have to actually put the book down and go look up a reference. This also may be a case where blogs have the accuracy advantage due to being so public. I know plenty of people who read blogs they don’t agree with, but I know fewer who would buy a whole book dedicated to discrediting their ideas. That increases the chances that no critical person will read your book; it also means critics have less recourse once they do read it (notes in the margin aren’t as good as a comments section), and it’s harder for anyone to fact check. I’m not saying bloggers can’t do it, just that they’d be called on it faster.

Overall it’s a pretty ridiculous little trick, as the entire point of citing others’ work should be to strengthen your argument. In the best case scenario, people are confused because they misread the work, failed to understand it, or copied an interpretation they saw someone else make. In the worst case scenario, they know exactly what they are doing and are counting on their in-group not actually checking their work. Regardless, it needed a name, and now it has one.

Premature Expostulation

In my last post, I put out a call for possible names for the phenomenon of people erroneously asserting that some ideological opponent hadn’t commented on a story, without properly verifying that this was true. Between Facebook and the comments section I got a few good options, but the overall winner was set up by bluecat57 and perfected by the Assistant Village Idiot: Premature Expostulation. I have to admit, expostulation was one of those words whose meaning I only sort of knew, but the exact definition is great for this situation: “to reason earnestly with someone against something that person intends to do or has done; remonstrate.” Therefore, the definition for this phrase is:

Premature Expostulation: The act of claiming definitively that a person, group or media outlet has not reported on, responded to or commented on an event or topic, without first establishing whether or not this is true.

Premature expostulation frequently occurs in the context of a broader narrative (they NEVER talk about thing X, they ALWAYS prioritize thing Y), though it can also occur due to bad search results, carelessness, inattention, or simply different definitions of what “covered the story” means. If someone is discussing a news outlet they already don’t like, or one you are not familiar with, be alert. It’s easy to miss a statement from someone if you don’t regularly read what they write or don’t keep up with them.

To note, premature expostulation is a specific claim of fact, NOT a subjective opinion. The more specific the claim, the more likely it is (if proven wrong) to be premature expostulation. Saying a story was “inadequate” can cause endless argument, but is mostly a matter of opinion. If you say that a news outlet “stayed silent,” however, showing that they ran even one story can disprove the claim.

I think there’s a lot of reasons this happens, but some of the common ones I see seem to be:

  • Search algorithm weirdness/otherwise just missing it. Some people do quick searches or scans and simply miss it. I have speculated that there’s some sort of reverse inattentional blindness going on, where you’re so convinced you’ll see something if it’s there that you actually miss it.
  • Attributing a group problem to an individual. I can’t find it right now, but I once saw a great video of a feminist writer on a panel being questioned by an audience member about why she had hypocritically stayed silent on a particular issue it seemed she should have commented on. It turns out she actually had written columns on the issue, and she offered to send them to him. Poor kid had no idea what to do. Now I suspect at the time there were feminist writers being breathtakingly hypocritical over this issue, but that didn’t mean all of them were. Even if there were hundreds of feminist writers being hypocritical, you should still double check that the one you’re accusing is one of them before you take aim.
  • Attributing an individual problem to a group. Sometimes a prominent figure in a group is so striking that people end up assuming everyone in the group acts exactly as the one person they know about does.
  • Assuming people don’t write when you’re not reading. When I had a post go mini-viral a few months ago, I got a huge influx of new people who had never visited this blog. I got many good comments/criticisms, but there were a few that truly surprised me. At least a few people decided that my biggest problem was that I never took on big media outlets and only picked on small groups, or that I never talked about statistics that might challenge something liberals said. Regular readers know this is ridiculous. I do that stuff all the time. For whatever reason though, some people assumed that the one post of mine they had read somehow represented everything I’d ever written. That’s a personal anecdote, but we see this happen with other groups as well. During the gay marriage debate I once had a friend claim that Evangelicals never commented on straight divorce. Um, okay. No. You just don’t listen to them until they comment on something you are upset by, then you act like that’s all they ever say.
  • The emotional equivalency metric. If someone doesn’t feel the same way you do, they must not have seen the story the way you have. Therefore they can’t have covered the story until they mirror your feelings.

I’m sure there are other ways this comes up as well, so feel free to leave me your examples.


The Bullshit Two-Step

I’ve been thinking a lot about bullshit recently, and I’ve started to notice a pattern in the way bullshit gets relayed on social media. These days, it seems like bullshit is turning into a multi-step process that goes a little something like this: someone posts/publishes something with lots of nuances and caveats. Someone else translates that thing for more popular consumption, and loses quite a bit of the nuance. This happens with every share until the finished product is almost completely unrecognizable. Finally the story encounters someone who doesn’t agree with it, who points out there should be more caveats. The sharers/popularizers promptly point at the original creator, and the creator throws their hands up and says “but I clarified those points in the original!!!!” In other words:

The Bullshit Two-Step: A dance in which a story or piece of research with nuanced points and specific parameters is shared via social media. With each share some of the nuance or specificity erodes, finally resulting in a story that is almost total bullshit but that no one individually feels responsible for.

Think of this as science social media’s version of the game of telephone.

This is a particularly challenging problem for people who care about truth and accuracy, because so often the erosion happens one word at a time. Here’s an example of this happening with a Census Bureau statistic I highlighted a few years ago. Steps 1 and 2 are where the statistic started; step 4 is how it ended up in the press:

  1. The Census Bureau reports that half of all custodial (single) parents have court ordered child support.
  2. The Census Bureau also states (when talking about just the half mentioned in #1) that “In 2009, 41.2 percent of custodial parents received the full amount of child support owed them, down from 46.8 percent in 2007, according to a report released today by the U.S. Census Bureau. The proportion of these parents who were owed child support payments and who received any amount at all — either full or partial — declined from 76.3 percent to 70.8 percent over the period.”
  3. That got published in the New York Times as “In 2009, the latest year for which data are available, only about 41 percent of custodial parents (predominantly women) received the child support they were owed. Some biological dads were deadbeats.” No mention that this only covered half of custodial parents.
  4. This ended up in Slate (citing the Times) as “…. in a substantial number of cases, the men just quit their families. That’s why only 41 percent of custodial parents receive child support.” The “full amount” part got lost, along with all those with no court mandate who may or may not be getting money.

As you can see, very little changed between each step, but a lot changed by the end. We went from “Half of all custodial parents have court-ordered child support. Of that half, only 41% received the full amount this year.” to “only 41% of custodial parents receive child support at all”. We didn’t get there all at once, but we got there. No one’s fully responsible, but no one’s innocent either. It’s the bullshit two-step.
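
To see just how much the denominator matters, here’s a minimal sketch in Python using only the percentages quoted above (the variable names are mine):

```python
# Percentages quoted from the Census Bureau report above; variable
# names are mine. The half of custodial parents with no court order
# is left out of the "any support" line because the report doesn't
# cover informal arrangements.
share_with_court_order = 0.50  # half of custodial parents have court-ordered support
received_full_2009 = 0.412     # 41.2% of THAT half got the full amount in 2009
received_any_2009 = 0.708      # 70.8% of that half got full or partial payment

full_of_all = share_with_court_order * received_full_2009
any_of_all = share_with_court_order * received_any_2009

print(f"Full court-ordered amount: {full_of_all:.1%} of ALL custodial parents")  # 20.6%
print(f"At least partial payment:  {any_of_all:.1%} of ALL custodial parents")   # 35.4%

# The end-of-telephone claim -- "only 41% of custodial parents receive
# child support" -- quietly re-bases the 41.2% from "half of custodial
# parents, full amount" to "all custodial parents, any amount".
```

The 41.2% figure was never wrong; what got danced away was the denominator and the “full amount” qualifier.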

I doubt there’s any one real source for this…sometimes I think these are legitimate errors in interpretation, sometimes people were just reading quickly and missed the caveat, and sometimes people are just being sloppy. Regardless, I think it’s interesting to track the pathway and see how easy it is to lose meaning one or two words at a time. It’s also a good case for citing only primary sources for statistics, as that makes it harder to carry over someone else’s error.


Number Blindness

“When the facts are on your side, pound the facts. When the law is on your side, pound the law. When neither is on your side, pound the table.” – old legal adage of unclear origin

Recently I’ve been finding it rather hard to go on Facebook. It seems like every time I log in, someone I know has chosen that moment to start a political debate that is going poorly. It’s not that I mind politics or have a problem with strong political opinions; what bugs me is how often suspect numbers get thrown out to support various points of view. Knowing that I’m a “numbers person,” people have occasionally reached out asking me to either support or refute whatever number is being used, or to use one of my posts to support/refute what is being said. While some of these are perfectly reasonable requests for explanations, I’ve gotten a few recently that were rather targeted “come up with a reason why my opponent is wrong” type things, with a heavy tone of “if my opponent endorses these numbers, they simply cannot be correct.” This of course put me in a very meta mood, and got me thinking about how we argue about numbers. As a result, I decided to coin a new term for a logical fallacy I was seeing: Number Blindness.

Number Blindness: The phenomenon of becoming so consumed by an issue that you cease to see numbers as independent entities and view them only as props whose rightness or wrongness is determined solely by how well they fit your argument.

Now I want to make one thing very clear up front: the phenomenon I’m talking about is not simply criticizing or doubting numbers or statistics. A tremendous amount of my blogging time is spent writing about why you actually should doubt many of the numbers that are flashed before your eyes. Criticism of numbers is a thing I fully support, no matter whose “side” you’re on.

I am also not referring to people who say that numbers are “irrelevant” to a particular discussion, or that I missed the point. I actually like it when people say that, because it clears the way for a purely moral/intellectual/philosophical discussion. If you don’t really need numbers for a particular discussion, go ahead and leave them out of it.

The phenomenon I’m talking about is when people want to involve numbers in order to buffer their argument, but take any discussion of those numbers as an offense against their main point. It’s a terrible bait and switch, and it degrades the integrity of facts. If the numbers you’re talking about were important enough to include in your argument, then they are important enough to be held up for debate about their accuracy. If you’re pounding the table, at least be willing to admit that’s what you’re doing.

Now of course all of this was inspired by some particular issues, but I want to be very clear: everyone does this. We all want to believe that every objective fact points in the direction of the conclusion we want. While most people are acutely aware of this tendency in whichever political party they disagree with, it is much harder to see it in yourself or in your friends. Writing on the internet has taught me to think carefully about how I handle criticism, but it’s also taught me a lot about how to handle praise. Just as there are many people who only criticize you because you are disagreeing with them, there are an equal number who only praise you because you’re saying something they want to hear. I’ve written before about the idea of “motivated numeracy” (here and for the Sojourners blog here), but studies do show that people’s ability to do math rises and falls depending on how much they like the conclusions that math provides…and that phenomenon gets worse the more intelligent you are. As I said in my piece for Sojourners, “Your intellectual capacity does NOT make you less likely to make an error — it simply makes you more likely to be a hypocrite about your errors.”

Now in the interest of full disclosure, I should admit that I know number blindness so well in part because I still fall prey to it. It creeps up every time I get worked up about a political or social issue I really care about, and it can slip out before I even have a chance to think through what I’m saying. One of the biggest benefits of doing the type of blogging I do is that almost no one lets me get away with it, but the impulse still lurks around. Part of why I make up these fallacies is to remind myself that guarding against bias and selective interpretation requires constant vigilance.

Good luck out there!

The White Collar Paradox

A few weeks back I blogged about what I am now calling “The Perfect Metric Fallacy”. If you missed it, here’s the definition:

The Perfect Metric Fallacy: The belief that if one simply finds the most relevant or accurate set of numbers possible, all bias will be removed, all stress will be negated, and the answer to complicated problems will become simple, clear and completely uncontroversial.

As I was writing that post, I realized there was an element I wasn’t paying enough attention to. I thought about adding it in, but upon further consideration I realized it was big enough to deserve its own post. I’m calling it “The White Collar Paradox”. Here’s my definition:

The White Collar Paradox: Requiring that numbers and statistics be used to guide all decisions due to their ability to quantify truth and overcome bias, while simultaneously only giving attention to those numbers created to cater to one’s social class, spot in the workplace hierarchy, education level, or general sense of superiority.

Now of course I don’t mean to pick on just white collar folks here, though almost all offenders are white collar somehow. This could just as easily have been called the “executive paradox” or the “PhD paradox” or lots of other things. I want to be clear about who this is aimed at, because plenty of white collar workers have been on the receiving end of this phenomenon as well, in the form of their boss writing checks to expensive consulting firms just to have those folks tell them the same stuff their employees did, only on prettier paper and with more buzzwords. Essentially, anyone who prioritizes numbers that make sense to them out of their own sense of ego, despite having the education to know better, is a potential perpetrator of this fallacy.

Now of course wanting to understand the problem is not a bad thing, and quite frequently busy people do not have the time to sort through endless data points. Showing your work gets you lots of credit in class, but in front of the C-suite it loses everyone’s attention in less than 10 seconds (ask me how I know this). There is value in learning how to match your message to the interests of your audience. However, if the audience really wants to understand the problem, sometimes they will have to get a little uncomfortable. Sometimes the problem arises precisely because they overlooked something that’s not very understandable to them, and preferring explanations that cater to what you already know is just using numbers to pad the walls of your echo chamber.

A couple other variations I’ve seen:

  1. The novel metric preference. As in “my predecessor didn’t use this metric, therefore it has value”.
  2. The trendy metric. “Prestigious institution X has promoted this metric, therefore we also need this metric.”
  3. The “tell me what I want to hear” metric. Otherwise known as the drunk with a lamppost: using data for support, not illumination.
  4. The emperor has no clothes metric. The one that is totally unintelligible but stated with confidence, so no one questions it.

That last one is the easiest to compensate for. Every data set I produce, I run by someone actually involved in the work. The number of data problems that can be spotted by almost any employee if you show them your numbers and say “hey, does this match what you see every day?” is enormous. Even if there are no problems with your data, those employees can almost always tell you where your balance metrics should be, though normally that comes in the form of “you’re missing the point!” (again, ask me how I know this).
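
Since “balance metric” can sound abstract, here’s a tiny illustration with made-up numbers, in the spirit of the Andy Grove quote further down: a headline metric paired with a second metric that tracks its adverse consequence.

```python
# Hypothetical support-team data; all numbers invented for illustration.
# The headline metric alone rewards closing tickets fast; the paired
# "balance" metric catches the adverse consequence (tickets bouncing
# back because they weren't actually fixed).
tickets_closed = [120, 135, 150, 170]  # per week
tickets_reopened = [6, 9, 15, 28]      # same weeks

for week, (closed, reopened) in enumerate(zip(tickets_closed, tickets_reopened), 1):
    print(f"Week {week}: closed={closed}, reopen rate={reopened / closed:.0%}")

# Closed tickets climb ~42% over the month while the reopen rate roughly
# triples: the headline number looks great right up until the balance
# metric shows how it's being achieved.
```

The exact metrics will differ by workplace; the transferable part is pairing every headline number with one that tracks its failure mode.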

For anyone who runs workplace metrics, I think it’s important to note that every person in the organization is going to see the numbers differently, and that’s incredibly valuable. Just as high level execs specialize in forming long term visions that day-to-day workers might not see, those day-to-day workers specialize in details the higher ups miss. Getting numbers reality-checked by both groups isn’t easy, but your data integrity will improve dramatically and the decisions you make will ultimately improve.

The Perfect Metric Fallacy

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.” – Daniel Yankelovich

“Andy Grove had the answer: For every metric, there should be another ‘paired’ metric that addresses the adverse consequences of the first metric” -Marc Andreessen

“I didn’t feel the ranking system you created adequately captured my feelings about the vendors we’re looking at, so instead I assigned each of them a member of the Breakfast Club. Here, I made a poster.” -me

I have a confession to make: I don’t always like metrics. There. I said it. Now most people wouldn’t hesitate to make a declaration like that, but for someone who spends a good chunk of her professional and leisure time playing around with numbers, it’s kind of a sad thing to have to say. Some metrics are totally fine of course, and super useful. On the other hand, there are times when it seems like the numbers subsume the actual goal and become the thing front and center. This is bad. In statistics, numbers are a means to an end, not the end. I need a name for this flip flop, so from here on out I’m calling it “The Perfect Metric Fallacy”.

The Perfect Metric Fallacy: The belief that if one simply finds the most relevant or accurate set of numbers possible, all bias will be removed, all stress will be negated, and the answer to complicated problems will become simple, clear and completely uncontroversial.

As someone who tends to blog about numbers and such, I see this one a lot. On the one hand, data and numbers are wonderful because they help us identify reality, improve our ability to compare things, spot trends, and overcome our own biases. On the other hand, picking the wrong metric out of convenience or bias and relying too heavily on it can make everything I just named worse, plus piss off everyone around you.

Damn.

While I have a decent number of my own stories about this, what frustrates me is how many I hear from others. When I tell people these days that I’m into stats and data, almost a third respond with some sort of horror story about how data or metrics are making their professional lives miserable. When I talk to teachers, this number goes up to 100%.

This really bums me out.

It seems that after years of disconnected individuals going with their guts and kind of screwing everything up, people decided we should put numbers on those grand ideas to prove they were going to work. When these ideas fail, people either blame the numbers (if you’re the person who made the decision) or the people who like the numbers (if you’re everybody else). So why do we let this happen? Almost everyone knows up front that numbers are really just there to guide decision making, so why do we get so obsessed with them?

  1. Math class teaches us that if you play with numbers long enough, there will be a right answer. There are a lot of times in life when your numbers have to be perfect. Math class. Your tax return. You know the drill. Endless calculations, significant figures, etc, etc. In statistics, that’s not true. There’s a phenomenon known as “false precision”, where you present data in a way that makes it look more accurate than it really can be. My favorite example of this is a clinic I worked with at one point. They reported weight to two decimal places (as in 130.45 lbs), but didn’t have a standard around whether or not people had to take their coats off before being weighed (there’s a quick sketch of why that matters after this list). At the beginning of this post, I put a blurb about me converting a ranking system into a Breakfast Club poster. This came up after I was presented with a 100 point scale to rank 7 vendors against each other in something like 16 categories. When you have 3 days to read through over 1000 pages of documentation and assign scores, your eyes start to blur a little and you start getting a little existential about the whole thing. Are these 16 categories really the right categories? Do they cover everything I’m getting out of this? Do I really feel 5 points better about this vendor than that other one, and are both of them really 10 points better than that 3rd one? Did I increase the strictness of my rankings as I went along, or get nicer as I had to go faster? It wasn’t a bad ranking system; the problem was me. If I can’t promise I stayed consistent in my rankings over 3 days, how can I attest to my numbers at the end?
  2. We want numbers to take the hit for unpleasant truths. A few years ago someone sent me a comic strip that I have since sent along to nearly everyone who complains to me about bad metrics in the workplace. It almost always gets a laugh, and most people then admit that it’s not the numbers they have a problem with, it’s the way they’re being used. There’s a lot of unpleasant news to deliver in this world, and people love throwing up numbers to absorb the pain. See, I would totally give you a raise or more time to get things done, but the numbers say I can’t. When people know you’re doing exactly what you were going to do to begin with, they don’t trust any number you put up. This gets even worse in political situations. So please, for the love of God, if the numbers you run sincerely match your pre-existing expectations, let people look over your methodology, or show where you really tried to prove yourself wrong. Failing to do this gives all numbers a bad rap.
  3. Good data is hard to find. One of the reasons statistician continues to be a profession is that good data is really, really, really hard to find, and good methods of analysis require a lot of legwork. In the course of trying to find a “perfect metric”, many people end up believing that part of being “perfect” is being easily obtainable. As the first quote above mentions, this is ridiculous. It’s also called the McNamara Fallacy, which warns us that the easiest things to quantify are not always the most important.
  4. Our social problems are complicated. The power of numbers is strong. Unfortunately, the power of some social problems is even stronger. Most of our worst problems are multifaceted, which of course is why they haven’t been solved yet. When I decided to use metrics to address my personal weight problem, I came up with 10 distinct categories to track daily for one primary outcome measure. That’s 3,650 data points a year, and that’s just for me. Scaling that up is immensely complicated, and introduces all sorts of issues of variability among individuals that don’t exist when you’re looking at just one person. Even if you do luck out and find a perfect metric, in a constantly shifting system there is a good chance that improving that metric will cause a problem somewhere else. Social structures are like Jenga towers, and knocking one piece out of place can have unforeseen consequences. Proceed with caution, and don’t underestimate the value of small successes.

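Point 1 above deserves the promised sketch. Here’s a minimal simulation of the clinic example; the coat weight and visit count are invented for illustration, and the point is just that the recorded precision (0.01 lb) is swamped by an unmeasured source of variation:

```python
import random

random.seed(0)

true_weight = 130.45   # hypothetical coat-free weight
coat_effect_lbs = 1.5  # assumed weight of a coat; invented for illustration

# Ten visits where nobody standardized whether the coat stays on:
readings = []
for _ in range(10):
    coat_on = random.random() < 0.5
    reading = true_weight + (coat_effect_lbs if coat_on else 0.0)
    readings.append(round(reading, 2))  # recorded to two decimal places

print(readings)
print(f"Recorded precision: 0.01 lb; actual spread: {max(readings) - min(readings):.2f} lb")
# The chart treats 130.45 vs. 131.95 as a real change, but the
# difference is the coat, not the patient.
```
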
Now again, I do believe metrics are incredibly valuable and, used properly, can generate good insights. However, in order to prevent your perfect metric from turning into a numerical bludgeon, you have to keep an eye on what your goal really is. Are you trying to set kids up for success in life, or get them to score well on a test? Are you trying to maximize employee productivity, or keep employees over the long term? Are you looking for a number or a fall guy? Can you know what you’re looking to find out with any sort of accuracy? Things to ponder.


The Forrest Gump Fallacy

Back in July, I took my first crack at making up my own logical fallacy. I enjoyed the process, so today I’m going to try it again. With election season hanging over us, I’ve seen a lot of Facebook-status-turned-thinkpieces, and I’ve seen this fallacy pop up more and more frequently. I’m calling it “The Forrest Gump Fallacy”. Yup, after the movie character.

For those of you not prone to watching movies or too young to have seen this one, here’s some background: Forrest Gump is a 1994 movie about a slow-witted but lovable character who manages to get involved in a huge number of political and culturally defining moments over the course of his life from 1944 to 1982. Over the course of the film he meets almost every US president from that time period, inadvertently exposes Watergate, serves in Vietnam and speaks at anti-war rallies, and starts the smiley face craze. It has heaps of nostalgia and an awesome soundtrack.

So how does this relate to Facebook and politics? Well, as I’ve been watching people attempt to explain their own political leanings recently, I’ve been noticing that many of them seem to assume that the trajectory of their own life and beliefs mirrors the trajectory of the country as a whole. To put it more technically:

Forrest Gump Fallacy: the belief that your own personal cultural and political development and experiences are generalizable to the country as a whole.

There are a lot of subsets of this, obviously…particularly things like “this debate around this issue didn’t start until I was old enough to understand it” and “my immediate surroundings are nationally representative”. Fundamentally this is a sort of hasty generalization fallacy, where you draw conclusions from a very limited sample size. Want an example? Okay, let me throw myself under the bus.

If you had asked me a few years ago to describe how conservative or liberal the US was in the various decades I’d lived through, I probably would have told you the following: the 1980s were pretty conservative, and the 1990s also had a strong conservative influence, mostly pushing back against Clinton. Things really liberalized around the year 2000, when people started pushing back against George W Bush. I was pretty sure this was true, and I was also not particularly right. Here is party affiliation data from that time:

Republican affiliation actually dropped during the 90s and rose again after 2000. Now, I could make some arguments about underdogs and the strength of cultural pushback, but here’s what really happened: I went to a conservative private Baptist school up through 1999, then went to a large secular university for college in the early 2000s. The country didn’t liberalize in the year 2000; my surroundings did. This change wasn’t horribly profound, after all engineering profs are not particularly known for their liberalism, but it still shifted the needle. I could come up with all the justifications in the world for my biased knee-jerk reaction, but I’d just be self-justifying. In superimposing the change in my surroundings and personal development over the US as a whole, I committed the Forrest Gump Fallacy.

So why did I do this? Why do others do this? I think there’s a few reasons:

  1. We really are affected by the events that surround us. Most fallacies start with a grain of truth, and this one does too. In many ways we are affected by watching the events that surround us, and we really do observe the country change around us. For example, most people can quite accurately describe how their own feelings and the feelings of the country changed after September 11th, 2001. I don’t think this fallacy arises around big events, but rather when we’re discussing subtle shifts on more divisive issues.
  2. Good cultural metrics are hard to come by. A few paragraphs ago, I used party affiliation as a proxy for “how liberal” or “how conservative” the country was during certain decades. While I don’t think that metric is half bad, it’s not perfect. Specifically, it tells us very little about what’s going on with that “independent” group…and they tend to have the largest numbers. Additionally, it’s totally possible that the meaning of “conservative” or “liberal” will change over time and on certain issues. Positions on social issues don’t always move in lockstep with positions on fiscal issues, and vice versa. Liberalizing on one social issue doesn’t mean you liberalize on all of them, either. In my lifetime, many people have changed their opinion on gay marriage but not on abortion. When it’s complicated to get a good picture of public opinion, we rely more heavily on our own perceptions. This sets us up for bias.
  3. Opinions are not evenly spread around. This is perhaps the biggest driver of this fallacy, and it’s no one’s fault really. As divided as things can get, the specifics of the divisions can vary widely in your personal life, your city and your state. While the New Hampshire I grew up in generally leaned conservative, it was still a swing state. My school however was strongly conservative; almost everyone was a Republican, and certainly almost all of the staff. Even with only 25% of people identifying themselves as Republican, there are certainly many places where someone could be the only Democrat, and vice versa. Ann Althouse (a law professor blogger who voted for Obama in 2008) frequently notes that her law professor colleagues consider her “the conservative faculty member”. She’s not conservative compared to the rest of the country, but compared to her coworkers she very much is. If you don’t keep a good handle on the influence of your environment, you can walk away with a pretty confused perception of “normal” (there’s a toy sketch of this after the list).
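
Here’s the toy sketch promised in point 3. All the numbers are invented for illustration, but it shows why estimating national opinion from your own circle fails when opinions cluster:

```python
import random

random.seed(42)

national_share = 0.25  # echoing the ~25% Republican identification figure above
my_environment = 0.90  # a heavily Republican school; invented for illustration

# "Poll" 200 people, all drawn from my own clustered environment:
sample = [random.random() < my_environment for _ in range(200)]
estimate = sum(sample) / len(sample)

print(f"True national share:         {national_share:.0%}")
print(f"Estimate from my own circle: {estimate:.0%}")
# A bigger sample from the same environment won't fix this: the
# estimate is precise, but the sampling frame is wrong.
```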

So what do we do about something like this? I’m not really sure. The obvious answer is to try to mix with people who don’t think like you, aren’t your age, and have a different perspective from yours, but that’s easier said than done. There’s some evidence that conservatives and liberals legitimately enjoy living in different types of places, and that the polarization of our daily lives is getting worse. Sad news. On the other hand, the internet makes it easier than ever to seek out opinions different from your own and to get feedback on what you might be missing. Will any of it help? Not sure. That’s why I’m sticking with just giving it a name.