Calling BS Read-Along Week 2: Spotting BS

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro, click here, or if you want to go back to Week 1, click here.

Hey hey! Welcome back! It is week 2 of the read-along, and we’ve got some good stuff going on today. After spending last week learning what bullshit is, this week we’re going to focus on how to spot it in the wild. This is well-timed, because a few days ago I had a distressing discussion with a high school teacher-friend who had assigned her kids some of my Intro to Internet Science posts as a prelude to a section on fake news. She had asked them to write an essay about the topic of “fake stuff on the internet” before the discussion in class, and apparently more than a few of them said something to the effect of “that’s nice, but I’ve never heard of fake news so this is not a problem in my life”. Groooooooooooooooooooooooooooan

Of course the problem with bullshit is that no one warns you you’re going to see it, and no one slaps you afterwards and says “you just read that uncritically”. With so much of the bullshit these days being spread by social media, inattentional blindness is in high gear. If 50% of study participants can’t see a gorilla when they’re trying to count basketball passes, what makes you think you’re going to correctly spot bullshit while you’re trying to post pictures/think of funny status updates/score a political point against your uncle/see how your ex is doing????

The only hope is to teach yourself some tricks and remain eternally vigilant. In other words (and with apologies to Hunter S. Thompson): I hate to advocate pessimism, skeptical toolkits, the GRIM test and constant paranoia, but they’ve always worked for me.

With that intro, let’s get to the readings! First up is Chapter 12 of Carl Sagan’s The Demon-Haunted World: Science as a Candle in the Dark: “The Fine Art of Baloney Detection”. I don’t think I’d read this chapter since I first read the book maybe 15 years ago, so it was a lot of fun to read again. Sagan starts by drawing a distinction that will be familiar to those who read last week’s piece: those who believe common misconceptions vs those who promote them professionally. The example he uses is being able to contact the dead. He admits to his own longing to talk to his deceased parents, and how much appeal the belief that you can sometimes “feel” the dead holds for most of us. As an atheist, he firmly believed the idea of life after death was baloney, but he gives a pass to the large number of people who believe in life after death, or even those who believe they’ve had contact with the dead in their personal lives. To him, those beliefs are normal, and even if you don’t think they are true or rational, they are hard to criticize. His wrath kicks in at those who seek to make money promoting this stuff and encouraging people to believe in irrational things, like psychics or mediums. He believes that undermining a society’s ability and desire to seek out independent truth and facts is one of the worst things a person can do. It isn’t just psychics doing this, of course, but most of the advertising world as well, which will throw any “fact” at you if you’ll just buy their product. In response to this constant barrage of misinformation and misdirection, he offers a “tool kit” for skeptical thinking. The whole thing is on the 4th and 5th pages, but the short version is this:

  • Get independent confirmation of facts
  • Encourage debate
  • Don’t trust authority blindly
  • Come up with multiple possible explanations
  • Don’t stick to one explanation just because it is the one you thought of
  • Find something to quantify, which makes everything easier to compare
  • Make sure the whole chain of the argument works. Don’t let people mumble through part of it.
  • Prefer simple explanations (Occam’s razor)
  • Look for something falsifiable. If something can never be proven wrong, it is, well, never going to be proven wrong.
  • Keep a friendly statistician around at all times

Okay, fine, that last one’s mine, not Sagan’s, but he does come out swinging for well-designed experiments. He also includes a really helpful list of the most common logical fallacies (if you want a nice online version, try this one). He concludes with a discussion of corporate advertising, sponsored research, and tobacco companies, where confused science and skewed research kept tobacco around much longer than it should have.

With the stage set by Sagan, the rest of the readings include some specific tips and tricks to spot various issues with numbers and data. Some are basic plausibility checks, and some are more advanced. These are:

The “what does this number even mean” check: Last week we talked about bullshit as “unclarifiable unclarity”, and this case study is a good example of doing that with numbers. Written by West and Bergstrom, this example looks at a packet of hot cocoa that claims to be “99.9% caffeine free”.  It is not so much that the claim is implausible or even inaccurate, but that it is completely meaningless. If you’re measuring by weight, even a highly caffeinated drink will be mostly “caffeine free”. While it is likely the cocoa actually is low caffeine, this statistic doesn’t give you much insight. It is the appearance of information without any actual substance.
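To see just how empty the claim is, here is a quick sanity check by weight. The figures are rough, assumed values (roughly 95 mg of caffeine in a 240 g cup of brewed coffee is a commonly cited ballpark), not anything from the case study itself:

```python
# Rough, assumed figures: ~95 mg of caffeine in a ~240 g cup of brewed coffee.
cup_mass_g = 240
caffeine_g = 0.095

# What fraction of the cup's weight is NOT caffeine?
caffeine_free_pct = 100 * (1 - caffeine_g / cup_mass_g)
print(f"Regular brewed coffee is {caffeine_free_pct:.2f}% caffeine free by weight")
# -> about 99.96% -- even fully caffeinated coffee clears the "99.9%" bar.
```

Which is exactly the point: by weight, nearly every drink is “99.9% caffeine free”, so the number tells you nothing.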

Fermi estimations: A technique named after Enrico Fermi, it focuses on getting people to guess numbers by order of magnitude (i.e. 10 vs 100 vs 1,000, etc.), not the exact number. When doing rough calculations with large numbers, this can actually yield surprisingly accurate results. To play around with making these estimates, they provide a link to this game here. There’s a good book on this and how to solve problems like “how many piano tuners work in New York City?” called Guesstimation if you’re really into it.

Being able to focus on the order of magnitude is surprisingly helpful in spotting bullshit, as the case study of food stamp fraud numbers shows. A Fox News report said that food stamp fraud costs taxpayers $70 million a year, and asked if this level of fraud means it is time to end food stamps. If we take that number at face value, is this a big deal? Using Fermi estimation, you can figure out a ballpark number for total food stamp payouts, and determine that this loss would be around 0.2% of all benefits paid. That is really close to the number you get if you dig up all the real numbers: 0.09% of all benefits paid.
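A minimal sketch of that kind of Fermi estimate, using round, assumed inputs of my own rather than the course’s exact figures (only the order of magnitude matters here):

```python
# Round, assumed inputs -- only the order of magnitude matters:
recipients = 40_000_000      # ~tens of millions of people on food stamps
dollars_per_month = 100      # ~a hundred dollars per person per month

total_benefits = recipients * dollars_per_month * 12  # ~$50 billion a year
claimed_fraud = 70_000_000                            # the reported $70 million

print(f"Fraud as a share of benefits: {claimed_fraud / total_benefits:.2%}")
# A small fraction of one percent, whatever round inputs you pick.
```

Change the inputs by a factor of two in either direction and the answer is still a small fraction of a percent, which is the whole power of the technique.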

GRIM testing: Edging into the deeper end of the pool, this is a neat little trick that mostly has applications for those reviewing studies with small sample sizes. GRIM stands for the “granularity-related inconsistency of means” test, and it is a way of quickly and easily looking for data problems. The full explanation (plus the fascinating story of its development) is here, but here’s the quick version: if your sample size is small and you are counting whole numbers, your mean has to end in very predictable decimal places, and if it doesn’t, something’s wrong. For example, a study saying that 10 people reported having an average of 2.24 children is bogus. Why? Because 2.24 = (total number of kids)/10, and the total number of kids would have to be 22.4. There are a lot of possible explanations for this, but most of them come down to the types of sloppiness or confusion that might make you question other parts of the paper.
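The check is easy to automate. Here’s a minimal sketch (the function name and the integer-scan approach are my own, not from the GRIM paper): a mean of n whole-number observations must be (some integer)/n, so we just test whether any nearby integer total rounds back to the reported mean.

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if a mean reported to `decimals` places could come
    from averaging n whole-number observations."""
    # The true total must be an integer, so check the integers on either
    # side of mean * n: does either round back to the reported mean?
    candidate = int(mean * n)
    return any(
        round(total / n, decimals) == round(mean, decimals)
        for total in (candidate, candidate + 1)
    )

print(grim_consistent(2.24, 10))  # 22.4 "children" is impossible -> False
print(grim_consistent(2.20, 10))  # 22 children averages to 2.2   -> True
```

Note that the test loses its bite as n grows: with 25 people, a mean of 2.24 is perfectly achievable (56/25), which is why GRIM is mainly useful for small samples.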

By the way, if you want to leave the deep end of the pool and dive right into the ocean, the author of the GRIM test has a SPRITE test that deals with the implications of standard deviations.

Newcomb-Benford Law: This law is one of my favorites because it was spotted back in 1881 for a reason that simply wouldn’t happen today: uneven wear on books. Back in the days before calculators, when people had to actually look through a book of numbers to figure out the logarithm of a certain value, an astronomer named Simon Newcomb noticed that the books were really worn out in the first sections, where the numbers with low leading digits were, and rather clean in the back, where the leading digits were higher. He began to wonder whether “random” numbers found in nature were more likely to start with small digits than large ones, then he just decided to declare it was so, stating that the probability that the leading digit is a certain value d is equal to log((d+1)/d). Basically, a random number like the population of a country has about a 30% chance of starting with a 1, and only about a 5% chance of starting with a 9.
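The formula is easy to check for yourself. A quick sketch tabulating the predicted probability for each leading digit:

```python
import math

# Newcomb/Benford: P(leading digit = d) = log10((d + 1) / d)
for d in range(1, 10):
    print(f"P(leading digit = {d}) = {math.log10((d + 1) / d):.1%}")
# Digit 1 leads about 30.1% of the time, digit 9 only about 4.6%,
# and the nine probabilities sum to exactly 1.
```

The sum telescopes to log10(10) = 1, which is a nice sanity check that the formula really is a probability distribution over the nine possible leading digits.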

Despite having little proof beyond a worn-out book, it turns out this law is actually pretty true. Machine-generated data can gum up the works a bit, but natural phenomena tend to follow this rule. Benford got his name in there by pulling data from hundreds of sources: rivers, populations, physical constants, even random numbers from the pages of Reader’s Digest, and categorizing them by leading digit. He gathered 20,000 numbers and found that low leading digits simply WERE more common. The proposed mathematical explanations for this are not light reading no matter what they promise, but it is pretty much enough to know that it is a thing. It has been used to detect election fraud and is also used in forensic accounting, but basically all the layperson needs to know is that lists of numbers whose leading digits skew high aren’t as plausible as those whose leading digits skew low.

And one more for the road: It is worth noting that there is actually another Benford Law that would be not-irrelevant in a course like this. Benford’s Law of Controversy states that “passion is inversely proportional to the amount of real information available”.

All of these tricks may seem like a lot to keep in mind, so if you want some practice, take the advice I give to high school students: find a cause you really care about and go read bad arguments or propaganda from the “other side”. As I’ve mentioned before, your ability to do math improves dramatically when said math helps you prove a point you feel emotionally attached to. Using this to your advantage while learning these tricks might help you get them down a little faster. Of course, the problem with learning these tricks is that unless you’re entirely hypocritical, eventually you might have to turn them around on your own side, so be forewarned. To this day the high point of my blogging career is when my political activist brother left me a voicemail screaming “I JUST LEFT A MEETING WITH PEOPLE I LIKE MAKING A POINT I AGREE WITH BUT THEY USED BAD STATISTICS THAT I FIGURED OUT WERE WRONG AND I COULDN’T STOP STARING AT THEM AND I HAD TO CORRECT THEM IN FRONT OF EVERYONE AND THEN THEY TOLD ME IT DIDN’T MATTER AND NOW I’M MAD AT THEM AND YOU!!!!”.

So what am I taking away from this week? A few things:

  1. Even if you’re not a “numbers person”, a good sense of how numbers work can go a long way towards checking the plausibility of a claim
  2. Paranoia is just good sense if people really are out to get you. People who are trying to sell you something are not the most trustworthy sources
  3. Math tricks are fun
  4. People named Benford come up with an unusual number of bullshit related laws

I’m still checking that last one out, but it seems plausible.

And that wraps up this week! Next week we’ll be wallowing in “the natural ecology of bullshit”, so make sure you meander back next Sunday for that. Bring boots. It’ll be fun.

Week 3 is now up! Read it here.

Calling BS Read-Along Week 1: Intro to BS

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro, click here.

Well hello hello and welcome to Week 1 of the Read-Along! Before we get started I wanted to give a shout out to the Calling Bullshit Twitter feed, and not just because they informed me yesterday that they are jealous of my name. They post some useful stuff over there, so check them out.

We’re kicking off this thing with an Introduction to Bullshit. Now you may think you and bullshit are already well acquainted, but it never hurts to set some definitions up front. The first reading is a quick blog post that explains what is commonly known as either “Brandolini’s Law” or “The Bullshit Asymmetry Principle”, which states that “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it”.

Even if you didn’t know there was a name for this, you know the feeling: you’re in a political discussion when someone decides to launch into something absolutely crazy about “the other side”. Feeling defensive, you look up whatever it is they’re talking about to find evidence to refute it. Even with a smartphone this can take a few minutes. You find one source disagreeing with them; they declare it biased. You find another; it’s not well sourced enough. One more, from a credible person/publication who is normally on “their side”, aaaaand…they drop it with a shrug and mumble that it wasn’t that important anyway. That’s 5-10 minutes of your life gone over something it took them less than 30 seconds to blurt out. Ugh.

Okay, so we all know it when we see it…but what is bullshit? The obvious answer is to go with the precedent established in Jacobellis v. Ohio and merely declare “I know it when I see it”, but somehow I doubt that will get you full credit on the exam. If we’re going to spend a whole semester looking at this, we’re going to have to get more specific. Luckily, since bullshit is not a new phenomenon, there’s actually some pre-existing literature on the topic. One of the better known early meditations on the topic is from 1986 and is simply called “On Bullshit”. For all my readers who are pedantic word nerds (and I know there’s more than one of you!) I recommend this, if only for the multiple paragraphs examining whether “humbug!” and “bullshit!” are interchangeable or not. That discussion led me to the transcript of the 1980 lecture “On the Prevalence of Humbug” by Max Black, which is not in the course but also worth a read.

Now “humbug” isn’t used commonly enough for me to have a real opinion about what it means, but Frankfurt uses it to set an important stage: “humbug” is not just about misrepresenting something, but also about your reasons for doing the misrepresenting. In Black’s essay, he asserts that “humbug” misrepresentations are not actually so much about trying to get someone to believe something untrue as about making yourself look better. This isn’t the “yeah I have a girlfriend, but she’s in Canada” version of looking better either, but a version of looking better where you come across as more passionate, more dedicated and more on board with your cause than anyone else. The intent is not to get someone to believe that what you are saying is literal truth, but to leave them with a certain impression about your feelings on some matter, and about you in general. In other words, there’s an inherently social component to the whole thing.

After the humbug meditations, Frankfurt moves into the actual term bullshit and how it compares to regular old lying. The social aspect still remains, he claims: we might stay friends with a bullshitter, but not with a liar. In Frankfurt’s view, a lie seeks to alter one particular fact, while bullshit seeks to alter the whole landscape. A liar also has some idea of where the truth is and is trying to veer away from it, while a bullshitter just picks and chooses facts, half facts, and lies as they fit or don’t fit a purpose. In other words, bullshit is not necessarily an intent to subvert truth, but an indifference to truth. He also looks at why bullshit has been proliferating: we have more chances to communicate, and more topics to communicate about. Even if our percentage of bullshit stays steady, today’s communication overload means there will be more of it, and the number of complex topics we’re confronted with encourages us to bullshit even further. The essay ends on a fairly philosophical note, concluding that bullshit proliferates the more we doubt that we can ever know the objective reality of anything. Well then.

I liked the essay overall, as I hadn’t really thought of the social component of bullshit in these terms before. The idea that there’s some sort of philosophical underpinning to the whole endeavor is a little interesting as well. But bullshit in the regular world has been around forever, and we mostly know how to cope with it. What happens when it moves into academia or other “higher” sources? That’s the subject of the next essay, “Deeper into Bullshit” by G.A. Cohen. Cohen takes issue with Frankfurt’s focus on the intent of the talker, and wants to focus on the idea of things that are pure nonsense. In his world, it is not the lying/bluffing/indifference to truth part that is the essence of bullshit, but rather the lack of sense, or “unclarifiable unclarity”. You know, the famous “if you can’t dazzle them with brilliance, baffle them with bullshit” line of thought. Cohen also separates producers of this kind of bullshit into two subcategories: those who aim to do this, and those who just happen to do this a lot. Fantastically, Cohen includes a little chart to clarify his version of bullshit vs Frankfurt’s:


So academia gets its own special brand of bullshit, but we’re not done yet. Going even further into this topic, we get Eubanks and Schaeffer’s “A kind word for bullshit: The problem of academic writing“. Starting with the scholarly work of one Dave Barry, they point out the deep ambivalence about bullshit present in many parts of the academy. On the one hand, academics are acutely aware of the problem of bullshit and the corrosive nature of ignorance; on the other hand, they are deeply afraid that much of what they produce may actually be bullshit. To quote Barry:

Suppose you are studying Moby-Dick. Anybody with any common sense would say that Moby-Dick is a big white whale, since the characters in the book refer to it as a big white whale roughly eleven thousand times. So in your paper, you say Moby-Dick is actually the Republic of Ireland. . . . If you can regularly come up with lunatic interpretations of simple stories, you should major in English.

This of course is especially common in the humanities and social sciences due to physics envy.

Eubanks and Schaeffer go on to split bullshitters into two categories of their own: “prototypical” bullshitters, the original type Frankfurt described, and academic bullshitters. Academic bullshit does, of course, share some qualities with prototypical bullshit, namely that it aims to enhance the reputation of the author at the expense of clear communication. They point out that this starts infecting academics while they are still students, when they have every incentive to make themselves look good to the professor, and barely any incentive to make themselves intelligible to the average person.

So with these four essays, what are my major takeaways?

  1. Bullshit must be understood in a social context. To put it on the same level as “lying”  is to miss a major motivation.
  2. Due to point #1, challenging bullshit can take tremendous effort. You not only have to challenge the lack of truth, but also might be undermining someone’s sense of self-importance. That second part tends to make the first part look like a cake walk.
  3. Academia, which should be one of our primary weapons against bullshit, has succeeded in creating its own special breeding ground for bullshit.
  4. Undoing point #3 faces all the challenges previously stated in #2, but edit the sentence like this: you not only have to challenge the lack of truth (make that “clarity”), but also might be undermining someone’s sense of self-importance (insert “and career”).
  5. I need to start using the word “humbug” more often.

The points about academics are particularly well taken, as there seems to be a common misconception that intelligence inoculates you against bullshit and self deception. When I give my talk about internet science to high school kids, it’s almost always AP classes and I have to REALLY emphasize the whole “don’t get cocky kid” point. That’s why I love showing them the motivated numeracy study I talk about here.  They are always visibly alarmed that high math ability actually makes you more prone to calculation errors if making an error will confirm a pre-existing belief you find important. As we examine bullshit and how to refute it, it’s important to note that preventing yourself from spreading bullshit is a great first step.

That does it for this week. See you next week, when we move on to “Spotting Bullshit”!

Week 2 is up! Go straight to it here.

Calling BS Read Along: Series Introduction

Well hello hello! A few weeks ago, someone forwarded me a syllabus for a new class being offered at the University of Washington this semester: Info 198: Calling Bullshit. The synopsis is simple: “Our world is saturated with bullshit. Learn to detect and defuse it.” Obviously I was intrigued. The professors (Carl T. Bergstrom and Jevin West) have decided to put their entire syllabus online along with links to weekly readings, and are planning to add some of the lectures when the semester concludes. Of course this interested me greatly, and I was excited to see that they pointed to some resources I was really familiar with, and some I wasn’t.

Given that I’m in the middle of a pretty grueling semester of my own, I thought this might be a great time to follow along with their syllabus, week by week, and post my general thoughts and observations as I went along. I’m very interested in how classes like this get thought through and executed, and what topics different people find critical in sharpening their BS detectors. Hopefully I’ll find some new resources for my own classroom talks, and see if there’s anything I’d add or subtract.

I’ll start with their introduction next week, but I’ll be following the schedule of lectures posted in the syllabus for each week:

  1. Introduction to bullshit
  2. Spotting bullshit
  3. The natural ecology of bullshit
  4. Causality
  5. Statistical traps
  6. Visualization
  7. Big data
  8. Publication bias
  9. Predatory publishing and scientific misconduct
  10. The ethics of calling bullshit
  11. Fake news
  12. Refuting bullshit

I’ll be reading through each of the readings associated with each lecture, summarizing, adding whatever random thoughts I have, and making sure the links are posted. I’ll be adding a link for the next week’s reading as well. Anyone who’s interested can of course read along and add their own commentary, or just wait for my synopsis.

Happy debunking! (And go straight to Week 1 here)