I continue to be swamped with work, my capstone project, and a bad fantasy football team. In lieu of a real post, please accept this cartoon-I-can’t-find-a-source-for about social stigma among bar charts:

Kinda catty, aren’t they?
I’m swamped with thesis writing this weekend, but I saw this on Twitter this week and thought it was worth a repost:
It’s from a series from WNYC, and it was actually originally posted over 4 years ago. They have a whole series of these, which I have not looked through, but it would be interesting to click on them every time something bad happens and see how the advice holds up.
My son (age 5) has developed the most fascinating (for both of us) new hobby of creating his own Lego superheroes by rearranging the ones that he has. He’s spent hours on this recently, meticulously dismantling them and looking for exactly the right piece to create the character he wants. Behold, a few recent versions:
He refused to tell me their names and got shy when I asked, but from what I can put together it’s (from right to left): Joker in disguise, Queen Tut/Barbara Gordon, Robin ripping his pants off, Happy Bug Man, Caveman Scarecrow and Spidergirl.
Never one to let a good analogy go, I attempted to explain to him that he’s figuring out how many combinations there are for any group of Legos. For example, if we wanted to know how many unique creations we could make out of the pieces in the picture above, the answer is over 70,000 unique characters. He informed me “yes, but they wouldn’t be cool guys.” The kid’s got an aesthetic.
So I tried it a different way, and used it to explain to him the difference between a permutation and a combination. If I told him he could only take 2 of these 6 creations in the car, he’d have 15 different groups of two to select from. That’s a combination.
If, however, he has a friend over and I tell them they can take two creations in the car and they each get one, they now have 30 possibilities: the original 15 pairs times 2 ways of splitting them. That’s a permutation: the order matters in addition to the picks, so the number is always higher.
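The arithmetic above is easy to check in a couple of lines of Python (the numbers are the ones from this post; `math.comb` and `math.perm` are in the standard library as of Python 3.8):

```python
import math

# 6 creations, pick 2, order doesn't matter: a combination
# ("which two come in the car?")
groups = math.comb(6, 2)

# 6 creations, pick 2, order matters: a permutation
# ("who gets which one?" - each pair can be split 2 ways)
assignments = math.perm(6, 2)

print(groups)       # 15
print(assignments)  # 30
```

Note that the permutation count is always the combination count times the number of ways to order each pick, which is why it comes out exactly double here.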
Of course, they will actually just want the same one, and then we will move on to a lesson in sharing. Also, he’s 5, and he kinda just wandered off part way through permutations and then asked if he could be a baby turtle. That’s when I figured I’d move this lesson to the blog, where I was slightly less likely to get turtle related commentary as a response.
Anyway, the history of using Legos to illustrate mathematical concepts is actually pretty robust, and can get really interesting. For more on permutations and combinations, try here. For why stepping on a Lego hurts so much, try this:
Okay, this is going to be another one of those posts where I make up a term for something I’m seeing that annoys me. You’ve been warned.
When I was a little kid, I remember one of the first times I ever saw a buoy in the ocean. I don’t remember how old I was, but I was probably 5 or so, and I thought the buoy was actually somebody’s ball that had floated away. As the day went on, I remember being amazed that it managed to stay so close to the same spot without moving…it was far from shore (at least to a 5 year old) but somehow it never disappeared entirely. I think my Dad must have noticed me looking at it because he teased me about it for a bit, but he finally told me it was actually anchored with a chain I couldn’t see. Life lessons.
I think about that feeling sometimes when I see statistics quoted in articles with little context. It’s always something like “75% of women do x, which is more than men”, and then everyone makes comments about how great/terrible women are for a while. Five paragraphs down you find out that 72% of men also do x, meaning all of the previous statements were true, but a little less meaningful in context. What initially looked like a rather interesting free-floating statistic was actually tied to something bigger. It may not stop being interesting or useful, but it certainly changes the presentation a bit. In other words:
Buoy statistic: A statistic that is presented on its own as free-floating, while the context and anchoring data are hidden from initial sight.
I see buoy statistics most often when it comes to group differences. Gender, racial groups, political groups… any time you see a number for what one group does without the corresponding number for the other group, get suspicious.
For example, a few years ago, a story broke that the (frequently trolling) Public Policy Polling Group had found that 30% of Republican voters supported bombing the fictional city of Agrabah from the movie Aladdin. This got some people crowing about how dumb Republicans were, but a closer read showed that 36% of Democrats opposed it. Overall, an almost identical number of each party (43% vs 45%) had an opinion about a fictional city. Now this was a poll question designed to get people to say dumb things, and the associated headlines were pure buoy statistics.
Another example was a GitHub study from a few years ago showing that women had a lower acceptance rate on their pull requests if their user name made it clear they were female (71.8% down to 62.5%). Some articles ended up reporting that women got far fewer requests accepted than men, but it turns out that men actually got about 64% of their requests accepted. While it was true the drop-off from a gender-neutral name was bigger for women (men went from about 68% to about 64%), 62.5% vs 64% is not “far fewer”. (Note: numbers are approximate because, annoyingly, exact numbers were not released.)
I’m sure there are other examples, but basically any time you get impressed by a statistic, only to feel a bit of a let down when you hear the context, you’ve hit a buoy statistic. Now, just like with buoys, these statistics are not without any use. One of the keys to this definition is that they are real statistics, just not always as free-floating as you first perceive them. Frequently they are actually the mark of something legitimately interesting, but you have to know how to take them. Context does not erase usefulness, but it can make it harder to jump to conclusions.
I mentioned a few posts ago that I’m finally (finally!) working on my final capstone project for my degree. It’s going well so far, but it struck me this weekend exactly how much my process of wading through data resembles my process of cleaning out my closets:


Hope fervently the reduced pile size will quell my growing sense of panic.
I mean, it seemed like a good idea at the time.



The end.
One of the most common running themes on this blog is discussions of human bias, a topic I clearly believe deserves quite a bit of attention. In recent years though, I have started thinking a lot more about machine bias, and how it is slowly influencing more of our lives. In thinking about machine bias though, I’ve noticed recently that many people (myself included) tend to anthropomorphize it and evaluate it as though it were bias coming from a human with particularly wide-spread influence. Since anthropomorphism is actually a cognitive bias itself, I thought I’d take a few minutes today to talk about things we should keep in mind when talking about computers/big data algorithms/search engine results. Quite a few of my points here will be based on the recent kerfuffle around Facebook offering to target your ads to “Jew haters”, the book Weapons of Math Destruction, and the big data portion of the Calling BS class. Ready? Here we go!
Overall, things aren’t entirely hopeless. There are efforts underway to come up with software that can systematically test “black box” algorithms on a larger scale to help identify biased algorithms before they can cause problems. However, until something reliable can be found, people should be aware that the biases we’re looking for are not the ones you would normally see if humans were running the show. One of the reasons AI freaks me out so much is because we really do all default to anthropomorphizing the machines and only look out for the downsides that fit our pre-conceived notions of how humans screw up. While this comes naturally to most of us, I would argue it’s one of the more dangerous forms of underestimating a situation we have going today. So uh, happy Sunday!
Twice recently I’ve seen minor characters on both sides of the political spectrum claim that they foresaw/predicted some recent event with “eerie precision”, on topics where their predictions had actually appeared (to me at least) only loosely connected to what actually happened.
While I was annoyed by these people, I was more annoyed by the fans of theirs who rushed to agree that it was clear that they had amazing foresight in making their calls. While obviously some of that is just in-group defensiveness, some of them really seem to believe that this person had done something amazing. While none of those fans are people who read my blog, I figured I’d blow off some steam by reminding everyone of two things:

So yes, next time you see someone make an “amazing prediction”, take a deep breath and ask yourself how much redefining is going on and how many predictions they had to throw out to get to that one that hit. Well, probably not you specifically dear reader, you’re probably fine. This is almost certainly one of those “if you’re thinking about it enough to read this post, you’re probably not the problem” things. Regardless, thanks for letting me get that off my chest.
This past week I was having a discussion with my high school teacher brother about an experiment his class was running and appropriate statistical methods for analyzing the data. We were discussing using the chi-square statistic to compare data from an in-class calorimetry experiment to the expected/published values (this is the point where the rest of my family wandered off), and he asked what other statistical analysis his kids could do that might help them understand their results. I mentioned that I was a big fan of confidence intervals for understanding data like this, and started to rattle off my reasons. While testing that produces a p-value is more commonly used in scientific work, I think for most people confidence intervals are more intuitive and should be worked in to the mix. Since we were talking about all this at around 7am (prior to both the second cup of coffee AND a trip out to move the cows at his farm), I figured I’d use my blog post today to more thoroughly lay out a few reasons confidence intervals should be more widely used (particularly by teachers) and provide a few helpful links.
The foundation of my argument comes from a paper published a few years ago called “Why the P-value culture is bad and confidence intervals a better alternative”, which gets into the weeds on the stats, but makes a good overall case for moving away from a reliance on p-values and towards a focus on confidence intervals. The major reasons are:
On this graph you can draw lines to show not just “is it different” but also “when do we really care”. I think this is easier to show kids than just a p-value by itself, as there’s no equivalent visual to show p-values.

| n | 95% Confidence interval | p-value |
|---|---|---|
| 10 | 94.9-101.1 | .2377 |
| 15 | 95.5-100.5 | .1436 |
| 20 | 95.8-100.2 | .0896 |
| 25 | 96-100 | .0569 |
| 30 | 96.2-99.8 | .0366 |
I think watching the range shrink is clearer than watching a p-value drop, and again, this can easily be converted into a graph. If you’re running the experiment with multiple classes, comparing their results can also help show kids a wider range of what the variation can look like.
So there you go. Confidence intervals are a superior way of presenting both effect size and the significance of a finding, and they are easy to visualize for those who have trouble with written numbers. While they don’t do away with all of the pitfalls of p-values, they really don’t add any new pitfalls to the mix, and they confer some serious benefits for classroom learning. I used Graphpad to quickly calculate the confidence intervals I used here, and they have options for both summary and individual data.
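For anyone who’d rather compute an interval by hand than use Graphpad, here’s a short sketch of the calculation in Python. The data below is made up for illustration (think boiling-point measurements from a class calorimetry experiment, not the actual numbers behind the table above), and the critical value 2.262 is the standard two-sided 95% t value for n − 1 = 9 degrees of freedom, which you’d look up for your own sample size:

```python
import statistics

# hypothetical class measurements (e.g., boiling point in degrees C); n = 10
data = [96.2, 97.5, 98.1, 99.0, 97.8, 98.4, 96.9, 98.8, 97.2, 98.6]

n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / n ** 0.5   # standard error of the mean

# two-sided 95% critical value for the t-distribution with 9 df
t_crit = 2.262

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")

# the "is it different?" question: does the CI contain the published value?
published = 100.0
print("published value inside CI:", lower <= published <= upper)
```

If the published value falls outside the interval, that’s the visual equivalent of a significant result, and you can show the interval shrinking as n grows the same way the table above shows it.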
I got a little behind in my reading list this year, but I’m just finishing up Ben Goldacre’s Bad Pharma and it’s really good. Highly recommended if you want to know all sorts of excruciating detail about how we get the drugs we do, and lose faith in most everything.
The book introduced me to a paper from 2007 called “Following the Script: How Drug Reps Make Friends and Influence Doctors“, where a former pharma salesman lays out the different categories of doctors he encountered and how he worked to sell to each of them. This includes a whole table with doctor categories including “The friendly and outgoing doctor” and the “aloof and skeptical” doctor, along with the techniques used to sell to each.
Since Goldacre is the absolute epitome of “aloof and skeptical” he added his own explanation of the tactic they use on evidence based doctors:
“If they think you’re a crack, evidence-based medicine geek, they’ll only come to you when they have a strong case, and won’t bother promoting their weaker drugs at you. As a result, in the minds of bookish, sceptical evidence geeks, that rep will be remembered as a faithful witness to strong evidence; so when their friends ask about something the rep has said, they’re more likely to reply, ‘Well, to be fair, the reps from that company have always seemed pretty sound whenever they’ve brought new evidence to me…’ If, on the other hand, they think you’re a soft touch, then this too will be noted.”
Maybe it’s just because I’ve never been in sales, but it really had not occurred to me that was a sales technique a person could use. Sneaky.
Of course I then realized I’ve seen other, similar things in other situations. While most people know better than to come to me with shoddy stats during political debates, I’ve definitely seen people who told me that they personally agree certain numbers are shoddy later use those same numbers in Facebook arguments with others who aren’t quite as grouchy. It’s an offshoot of the old “be pleasant while the boss/parent is in the room, show your true colors later” thing. Like a data Eddie Haskell. I may have a new bias name here. Gotta work on this one.
You may have seen talk about the PURE study, which was recently being reported under headlines like “Huge new study casts doubt on conventional wisdom about fat and carbs”. The study found that those with low-fat diets were more likely to die by the end of the study than those with higher-fat diets. However, Carbsane took a look and noticed some interesting things. First, the US wasn’t included, so we may need to be careful about generalizing the results there. The study also included some countries that were suffering other health crises at the time, like Zimbabwe. Finally, the group they looked at was adults age 35 to 70, but they excluded anyone who had any pre-existing heart problems. This was the only disease they excluded, and it makes some of the “no correlation with heart disease” conclusions a little harder to generalize. To draw an equivalency, it’s like trying to figure out if smoking leads to lung cancer by excluding everyone in your sample who already has lung problems. What you really want to see is both groups, together and separately.
For my language-oriented friends: this article about how cultures without words for numbers get by was really interesting. It makes the assumption that counting distinct quantities is an inherently unnatural thing to do, but I have to wonder about that. Some people do seem more numbers-oriented than others, so what happens to those folks? Do people who are good at numbers and quantities just get really depressed in these cultures? Do they find another outlet? As someone who starts counting things to deal with all kinds of emotions (boredom, stress, etc.), I feel like not having words for numbers would have a serious impact on my well-being.
There are a lot of herbs and supplements out there being marketed with dubious health claims, but exactly how those claims are worded depends on who you are. This article on how the same products are marketed on InfoWars and Goop is an interesting read, and a good reminder of how much the information we get can be colored by marketing spin.
On a political note, this Economist article about the concept of anti-trust laws in the data age was food for thought.
Finally, I decided to do my capstone project for my degree on a topic I’ve become a little bit obsessed with: dietary variability. Specifically, I’m looking at people who identify as food-insecure (defined as not having the resources to obtain enough food to eat in the last 30 days), and comparing their health habits to those who have enough. While I already have the data set, I’ve been looking for interesting context articles like this one, which explores the “food-insecurity paradox”. Apparently in the US, women who are food-insecure are actually more likely to be obese than those who aren’t. Interesting stuff.