Visualizing Effect Sizes

Under the weather this week, but I wanted to post this excellent graph from Twitter, showing what different Cohen’s d effect sizes mean in practice for populations:

I love this because despite the efforts of many many people, every time you see some sort of “group x is different from group y” type assertion, you still see people claiming that this either:

  1. Can’t be true because they know someone in group y who is more like group x or
  2. Is completely true and every member of group x is superior to every member of group y

Both are mistakes. For a more detailed look, there’s a visualization tool here that shows what these translate into in terms of random superiority. In other words, if you pick one person from each group at random, what is the chance that the one from the higher group will actually have an outcome above the other person’s?

  • For d=.2, it’s 55.6%
  • For d=.5, it’s 63.8%
  • For d=.8, it’s 71.4%
  • For d=2, it’s 92.1%
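
Those percentages fall out of a closed-form relationship: if both groups are normally distributed with equal variance, the probability of superiority works out to Φ(d/√2), where Φ is the standard normal CDF. A quick Python sketch (the function name is mine) reproduces the numbers above:

```python
import math

def prob_superiority(d):
    """Chance that a random draw from the higher group beats a random
    draw from the lower group, for two equal-variance normal
    distributions separated by Cohen's d."""
    # The difference X - Y is normal with mean d and SD sqrt(2)
    # (in standardized units), so P(X > Y) = Phi(d / sqrt(2)).
    z = d / math.sqrt(2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for d in (0.2, 0.5, 0.8, 2.0):
    print(f"d={d}: {prob_superiority(d):.1%}")
```

Worth noticing how slowly that number climbs: even a “large” effect of d=.8 leaves nearly a 30% chance that a randomly chosen pair goes the “wrong” way.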

This is good to keep in mind, as Cohen’s d is not an overly intuitive statistic for most people. Visualizations are good to help see quickly what these differences might mean on the population level.

Not the most groundbreaking point, but one that seems to bear repeating!

A Twitter Parallax

As long as humans have been around and arguing with each other, there have always been disputes about the interpretation of events. During these disputes, I’d imagine that people wished we had ways of capturing events in real time, believing that this would eliminate disagreements. If everyone could work from a shared set of facts, then of course we would end disputes, right?

While I’m sure that’s what I would have thought if I’d lived 100 years ago, our recent age of ubiquitous cell phone cameras and real time Twitter updates has taught all of us that things are not so simple. Every time we see an example of this, I think of one of my favorite words: parallax. Defined as “the apparent displacement of an observed object due to a change in the position of the observer”, it reminds me that our perception of things sometimes depends not only on the object itself, but also on where you’re standing.

If this is true of physical objects, then of course emotional situations up the ante. In the best of circumstances human communication can be prone to difficulties, and differences in perception can complicate things enormously. While the promise of technology is often that it will improve communication, the reality is that it often just creates new opportunities for differences in perception.

I bring this up because I had a really interesting example of this in my personal life recently, when the same Twitter thread led to two really different conclusions.

In the middle of one of the many many Twitter controversies of the past few weeks, I noticed some rather high profile people reacting to a Twitter thread from someone I knew was an acquaintance of my brother. Their reactions were not kind, and she was generally getting kinda dragged. He and I had talked about the issue previously before I knew she had jumped in, so I texted him the thread as an example of what I considered a Bad Opinion.

Basically while opining on the issue of the day, this woman had tried to make point X, but in the process had (IMHO) completely minimized counterpoint Y to a comical extent. My opinion was shared by others, who were mocking her for it.

When my brother and I talked a few days later, I mentioned the incident, and was surprised to find he disagreed with me. He said he didn’t at all see that she had minimized point Y, and in fact had emphasized point Y with points Y1 and Y2. At this point I got confused… I hadn’t seen either of those points made. My mind spun a bit. My brother and I have been arguing for years, and I knew he wouldn’t make something like that up. I was in the car so I couldn’t check the Twitter thread, but I started to wonder how I had missed the points she had made. Had I been scrolling too fast? Had I been projecting? Had I jumped to conclusions? I admitted to my brother that it was possible I’d missed something, and agreed that if I had I’d misjudged his friend.

Later that night it was still bothering me, so I went back to my text and reread the Twitter thread. I read all 31 Tweets she had sent, and discovered that neither point Y1 nor Y2 was in there. Now I was really confused. Like I said, my brother is one of my oldest sparring partners. We’ve been arguing for decades, and at this point I know he would never fabricate a point like that. I also knew that he would have mentioned it if he’d seen it elsewhere. So what the heck had happened?

As I pondered this, I reflexively hit the “show more replies” button to see if I could find some of the points he had mentioned and found… there were 4 more Tweets in the thread. Apparently at some point after she had sent the one she labelled 31/31 and said “thanks for listening”, she had added on a few other points. I’m not sure, but I think that because of the time delay between the initial Tweets and the add ons, Twitter hid those Tweets from anyone who accessed the thread directly. Since she had indicated it was the end of the thread, no one reading it that way would have known to go looking for other Tweets. When I clarified this with my brother, he mentioned that when I’d sent him the thread he hadn’t actually clicked on it, he’d just gone directly to her Twitter feed. Since Twitter shows things in reverse chronological order, he had read her added-on points first, and then read everything else through that lens. No wonder we’d ended up with different opinions.

I was very struck by this whole thing, as it got me wondering how often this happens in our everyday lives. We believe we’re seeing the same thing due to the promises of technology, but the way it’s presented to us skews our reading. If my brother and I didn’t have years of good faith arguing behind us, I would likely not have been so curious about our different perceptions. If we weren’t both so interested in how information gets presented, we may not have cared or considered how our way of accessing the information had colored our subsequent reading of it. Our belief that we were both seeing the same thing might have actually impeded our communication rather than helped it.

I don’t have a good answer for how to get around this, but it’s good to keep in mind as more disputes are started and perpetuated online. While eliminating some pitfalls, technology does create new ones on a much grander scale. Just because you’re looking at the same thing doesn’t always mean you’re seeing the same thing.

Counting Terrorism and the Pitfalls of Open Source Databases

Terrorism is surging in the US, fueled by right-wing ideologies

Someone posted this rather eye catching story on Twitter recently, which came from a Quartz article back in August. I’ve blogged about how we classify terrorism and other Mass Casualty Incidents over the years, so I decided to click through to the story.

It came with two interesting graphs that I thought warranted a closer look. The first was a chart of all terror incidents (the bars) vs the fatalities in the US:

Now first things first: I always note immediately where the year starts. There’s a good reason this person chose to do 15 years and not 20, because including 9/11 in any breakdown throws the numbers all off. This chart peaks at less than 100 fatalities, and we know 2001 would have had 30 times that number.

Still, I was curious what definition of terrorism was being used, so I went to look at the source data they cited from the Global Terrorism Database. The first thing I noted when I got to the website is that data collection for incidents is open source. Interesting. Cases are added by individual data collectors, then reviewed by those who maintain the site. I immediately wondered exactly how long this had been going on, as it would make sense that more people added more incidents as the internet became more ubiquitous and in years where terrorism hit the news a lot.

Sure enough, on their FAQ page, they actually specifically address this (bolding mine):

Is there a methodological reason for the decline in the data between 1997 and 1998, and the increases since 2008 and 2012?

While efforts have been made to assure the continuity of the data from 1970 to the present, users should keep in mind that the data collection was done as events occurred up to 1997, retrospectively between 1998 and 2007, and again concurrently with the events after 2008. This distinction is important because some media sources have since become unavailable, hampering efforts to collect a complete census of terrorist attacks between 1998 and 2007. Moreover, since moving the ongoing collection of the GTD to the University of Maryland in the Spring of 2012, START staff have made significant improvements to the methodology that is used to compile the database. These changes, which are described both in the GTD codebook and in this START Discussion Point on The Benefits and Drawbacks of Methodological Advancements in Data Collection and Coding: Insights from the Global Terrorism Database (GTD), have improved the comprehensiveness of the database. Thus, users should note that differences in levels of attacks before and after January 1, 1998, before and after April 1, 2008, and before and after January 1, 2012 may be at least partially explained by differences in data collection; and researchers should adjust for these differences when modeling the data.

So the surge in incidents might be real, or it might be that they started collecting things more comprehensively, or a combination of both. This is no small matter, as out of the 366 incidents covered by the table above, 266 (72%) had no fatalities. 231 incidents (63%) had no fatalities AND no injuries. Incidents like that are going to be much harder to find records for unless they’re being captured in real time.
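
If you want to check counts like these yourself, the filtering is simple. Here’s a minimal sketch using a made-up mini-sample shaped like GTD rows; the real database records fatalities and injuries in fields named `nkill` and `nwound` (per my reading of the codebook), so swapping in the actual download should reproduce the 72%/63% figures:

```python
# Hypothetical stand-in for rows pulled from the GTD download;
# `nkill` = fatalities, `nwound` = injuries.
incidents = [
    {"nkill": 0, "nwound": 0},
    {"nkill": 0, "nwound": 3},
    {"nkill": 2, "nwound": 5},
    {"nkill": 0, "nwound": 0},
]

no_fatalities = [i for i in incidents if i["nkill"] == 0]
no_casualties = [i for i in incidents if i["nkill"] == 0 and i["nwound"] == 0]

print(f"{len(no_fatalities) / len(incidents):.0%} had no fatalities")
print(f"{len(no_casualties) / len(incidents):.0%} had no fatalities or injuries")
```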

The next graph they featured was this one, where they categorized incidents by perpetrator:

The original database contains a line for “perpetrator group”, which seems to speak loosely to motivation. Overall they had 20 different categories for 2017, and Quartz condensed them into the 4 above. I started to try to replicate what they did, but immediately got confused because the GTD lists 19 of the groups as “Unknown”, so Quartz had to reassign 9 of them to some other group. Here’s what you get just from the original database:

Keep in mind that these categories are open source, so differences in labeling may be due to different reviewers.

Now it’s possible that information got updated in the press but not in the database. It seems plausible that incidents might be added shortly after they occur, then not reviewed later when more facts were settled. For example, the Las Vegas shooter was counted under “anti-government extremists”, but we know that the FBI closed the case 6 months ago stating they never found a motive. In fact, the report concluded that he had a marked disinterest in political and religious beliefs, which explains his lack of manifesto or other explanation for his behavior. While anti-government views had been floated as a motive originally, that never panned out. Also worth noting, the FBI specifically concluded this incident did not meet their definition for terrorism.

Out of curiosity, I decided to take a look at just the groups that had an injury or fatality associated with their actions (29 out of the 65 listed for 2017):

If you want to look at what incident each thing is referring to, the GTD list is here. Glancing quickly, the one incident listed as explicitly right wing was Mitchell Adkins, who walked into a building and stabbed 3 people after asking them their political affiliation. The one anti-Republican incident was the attack on the Republican Congressional softball team.

I think overall I like the original database categories better than broad left or right wing categories, which do trend towards oversimplification. Additionally, when using crowd sourced information, you have to be careful to account for any biases in reporting. If the people reporting incidents are more likely to come from certain regions or to pay more attention to certain types of crimes, the database will reflect that.

To illustrate that point, I should note that 1970 is by FAR the worst year for terrorist incidents they have listed. Here’s their graph:

Now I have no idea if 1970 was really the worst year on record, if it got a lot of attention for being the first year they started this, or if there’s some quirk in the database, but that spike seems unlikely. From scanning through quickly, it looks like there are a lot of incidents that happened on the same day. That trend was also present in the current data, and there were a few entries I noted that looked like duplicates but could also have been two separate but similar incidents on the same day.

Overall though, I think comparing 1970 to 2017 shows an odd change in what we call terrorism. Many of the incidents listed in 1970 were done by people who specifically seemed to want to make a point about their group. In 2017, many of the incidents seemed to involve someone who wanted to be famous, and picked their targets based on whoever drew their ire. You can see this in the group names. In 2017 only one named group was responsible for a terrorist attack (the White Rabbit Militia one), whereas in 1970 there were at least a dozen groups with names like “New World Liberation Front” or “Armed Revolutionary Independence Movement”.

Overall, this change does make it much harder to figure out what ideological group terrorists belong to, as a large number of them seem to be specifically eschewing group identification. Combine that with the pitfalls of crowd sourcing, and changing definitions, and I’d say this report is somewhat inconclusive.

Reporting the High Water Mark

Another day, another weird practice to add to my GPD Lexicon.

About two weeks ago, a friend sent me that “People over 65 share more fake news on Facebook” study to ask me what I thought. As I was reviewing some of the articles about it, I noticed that they kept saying the sample size was 3,500 participants. As the reporting went on however, the articles clarified that not all of those 3,500 people were Facebook users, and that about half the sample opted out. Given that the whole premise of the study was that the researchers had looked at Facebook sharing behavior by asking people for access to their accounts, it seemed like that initial sample size wasn’t reflective of those used to obtain the main finding. I got curious how much this impacted the overall number, so I decided to go looking.

After doing some follow up with the actual paper, it appears that 2,771 of those people had Facebook to begin with, 1,331 people actually enrolled in the study, and 1,191 were able to link their Facebook account to the software the researchers needed. So basically the sample size the study was actually done on is about a third of the initially reported value.
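
Laid out as a funnel, the attrition is easier to see. A quick sketch using the numbers reported above:

```python
# Sample funnel as reported in the paper
funnel = [
    ("respondents surveyed", 3500),
    ("had Facebook",         2771),
    ("enrolled in study",    1331),
    ("linked their account", 1191),
]

start = funnel[0][1]
for stage, n in funnel:
    print(f"{stage:>20}: {n:5,d} ({n / start:.0%} of initial sample)")
```

Only about a third of the headline number makes it to the stage where the interesting data was actually collected.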

While this wasn’t necessarily deceptive, it did strike me as a bit odd. The 3,500 number is one of the least relevant numbers in that whole list. It’s useful to know that there might have been some selection bias going on with the folks who opted out, but that’s hard to see if you don’t report the final number.  Other than serving as a selection bias check though (which the authors did do), 63% of the participants had no link sharing data collected on them, and thus are irrelevant to the conclusions reported.  I assumed at first that reporters were getting this number from the authors, but it doesn’t seem like that’s the case.  The number 3,500 isn’t in the abstract. The press release uses the 1,300 number. From what I can tell, the 3,500 number is only mentioned by itself in the first data and methods section, before the results and “Facebook profile data” section clarify how the interesting part of the study was done. That’s where they clarify that 65% of the potential sample wasn’t eligible or opted out.

This way of reporting things wasn’t limited to a few outlets though, as even the New York Times went with the 3,500 number. Weirdly enough, the Guardian used the number 1,775, which I can’t find anywhere. Anyway, here’s my new definition:

Reporting the high water mark: A newspaper report about a study that uses the sample size of potential subjects the researchers started with, as opposed to the sample size for the study they subsequently report on.

I originally went looking for this sample size because I was curious how many people 65 and older were actually included in this study. Interestingly, I couldn’t find the raw number in the paper. This strikes me as important because if older people are online in smaller numbers than younger ones, the overall number of fake stories shared might still be larger among younger people.

I should note that I don’t actually think the study is wrong. When I went looking in the supplementary table, I noted that the authors mentioned that the most commonly shared type of fake news article was actually fake crime articles. At least in my social circle, I have almost always seen those shared by older people rather than younger ones.

Still, I would feel better if the relevant sample size were reported first, rather than the biggest number the researchers looked at throughout the study.

What I’m Reading: January 2019

Well my best read of the month was the draft manuscript of my brother’s upcoming book Addiction Nation: What the Opioid Crisis Reveals About Us. This book came out of an article he wrote about his own opioid addiction, which I blogged about here. I’m super proud of him for this book, so expect more mentions of it as the publication date draws closer. He’s asked me if I’d do some blogging with him about some of the research around this topic, so if there’s anything in particular on that topic you’d be interested in, please let me know.

A recent bout of Wikipedia reading led me to this really interesting visual about the Supreme Court.  My dad had mentioned recently the idea that there used to be a “Catholic seat” on the Supreme Court, before the more recent trend of Catholics dominating SCOTUS. Turns out there’s a visual that shows how right he was:

So basically for almost 60 years there was only one Catholic justice, and they seemed to be nominated to replace each other. Then in the late 80s that all changed, and by the late 2000s Catholics would dominate the court. As it stands today the breakdown is 5 Catholics, 3 Jews, and 1 Episcopalian.

I stumbled across an interesting paper a few weeks ago called “Metacognitive Failure as a Feature of Those Holding Radical Beliefs” that found that people who held “radical” beliefs were more likely to be overconfident/less aware of their errors in neutral areas as well. I haven’t read through the full study, but the idea that radical beliefs are due to generalized overconfidence as opposed to attachment to a specific idea is intriguing.

As someone who was raised with a good dose of 90s era environmentalism, I thought this Slate Star Codex post about “What Happened to 90s Environmentalism?” was fascinating. Turns out some of the stuff we were warned about was solved, some was overhyped, and some… just stopped being talked about.

On a totally different note, I’ve decided to do a cookbook challenge this year, and am cooking my way through the book 12 Months of Monastery Soups. I sort of started blogging about it, but I’m not sure if I like that format or not. If I end up ditching that, then I’m still going to post pictures on my heretofore neglected Instagram account.

Updates on Mortality Rates and the Impact of Drug Deaths

A couple of years ago now, there was a lot of hubbub around a paper about mortality rates among white Americans. This paper purported to show that mortality for middle aged white people in the US was not decreasing (as it was for other countries/races/ethnicities), but was actually increasing.

Andrew Gelman and others challenged this idea, and noted that some of the increase in mortality was actually a cohort effect. In other words, mortality was up, but so was the average age of a “45-54 year old”. After adjusting for this, their work suggested that actually it was white middle aged women in the south who were seeing an increase in mortality:
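
If the cohort effect isn’t intuitive, a toy example with entirely made-up rates shows the mechanism: hold the age-specific death rates fixed, shift the age mix within the 45-54 band upward, and the band’s crude rate rises anyway.

```python
# Hypothetical deaths per 1,000 at each age; held constant over time
rates = {45: 4.0, 50: 5.0, 54: 6.0}

def crude_rate(age_counts):
    """Crude death rate for a band, given counts of people at each age."""
    total = sum(age_counts.values())
    deaths = sum(rates[age] * n for age, n in age_counts.items())
    return deaths / total

younger_band = {45: 60, 50: 30, 54: 10}  # band skews young
older_band   = {45: 10, 50: 30, 54: 60}  # same band later, skews old

print(crude_rate(younger_band))  # 4.5
print(crude_rate(older_band))    # 5.5, with identical age-specific rates
```

Same age-specific mortality, different crude rates, which is exactly why the age-adjusted reanalysis told a different story than the original paper.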

In this article for Slate, they published the state by state data to make this even clearer:

In other words, there are trends happening, but they’re complicated and not easy to generalize.

One of the big questions that came up when this work was originally discussed was how much “despair deaths” like opioid overdoses or suicides were driving this change.

In 2017, a paper was published that showed that this was likely only partially true. Suicide and alcohol related deaths had remained relatively stable for white people, but drug deaths had risen:

Now, there appears to be a new paper coming out that shows there may be elevated mortality in even earlier age groups. It appears only the abstract is up at the moment, but the initial reporting shows that there may be some increase for Gen X (current 38-45 year olds) and some Gen Y (27-37 year olds). They have reportedly found elevated mortality patterns among white men and women in those age groups, partially driven by drug overdoses and alcohol poisonings.

From the abstract, the generations with elevated mortality were:

    • Non-Hispanic Blacks and Hispanics: Baby Boomers
    • Non-Hispanic White females: late-Gen Xers and early-Gen Yers
    • Non-Hispanic White males: Baby Boomers, late-Gen Xers, and early-Gen Yers.

Partial drivers for each group:

  • Baby Boomers: drug poisoning, suicide, external causes, chronic obstructive pulmonary disease and HIV/AIDS for all race and gender groups affected.
  • Late-Gen Xers and early-Gen Yers: drug poisonings and alcohol-related diseases for non-Hispanic Whites.

And finally, one nerve-wracking sentence:

Differential patterns of drug poisoning-related mortality play an important role in the racial/ethnic disparities in these mortality patterns.

It remains to be seen if this paper will have some of the cohort effect problems that have plagued other analyses, but the drug poisoning death issue seems to be a common feature. We’ll have to wait to see what the long term outcomes will be, but here’s an interesting visualization from Senator Mike Lee’s website:

Not a pretty picture.

GPD Most Popular Posts of 2018

Well here we are, the end of 2018. I always get a kick out of seeing what my most popular posts are each year, as they are never what I expect them to be, so I always enjoy my year end roundup.

My most popular posts this year continue to be ones I think people Google for school assignments (6 Examples of Correlation/Causation Confusion and 5 Examples of Bimodal Distributions), and the Immigration, Poverty and Gumballs post. Every time immigration debates hit the news, Numbers USA reposts the video that sparked that post and my traffic goes with it. At this point I find that utterly bizarre, as the video is 8 years old and the numbers he quotes are over a decade old, but people still contact me wanting to fight about them. Sigh.

Okay, now on to 2018! Here were my most popular posts written in 2018:

  1. 5 Things About that “Republicans are More Attractive than Democrats” Study I was surprised to see this post topped my list, until I realized it’s actually just been an extremely steady performer, garnering an equal number of hits every month since it went live. Apparently partisan attractiveness is a timeless concern.
  2. Vitamin D Deficiency: A Supplement Story Yet another one that surprised me. Moderately popular when I first posted it, this has become increasingly popular as we enter winter months.
  3. What I’m Reading: September 2018 My link fests don’t often make my most read list, but my discussion of the tipping point paper (i.e. how many people have to advocate for a thing before the minority can convince the majority) got this one some traffic from the AVI at Chicago Boyz.
  4. 5 Things About the GLAAD Accelerating Acceptance Report This was the post I personally shared the most this year, with an odd number of people I knew actually bringing it up to me. Yay self-promotion!
  5. GPD Lexicon I was pleased to see my new GPD Lexicon page made the list, and I really enjoy putting these together. I think my 2019 resolution is to try to figure out how to put some of these into an ebook. Writing them makes me happy.
  6. Tick Season 2018 I think this post got put up on an exterminator website as a good example of why you needed their spraying services. Glad I could help?
  7. Death Comes for the Appliance This actually may have been my favorite post of the year. I learned so much weird trivia and now have way too many talking points when people bring up appliance lifespans. Thanks to all who participated in the comments.
  8. 5 Things About Fertility Rates I still think about this post every time someone mentions fertility rates. The trends are fascinating to me.
  9. Tidal Statistics  I was very gratified to find people linking to this one when encountering their own statistics problems. Easier to fight the problem when you can actually name it.
  10. The Mandela Effect: Group False Memory This entire concept still makes me laugh and think, all at the same time.

Well that’s a wrap folks, see you in the new year!