5 Definitions You Need to Remember When Discussing Mass Shootings This Week

In the wake of the Orlando tragedy of last week, the national conversation rapidly turned to what we could do to prevent situations like this in the future. I’ve heard/seen a lot of commentary on this, and I get concerned at how often statistics get thrown out without a clear explanation of what the numbers actually do or don’t say.  I wanted to review a few of the common issues I’m seeing, and to clarify what some of the definitions are. While I obviously have my own biases, my goal is NOT to endorse one viewpoint or another here. My goal is to make sure everyone knows what everyone else is talking about when they throw numbers out there.

Got it? Let’s go!

  1. Base rate Okay, this is obviously one of my pet issues right now, but this is a great example of a time you have to keep the concept of a base rate in mind. In the wake of mass shootings, many people propose various ideas that will help us predict who future mass shooters might be. Vox has a great article here about why most attempts to do this would be totally futile. Basically, for every mass shooter in this country, there are millions and millions of non-shooters. Even a detection algorithm that makes the right call 99.999% of the time would yield a couple hundred false positives (innocent people incorrectly identified) for every true positive. Read my post on base rates here for the math, but trust me, this is an issue.
  2. Mass Shooting I’ve seen the claim a couple of places that we have about one mass shooting per day in this country, and I’ve also seen the claim that we had 4-6 last year.  This Mother Jones article does an excellent deep dive on the statistic, but basically it comes down to circumstances. Most people agree that “mass” refers to 3 or 4 people killed at one time, but the precipitating events can be quite different. There are basically three types of mass shootings: 1. Domestic/family violence 2. Shootings that occur during/around other criminal activity 3. Indiscriminate public shootings. If you count all 3 together, you get the “one per day” number. If you only count #3, you get 4-6 per year. While obviously all of these events are horrible, the methods  of addressing each are going to be different. At the very least, it’s good to know when we’re talking about one and when we’re talking about ALL of them.
  3. Gun Deaths Even more common than the confusion about the term “mass shooting” is confusion about the term “gun deaths”. This pops up so frequently that I’ve been posting about it almost as long as I’ve been blogging, and I have made a couple of graphs (here and here) that have come in handy in some Twitter debates. The short version is that anything marked “gun deaths” almost always includes suicides and accidents. Suicide is the biggest contributor to this category, and any numbers or graphs generated from “gun death” data tend to look really different when these are taken out.
  4. Locations This is a somewhat minor issue compared to the others, but take care when someone mentions “school shootings” or “attacks on American soil”. As I covered here, sometimes people use very literal definitions of locations to include situations you wouldn’t normally think of.
  5. Gun violence Okay, this one should be obvious, but gun violence only refers to, um, gun violence. In the wake of a tragedy like Orlando, I’ve seen the words “gun violence” and “terrorism” tossed about as though they are interchangeable.  When you state it clearly, it’s obvious that’s not true, but in the heat of the moment it’s an easy point to conflate. In one of my guns and graphs posts, I discovered that states with higher rates of gun murders also tend to have higher rates of non-gun murders with r=.6 or so. In most states gun murders are higher than non-gun murders, but it’s important to remember other types of violence exist as well….especially if we’re talking about terrorism.
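The base-rate arithmetic in point 1 can be sketched in a few lines of Python. All of the numbers below are illustrative assumptions (a rough US population figure and a hypothetical count of actual future shooters), not official statistics:

```python
# A minimal sketch of the base-rate problem: even a near-perfect
# detector drowns a handful of true positives in false positives.
population = 320_000_000   # roughly the US population (assumption)
true_shooters = 12         # hypothetical number of future mass shooters
accuracy = 0.99999         # the algorithm is right 99.999% of the time

false_positive_rate = 1 - accuracy
# Innocent people incorrectly flagged:
false_positives = (population - true_shooters) * false_positive_rate
# Assume, generously, that every true shooter is correctly flagged:
true_positives = true_shooters

print(round(false_positives))                   # ~3,200 innocent people flagged
print(round(false_positives / true_positives))  # ~267 false positives per true positive
```

With these made-up inputs, the "couple hundred false positives for every true positive" figure falls right out of the division; the exact ratio shifts with the assumptions, but the imbalance doesn't.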

One definition I didn’t cover here is the word “terrorism”. I’ve been looking for a while, and I’m not sure I’ve found a great consensus on what constitutes terrorism and what doesn’t. Up until a few years ago, for example, the FBI ranked “eco-terrorism” as a major threat (and occasionally the number one domestic threat) to the US, despite the fact that most of these incidents caused property damage rather than killing people.

Regardless of political stance, I always think it’s important to understand the context of quoted numbers and what they do or don’t say. Stay safe out there.

Cool kids and linguistic pragmatism

Yesterday a facebook friend of mine put up an angry post regarding misuse of the word “decimate”.  His chief complaint was that people used it as a synonym for destroy, when really it meant a reduction of 10% or so.  That cleared up the “deci” part of the word for me, but I was surprised that the proper definition was so narrow….so of course I went to dictionary.com to check his facts.

Turns out the “one in ten” definition is specifically marked as obsolete.  The current accepted definition is merely “to destroy a great number of”.  So basically it can’t be used to sub in for obliterate, but the 10% definition was only valid through the year 1600 or so.  Sigh.

I’m not a big fan of people who try to get too cute when picking on the language of others.  While I certainly am irritated by some of the more obvious errors in language (irregardless makes me cringe, and please don’t mix up “less” and “fewer” in my presence), I dislike when people go back several hundred verbal years and then attempt to claim that’s the “proper” way of doing things.  This annoys me enough that my brother bought me this book a few years ago, just to help me out.  I believe language will always be morphing to a certain extent, and while rules are good we just need to accept that all language is pretty much arbitrary.  Thus, I refer to myself as a linguistic pragmatist.  Adhere to the rules, but accept that sometimes society just moves on.

Why am I bringing this up?  Well, after going through that internal rant, I found it very interesting that this study is being reported with the headline “Popular kids who tortured you in high school are now rich”.

Basically, researchers assessed how popular kids were in high school, based on how many “friendship nominations” each student received, and found that those in the top 20% made 10% more money 40 years later than those in the bottom 20%.

Now I think this makes a certain amount of sense.  While the outcast-nerd-makes-good story is appealing, it stands to reason that many of the least popular kids in high school might be unpopular because of real issues with social skills that hurt them later in life (to note, social skill impairment is a co-morbidity with all sorts of things that could make this worse….ADHD, depression, etc).  Conversely of course, those with more friends probably have skills that help them maintain networks later.  Basically, I think this study tells us that the number of friends you have in high school isn’t totally random.

My issue with the reporting/reading of this study is in the semantics.  I think there’s a disconnect between our common interpretation of “popular in high school” and the actual definition of “popular in high school”.  The researchers in this study weren’t assessing the kids other kids aspired to be; they were assessing the kids who actually had lots of friends and were well liked.  While the classic football player who beats up kids in the locker room may get referred to as a popular kid, it’s likely he would not have had many people naming him as a friend on a survey.  So basically, the study had a built-in control for those kids who were temporarily at the top of the social ladder but lacked actual getting-along-with-people skills.  I had an incredibly small high school class (<30) and I could name several kids who fell in the "perceived popular" category but not the "actually popular" category.

All this to come back to my original point.  Words mean different things depending on context, and this should always be taken into account when assessing research and reading the subsequent reports.  It’s not bad data, just a different set of definitions.

Growth charts and tiny babies

This is another post that reflects my current life situation, but it highlighted some pretty interesting issues with data tables.

This issue is particularly interesting to me because I delivered via unplanned/urgent c-section, in part because of some abnormal measurements found during a routine ultrasound.  We had to have quite a few follow up consults and testing (among other things, they actually had to assess for achondroplasia – better known as the major cause of dwarfism)*.

Given this, my mother thought I’d find this Wall Street Journal article on baby growth charts interesting.  Essentially, baby growth charts were set several decades ago based on a population that’s different from what we have now.  The CDC does not want to readjust the charts, as doing so would make obesity look more normal than they believe it should, and this is causing a situation where a high number of children measure “off the charts”.

It’s an interesting situation when you realize that 95th percentile doesn’t actually mean “larger than 95% of children of the same age” but rather “larger than 95% of children the same age 40 years ago”.
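A quick sketch of what that distinction can do to a single number. The means and standard deviations below are entirely hypothetical (chosen just to illustrate a reference population that has drifted heavier over time), and I’m treating both populations as normal for simplicity:

```python
import math

def normal_cdf(x, mean, sd):
    """Fraction of a normal distribution falling below x."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Hypothetical numbers for illustration only: suppose 12-month-old
# weights were N(10.0 kg, 1.2) when the chart was drawn, but today's
# babies run heavier, say N(10.6 kg, 1.3).
old_mean, old_sd = 10.0, 1.2
new_mean, new_sd = 10.6, 1.3

weight = 12.0  # one baby's weight in kg

chart_percentile = 100 * normal_cdf(weight, old_mean, old_sd)   # what the chart reports
actual_percentile = 100 * normal_cdf(weight, new_mean, new_sd)  # vs. today's babies

print(f"chart: {chart_percentile:.0f}th, actual: {actual_percentile:.0f}th")
# chart: 95th, actual: 86th
```

Same baby, same scale, roughly a ten-percentile gap purely from the age of the reference population.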

Additionally, it also points out that the CDC growth chart is based largely on formula-fed babies, who grow slightly differently from breast-fed babies.  So at the same time Mayor Bloomberg is pushing breastfeeding, doctors are potentially telling parents their children need formula to speed their growth up to match a chart that only tracks where they would be if they had been formula fed to begin with (this is why state-mandated health policy drives me nuts so often….you solve one aspect while leaving several causes unaddressed).

As the availability of testing goes up, we have to be particularly vigilant to make sure our standards charts keep up as well.  Otherwise we routinize unnecessary testing and freak out new parents.  And from personal experience, I can say that’s just not nice.

*It was ruled unlikely, though apparently we can’t get a definitive no until he actually starts growing, or not, as the case may be.  There’s no genetic history of it in my family or my husband’s, though we are both on the short side.  In this case, our being short is actually a positive….it means the abnormalities are more likely natural variations.  Our genetic consult doctor was hilariously terrible though….she suggested that if we wanted more information about the condition we watch the reality TV show about it (Little People, Big World).  Then she said it was unlikely, but maybe we should still watch the show.  She ended it all with a comment about how it was never good when genetics doctors had too much to say, so we should be happy she wasn’t talking too much.  I don’t think she was very self-aware.

International data – beware the self reporting

Maybe it’s just because the Olympics are on, but I’ve run into a few interesting international statistics lately that gave me pause.

The first was regarding infant mortality.  After Aaron Sorkin’s new show The Newsroom incorrectly reported that the US was 178th in infant mortality (really, you think there are 177 countries you’d rather give birth in?), I went looking for the infant mortality listings across the world.  The US does not typically do very well compared with other industrialized countries.
There are a few interesting reasons for that….we have a much larger population than most of the countries that beat us, and it’s spread out over a much larger area.  Our care across areas/populations tends to be more uneven; states vary wildly on issues like access, health insurance, prenatal care, etc.  Our records, however, tend to be meticulous….there is very little doubt that we capture nearly all infant mortality that actually occurs.  This combination can put the US at a huge disadvantage in these statistics (10-30% according to the best published studies).
This raises the point of why Cuba tends to beat us.  Now, realistically speaking, if you or someone you love had to give birth, would you seriously pick Cuba over the US?  Would anybody?  And yet they look safer given the data….which is all self-reported.  I have no problems believing that Singapore outranks us, but I’m skeptical of any country that might have an agenda.  Worldwide, there is actually very little consensus on what counts as a “live birth”, and the US tends to use the “any sign of life” definition.
On the other end of the spectrum, I saw this piece recently on gun control.  I’ve covered misleading gun stats before (suicides are often combined with homicides to get “death by gun violence” numbers).  One of the interesting facts the article above points out is that internationally, gun deaths are only counted when it’s civilian-on-civilian violence.  This is certainly fine in the US…I would think we wouldn’t want to count every time the police had to open fire, but in countries with, um, more questionable police tactics, this could cause some skewing (Syria was cited as one such example).
Data is hard enough to pin down when you know the sources have no vested interest in misleading you….international rankings will never be free from such bias.

What is STEM anyway?

I’ve been trying to work on a post about some further research on women in STEM fields, and I keep getting bogged down in definitions.  I am currently headed down the rabbit hole of what a “STEM job” actually is.

I found out some interesting things.  According to this report, my job doesn’t count as a STEM job, despite the fact that I work with nothing but math and science (alright, and some psych).  It’s not the psych part that excludes me, however; it’s actually that I work in healthcare.  Healthcare, apparently, is excluded completely.

So if I were performing my same job, with the same qualifications, in a different field, I’d have a STEM job.  Since I report in to a hospital however, I don’t have one.

Your doctor does not have a STEM job.  Neither does your pharmacist, dentist, nurse, or anyone who teaches anything on any level.  Apparently if you run stats for the Red Sox, you’re in a STEM job, but do the same thing for sick people, and it doesn’t count.

Fascinating.

Spanking and Mental Trauma

The headline reads “Spanking Linked to Mental Illness”, and I was immediately intrigued.  Spanking, generally, is a very hard thing to study, as it is so often correlated with other things.  Physical punishment of children is often linked to frustrated and under-resourced parents, cultural norms that can be positive or negative, and even immigration status.

Curious how the study authors controlled for such things, but assured by the article that they had, I flipped over to the study itself.  It didn’t take long for me to realize this was yet another example of bad journalism mucking about with a half decent study.

The article starts like this:

Although the American Academy of Pediatrics (AAP) strongly discourages spanking, at least half of parents admit to physically punishing their children. Some research suggests that as many as 70-90 percent of mothers have resorted to spanking at one time or another. A new study published in the journal Pediatrics may cause parents to think more carefully before laying a hand on their little ones.

However, the study states:

Physical punishment was assessed with the question, “As a child how often were you ever pushed, grabbed, shoved, slapped or hit by your parents or any adult living in your house?” Respondents who reported an answer of “sometimes” or greater to this event were considered as having experienced harsh physical punishment. The term harsh physical punishment was used for this study because the measure includes acts of physical force beyond slapping, which some may consider more severe than “customary” physical punishment (ie, spanking).

So the study specifically excluded “customary” physical punishment when it assessed the effects on future mental illness….which pretty much completely contradicts the headline.  I also doubt this is what 70-90% of mothers are admitting to when they spank “at one time or another”.

Irresponsible.

It’s all (culturally) relative

Last week I put up a post regarding a study on sexism levels in men whose wives stay at home.  I argued that due to the diversity of that group of men, and the variety of reasons a woman might stay home, this study was essentially meaningless.

Another issue came up in the comments section that I wanted to touch on: cultural relevance of data.

Most studies that get press here in the US are from the US, performed on American subjects.  This is sketchy business.

In the study about stay-at-home moms, mothers who worked part time were lumped in with the stay-at-home mothers.  Interestingly, in the Netherlands, this group would actually be 90% of the women.  Does that mean that nearly every married Dutch man is more likely to be sexist?  Or does it mean that part-time work has different value in different cultures?

I took a look around for some other examples, and found that in China, many women see working as part of a newfound freedom.  At a conference I attended a few months ago, I talked to a man from Shanghai who mentioned that his wife went back to work because she couldn’t have handled trying to fight off the two grandmothers, both of whom wanted to watch the child.  Due to the one-child policy, this was the only chance they would get to have a grandbaby.  In many ways, it was actually the hierarchical/patriarchal culture there that pushed his wife to go back to work, as opposed to having her stay home.

As the world continues to flatten out, and as America continues to welcome new immigrants, we must be conscious of who studies are actually looking at and how generalizable the results are.  In the sexism study, even the authors admitted their findings were meant to be a commentary on the US only….but it should raise some questions that they seemed to be chasing after a structure that doesn’t exist in some very liberal countries.

Something to consider, depending on the goal of the study.