Short Takes: Gerrymandering, Effect Sizes, Race Times and More

I seem to have a lot of articles piling up that I have something to say about, but not enough for a full post. Here’s 4 short takes on 4 current items:

Did You Hear the One About the Hungry Judges?
The AVI sent me an article this week about a hungry judge study I’ve heard referenced multiple times in the context of willpower and food articles. Basically, the study showed that judges ruled in favor of prisoners requesting parole 65% of the time at the beginning of the day and 0% of the time right before lunch. The common interpretation is that we are so driven by biological forces that when they’re compromised, they override our higher-order functioning. The article rounds up some of the criticisms of the paper, and makes a few of its own…namely that an effect size that large could never have gone unnoticed. It’s another good example of “this psychological effect is so subtle we needed research to tease it out, but so large that it noticeably impacts everything we do” type research, and that should always raise an eyebrow. Statistically, the difference in rulings is as profound as the difference between male and female height. The point is, everyone would know this already if it were true. So what happened here? Well, this PNAS paper covers it nicely, but here’s the short version: 1) the study was done in Israel, 2) the court in question does parole hearings by prison, three prisons a day with a break between each, 3) prisoners who have legal counsel go first, 4) lawyers often represent multiple people, and they choose the order of their own cases, and 5) the original authors lumped “case deferred” and “parole denied” together as one category. So basically the cases are roughly ordered from best to worst up front, and each break starts the process over again. Kinda makes the results look a little less impressive, huh?
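To put a rough number on that height comparison: one common way to express the gap between two proportions as an effect size is Cohen's h, which uses an arcsine transformation. This is my own back-of-the-envelope sketch using the 65%/0% figures above, not the article's exact calculation:

```python
import math

def cohens_h(p1, p2):
    """Effect size for the difference between two proportions (Cohen's h)."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# Favorable parole rulings: ~65% at the start of a session vs ~0% right before a break
h = cohens_h(0.65, 0.0)
print(round(h, 2))  # ~1.88, the same ballpark as the d of roughly 1.7-2
                    # usually quoted for the male-female height gap
```

An effect that size in a behavior everyone sees daily really would be hard to miss.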

On Inter-Country Generalization and Street Harassment
I can’t remember who suggested it, but I recently saw someone propose that biology or nutrition papers in PubMed or other journal listings should have to include a little icon/picture at the top indicating what animal the study was done on. They were attempting to combat the whole “Chemical X causes cancer!” hoopla that arises when we’re overdosing mice on something. I would like to suggest we do the same thing with countries, maybe using their flags or something. Much like with the study above, I think it would tip people off that we can’t assume things work the same way they do in the US or whatever country you hail from. I was thinking about that when I saw this article from Slate with the headline “Do Women Like Being Sexually Harassed? Men in a New Survey Say Yes”. The survey has some disturbing statistics about how often men admit to harassing or groping women on the street (31-64%) and why they do it (90% say “it’s fun”), but it’s important to note it surveyed men exclusively in the Middle East and North Africa. Among the 4 countries surveyed, results and attitudes varied quite a bit, making it pretty certain that there’s a lot of cultural variability at play here. While I thought the country-neutral headline was a little misleading on this point, the author gets some points for illustrating the story with signs (in Arabic) from a street harassment protest in Cairo. I only hope other stories reporting surveys from other countries do the same.

Gerrymandering Update: Independent Commissions May Not be That Great (or Computer Models Need More Validating)
In my last post about gerrymandering, I mentioned that some computer models showed independent commissions doing a much better job of redrawing districts than state legislatures. Now yet another computer model disputes this idea, showing that they don’t. To be honest, I didn’t read the working paper here and I’m a little unclear about what they compared to what, but it may lend credibility to the Assistant Village Idiot’s comment that those drawing district maps may be grouping together similar types of people rather than focusing on political party. That’s the sort of thing that humans of all sorts would do naturally and computers would call biased. Clearly we need a few more checks here.

Runner Update: They’re still slow and my treadmill is wrong
As an update to my marathon times post, I recently got sent this website’s report showing that US runners at all distances are getting slower. They sliced and diced the data a bit and found some interesting patterns: men are slowing down more than women, and slower runners are getting even slower. However, even the fastest runners have slowed down about 10% in the last two decades. They pose a few possible reasons: increased obesity in the general population, elite runners avoiding races due to the large numbers of slower runners, or runners in general leaving to do ultras/trail races/other activities. On an only tangentially related plus side, I thought I was seriously slowing down in my running until I discovered that my treadmill was incorrectly calibrated to the tune of over 2 min/mile.  Yay for data errors in the right direction.
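To see how big a deal a 2 min/mile calibration error is, here’s a quick sketch (the speeds below are hypothetical round numbers, not my treadmill’s actual readings):

```python
def pace_min_per_mile(speed_mph):
    """Convert a speed in miles per hour to a pace in minutes per mile."""
    return 60.0 / speed_mph

# Hypothetical example: the display claims 6.0 mph, but suppose the belt
# is actually moving at 7.5 mph because of bad calibration.
displayed_pace = pace_min_per_mile(6.0)  # 10.0 min/mile shown
actual_pace = pace_min_per_mile(7.5)     # 8.0 min/mile actually run
print(displayed_pace - actual_pace)      # 2.0 min/mile: faster than you think
```

Because speed and pace are inversely related, the same belt-speed error produces a bigger pace error the slower you run, which is some consolation.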

Does race or profession affect sleep?

I’ve commented before on my skepticism about self reported sleep studies.

Two recent studies on sleep piqued my interest, and while my original criticisms hold, there was yet another issue I wanted to bring up.

The first was from a few months back at the NYT blog, commenting on the most sleep deprived professions.
The second is from Time magazine, and talks about sleep differences among the races.

My gripe with both studies is the extremely small difference between the rankings.

In the professions study (sponsored by Sleepy’s, btw), the most sleep deprived profession (home health aide) clocks in at 6h57m.  The most well rested is logging, at 7h20m.   On a self reported survey, how significant is 23 minutes?

From the study on races:

Overall, the researchers found, blacks, Hispanics and Asians slept less than whites. Blacks got 6.8 hours of sleep a night on average, compared with 6.9 hours for Hispanics and Asians, and 7.4 hours a night for whites. 

Here we see the same thing….there’s a 6 minute difference between the averages for blacks and for Hispanics/Asians.   Whites get 30 minutes more than Hispanics/Asians and 36 minutes more than blacks.

I question the significance of this, since I can’t remember whether I went to bed at 9:00 or 9:30 last night, and would have to guess if someone asked me.  Both surveys state the data was self reported, and thus there’s a good chance the true averages are even closer together than they appear.

Additionally, these differences come nowhere near the levels of deprivation used in the studies showing the dangers of sleep loss.

For example, in this study about sleep and overeating, subjects were woken up 2/3rds of the way through their normal sleep time.  For nearly everyone above, that would mean losing over 2 hours.  The studies on heart disease were only linked with chronic insomnia.  Cancer and diabetes are both more common in shift workers, but as someone who worked overnights for 3 years, I can tell you that’s not the same as waking up 30 minutes early.
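For scale, here’s what losing the last third of the night works out to for the groups in the Time study:

```python
# Average nightly sleep in hours, from the Time article quoted above
averages = {"blacks": 6.8, "hispanics/asians": 6.9, "whites": 7.4}

# Being woken 2/3 of the way through the night cuts off the final third
for group, hours in averages.items():
    lost_minutes = hours / 3 * 60
    print(f"{group}: ~{lost_minutes:.0f} minutes of sleep lost")
```

That’s roughly 136 to 148 minutes for every group, which dwarfs the 6-to-36-minute gaps between the groups themselves.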

Kaiser Fung has a great post about the popularizing of tiny effects that will be a hit if you didn’t like Freakonomics.

When in Doubt, Blame the Journalist

Within minutes of hitting “publish post” on my mission statement, I found an article that reminded me of one of my biggest pet peeves when it comes to data/science/studies of all types.  The headline read “Keeping Your Name? Midwesterners Are Judging You”.  My ears (eyes?) perked up at this headline, as I am among those women who declined to change her name post-nuptial.  Despite knowing that Jezebel is not often the best place for unbiased reporting, I gave it a read.  

The article linked to a much more nuanced article here, but the basics are as follows: students at a small midwestern college feel that women who don’t change their last names when they get married are less committed to their relationships than those who do.  This was interesting in part because the number of people who felt negatively about this quadrupled between 1990 and 2006.  
For the personal reasons listed above, I find this interesting.  However, when you look at the numbers (2.7% of 256 and 10.1% of 246 which Jezebel did include) and do a little math, you realize that this “jump” is a difference of 18 people.  
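A quick back-of-the-envelope check of that “difference of 18 people”:

```python
# Respondents who viewed name-keeping negatively, using the figures Jezebel reported
negative_1990 = round(0.027 * 256)  # 2.7% of 256 respondents -> about 7 people
negative_2006 = round(0.101 * 246)  # 10.1% of 246 respondents -> about 25 people
print(negative_2006 - negative_1990)  # 18 -- the entire "quadrupling"
```

A percentage can quadruple on a base this small without telling you much about the population at large.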
A few things to consider about this:
  1. I couldn’t find that this was published anywhere.  It seemed to be a sort of “FYI for the headlines”.
  2. Apparently there’s no data on whether or not this perception is true.  My bias would be that it’s not, but I couldn’t find data actually saying if the perception was correct.  This happens in many “perception” studies….they quote percentages who believe something with the implication that a certain belief is wrong without ever proving it.
  3. There wasn’t a gender breakdown of who those 18 people were.  If most were female, then isn’t their perception likely to be based on experience?  As in “well, if I didn’t do it, it would be because I wasn’t committed”?  That’s not judgement of others, that’s judgement of self.
  4. Have any of their professors (or TV shows, or other media sources) recently made disparaging remarks about this?  18 people who all very well might know each other (the university surveyed was under 1000 students) could easily be influenced in their answer  by even one strong source.
  5. As college students, presumably very few of those polled were actually married.  From my experience in college, I would conjecture that this is a phase of life during which people are very idealistic about their future mates without having many real experiences to back it up.  I put much more stock in how people who are actually married gauge commitment than in what someone who’s never walked down the aisle thinks.
All that being said, it looked like the study authors were careful to address several of these points (especially the “this is not a representative sample” point).  It was only in translation to the headlines that the more dubious conclusions were drawn.  
Scientists have very little incentive to exaggerate the meaning of their findings; they are in a profession where that could be very damaging.  Reporters for both old and new media have EVERY incentive to spin things into good headlines.  Remember that.