Are law schools liable for misleading statistics?

An interesting snippet from over at the Volokh Conspiracy, where former students sued their law school for publishing misleading statistics.

The court ruled that the salary statistics published by the school were indeed misleading, but in the end caveat emptor prevailed.  Apparently the school had published average salary data, but only for those students who actually got jobs.  In the court’s words:

….even though Plaintiffs did not know the truth of how many graduates were used to calculate the average salary, at the very least, it is clear that the Employment Report has competing representations of truth. With red flags waiving and cautionary bells ringing, an ordinary prudent person would not have relied on the statistics to decide to spend $100,000 or more.

I do love legal language at times, and I was fairly amused by the phrase “competing representations of truth”. While in this case it was clear-cut what information would have been most useful to the consumer, it’s often unclear which statistical breakdown best represents “actual reality”.  I did think the court was perhaps giving the public too much credit, though, when it cited what an “ordinary prudent” person would do (or is it just that not many prudent people exist?).

I’ve been reading Tom Naughton’s blog quite a bit lately, and he often quotes his college physics professor’s advice to his students.  It’s a good quote, one that I think should be taught to every student freshman year of high school.  In fact, it should have been used in this court decision:  “Learn math.  Math is how you know when they’re lying to you.”

Does race or profession affect sleep?

I’ve commented before on my skepticism about self-reported sleep studies.

Two recent studies on sleep piqued my interest, and while my original criticisms hold, there was yet another issue I wanted to bring up.

The first was from a few months back at the NYT blog, commenting on the most sleep-deprived professions.
The second is from Time magazine, and talks about sleep differences among the races.

My gripe with both studies is how small the differences behind the rankings actually are.

In the professions study (sponsored by Sleepy’s, btw), the most sleep-deprived profession (home health aide) clocks in at 6h57m.  The best rested are loggers, at 7h20m.  On a self-reported survey, how significant is a 23-minute difference?

From the study on races:

Overall, the researchers found, blacks, Hispanics and Asians slept less than whites. Blacks got 6.8 hours of sleep a night on average, compared with 6.9 hours for Hispanics and Asians, and 7.4 hours a night for whites. 

Here we see the same thing….there’s a 6-minute difference between blacks and Hispanics/Asians.  Whites get 30 minutes more than Hispanics/Asians and 36 minutes more than blacks.

I question the significance of this, since I can’t remember whether I went to bed at 9:00 or 9:30 last night and would have to guess if someone asked me.  Both surveys state the data was self-reported, so the true averages could easily be even closer together than the published numbers suggest.
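
To make that concrete, here’s a quick, purely hypothetical sketch (the sample size, the 7h10m true average, and the rounding habits are all made-up numbers, not anything taken from either survey) of how a reporting quirk alone can create a gap as large as the ones being ranked: two groups that truly sleep the same amount, where one group simply tends to round its answers up to the next half hour.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical illustration only: both groups truly average 7h10m
    # (430 minutes) of sleep, with a 60-minute person-to-person spread.
    n = 1000
    true_a = rng.normal(430, 60, n)
    true_b = rng.normal(430, 60, n)

    # Group A reports to the nearest half hour; group B rounds up to the
    # next half hour ("about seven and a half hours, I guess").
    reported_a = np.round(true_a / 30) * 30
    reported_b = np.ceil(true_b / 30) * 30

    gap = reported_b.mean() - reported_a.mean()
    print(f"true gap: ~0 min, reported gap: {gap:.0f} min")

On a run like this the entirely artificial gap comes out around 15 minutes, in the same ballpark as the 6-to-36-minute differences these rankings are built on.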

Additionally, these differences are nowhere near the size of the sleep deficits used in the studies showing the dangers of sleep deprivation.

For example, in this study about sleep and overeating, subjects were woken up two-thirds of the way through their normal sleep time.  That would be about two hours early for nearly everyone above.  The studies on heart disease found a link only with chronic insomnia.  Cancer and diabetes are both more common in shift workers, but as someone who worked overnights for three years, I can tell you that’s not the same as waking up 30 minutes early.

Kaiser Fung has a great post about the popularizing of tiny effects; it will be a hit with you if you didn’t like Freakonomics.

When in Doubt, Blame the Journalist

Within minutes of hitting “publish post” on my mission statement, I found an article that reminded me of one of my worst pet peeves when it comes to data/science/studies of all types.  The headline read “Keeping Your Name? Midwesterners Are Judging You”.  My ears (eyes?) perked up at this headline, as I am among those women who declined to change their names after getting married.  Despite knowing that Jezebel is not often the best place for unbiased reporting, I gave it a read.

The article linked to a much more nuanced article here, but the basics are as follows: some students at a small midwestern college feel that women who don’t change their last names when they get married are less committed to their relationships than those who do.  This was interesting in part because the number of people who felt negatively about this quadrupled between 1990 and 2006.
For the personal reasons listed above, I find this interesting.  However, when you look at the numbers (2.7% of 256 and 10.1% of 246, which Jezebel did include) and do a little math, you realize that this “jump” amounts to a difference of roughly 18 people.
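
For anyone who wants to check that arithmetic, the back-of-the-envelope version (using the percentages and sample sizes from the Jezebel piece) looks like this:

    # Percentages and sample sizes as reported in the Jezebel piece.
    negative_1990 = 0.027 * 256   # about 6.9, call it 7 students
    negative_2006 = 0.101 * 246   # about 24.8, call it 25 students

    print(round(negative_1990), round(negative_2006),
          round(negative_2006) - round(negative_1990))
    # -> 7 25 18: the headline-grabbing quadrupling is a swing of ~18 people.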
A few things to consider about this:
  1. I couldn’t find that this was published anywhere.  It seemed to be a sort of “FYI for the headlines”.
  2. Apparently there’s no data on whether or not this perception is true.  My bias would be that it’s not, but I couldn’t find anything actually testing whether the perception was correct.  This happens in many “perception” studies….they quote the percentage of people who believe something, with the implication that the belief is wrong, without ever proving it.
  3. There wasn’t a gender breakdown of who those 18 people were.  If most were female, then isn’t their perception likely to be based on experience?  As in “well, if I didn’t do it, it would be because I wasn’t committed”?  That’s not judgement of others, that’s judgement of self.
  4. Have any of their professors (or TV shows, or other media sources) recently made disparaging remarks about this?  18 people who very well might all know each other (the university surveyed has under 1,000 students) could easily be influenced in their answers by even one strong source.
  5. Since those polled were college students, presumably very few of them were actually married.  From my experience in college, I would conjecture that this is a phase of life during which people are very idealistic about their future mates without having many real experiences to back it up.  I put much more stock in what people who are actually married use to gauge their level of commitment than in what someone who’s never walked down the aisle thinks.
All that being said, it looked like the study authors were careful to address several of these points (especially the “this is not a representative sample” point).  It was only in the translation to headlines that the more dubious conclusions were drawn.
Scientists have very little incentive to exaggerate the meaning of their findings.  They are in a profession where that could be very damaging.  Reporters for both old and new media have EVERY incentive to spin things into good headlines.  Remember that.