You are getting sleepy….

It’s been one of those weeks.  I feel I would pay good money to be able to fast forward through tomorrow and jump straight to the weekend, as I’m pretty sure my brain is leaking out of my ear.

Given that, the headlines about this announcement by the CDC caught my eye.  The headline reads “30% of US Workers Don’t Get Enough Sleep”.

Now, I’m in a pretty forgiving mood towards that sentiment.  I’m tired today, and I know when I got in this morning most of my coworkers were dragging too.  Any comment on sleep deprivation would have most certainly gotten lots of knowing looks and nods of commiseration.  This study backs us up right?  We’re all veeeeeeeeery sleepy.

Except that studies like this are almost all misleading.

Several years ago, I read a pretty good book by Laura Vanderkam called 168 Hours: You Have More Time Than You Think.  It was through this book that I got introduced to the Bureau of Labor Statistics American Time Use Survey.

Now, most time use surveys….the type that people use to give reports about how much we sleep or work….are done by just asking people.  That’s great, except that people are really terrible at reporting these things accurately.  The ATUS, however, actually walks people through their day rather than just having them guess at a number.  It’s interesting how profound these differences can be.  In another survey using time diary methodology, it was found that people claiming to work 60 – 64 hours per week actually averaged 44.2 hours of work.  More here, if you’re interested.

Unsurprisingly, sleep is one area where people chronically underestimate how much they’re getting.  The CDC study, which admits all its data came from calling people up and asking “how many hours of sleep do you get on average?”, found that 30% of workers sleep fewer than 6 hours per night.  The ATUS, however, finds that the average American sleeps 8.38 hours per night….and that’s on weekday nights alone.  Weekends and holidays, we go up to 9.34.

I couldn’t find the distribution for this chart, but I did find the age breakdown, so we can throw out those 15-24 and those over 65 (all of whom get about 9 hours of sleep/night).  We’re left with those 25 – 65, who average roughly 8.3 hours of sleep per night.

Alright, now let’s check the CDC number and figure out how much sleep the other 70% of the population would have to be getting in order to make these two numbers work.

If we take some variables:
a = percent of people sleeping an average of fewer than 6 hours per night
x = the maximum number of hours to qualify as “fewer than 6 hours”
b = percent of people sleeping more than 6 hours per night
y = average amount they are sleeping to balance out the other group
c = average amount of sleep among workers according to the ATUS survey

We get this:  ax + by = c
And then substituting:  (0.3*5.9) + (0.7*y) = 8.3
Solving for y:  y = 9.33 hours of sleep per night
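For anyone who wants to check the arithmetic, here’s a minimal sketch in Python.  The 5.9 is my own generous stand-in for “fewer than 6 hours”; any smaller value for that group only pushes y higher:

```python
# Weighted average check: a*x + b*y = c, solved for y.
a = 0.30   # share of workers sleeping fewer than 6 hours (CDC)
x = 5.9    # generous average for that group ("fewer than 6 hours")
b = 1 - a  # everyone else
c = 8.3    # overall average sleep per night (ATUS, ages 25-65)

y = (c - a * x) / b  # average the rest must be sleeping to balance
print(round(y, 2))   # 9.33
```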

Are 70% of Americans of working age actually getting 9.33 hours of sleep per night?  That would be pretty impressive.  It would also mean that instead of a normal distribution of sleep hours, we’d actually have a bimodal distribution….which would be a little strange.

There is, of course, the caveat that those answering the ATUS represent the whole population while the CDC targeted working adults.  It’s a little tough figuring out how profoundly this would affect the numbers since the BLS reports workforce participation rates for those 16 and up.  The unemployment rate for 2010 (the year the survey was completed) hovered just under 10%, but the “not in labor force” numbers are a little harder to get without skewing by the under 25 or over 65 crowd.  The CDC also didn’t report an average, so I can’t compare the two….but given the 30% number, the 6-hours-or-less cutoff would sit only about half a standard deviation below the mean (if the sleep data were roughly normal).
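To put a number on that last thought: if worker sleep really were roughly normal with the ATUS-style mean of 8.3 hours, and 30% really fell below 6 hours, the implied spread comes out absurdly large.  A quick sketch (the 8.3 mean is borrowed from the ATUS figure above; the cutoff and share are the CDC’s):

```python
# If 30% of a normal distribution sits below 6 hours and the mean is
# 8.3 hours, what standard deviation does that imply?
from statistics import NormalDist

mean = 8.3          # ATUS-style average (hours/night)
cutoff = 6.0        # CDC's "fewer than 6 hours" threshold
share_below = 0.30  # CDC's reported share under the threshold

z = NormalDist().inv_cdf(share_below)  # z-score of the 30th percentile, ~ -0.52
sd = (cutoff - mean) / z               # from: mean + z * sd = cutoff
print(round(sd, 2))                    # ~ 4.39 hours
```

A standard deviation north of four hours would put a nontrivial chunk of workers at 13+ hours a night, which is one more hint that the two surveys can’t both be describing the same population with the same method.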

So does this mean I’m not as tired as I think I am?  Nope, I’m pretty sure I’m still going to bed early tonight.  I will, however, be aware that a tiring week does not necessarily mean a sleep deprived one.

Age Bias and Polling Methods

A few years ago, in one of my research methods classes in grad school, a professor asked us to raise our hands if we had a cell phone.

Everyone raised their hands.  
Then he asked people to keep their hands up if they had a land line as well.  
Many hands went down.  
For those left, he asked how many answered it regularly or had caller ID and screened calls.  
Pretty much everyone.
This of course then led into a discussion of political polling and how many of us had ever considered who was actually answering these questions.  It was an interesting discussion, as pretty much the entire class admitted they would have self-excluded.  The Pew Research Center suggests this was not an anomaly, and that this is actually a problem that’s becoming more acute in political polling.
While many large national polling organizations have started calling cell phones as well, on the state level this is not often corrected for.  This can, and does, result in some inaccurate polls, as the sample of people home, with a landline, willing to answer a pollster’s call, does not always reflect the general population.  Actually, I think there’s good reason to question the representativeness of any sample willing to answer their phone for an unknown number, but that could be disputed (those interested enough to pick up the phone also might be more likely to actually go vote).
Anyway, none of this is new.  What is new this (presidential) election cycle is that news organizations are now starting to put up stats on Twitter and Facebook status updates.  I decided to take a look and see exactly how skewed these stats are, and found that Twitter is most popular in the 18-29 demographic.  Of course, this is the least likely demographic to actually vote.  Interestingly, the poll on Twitter usage did not include people under 18, but these are not excluded when they are compiling trends.  
So two different ways of tracking elections, two different sets of flaws.  Pick your poison.

Opinions, everybody’s got one

I was listening to a management podcast recently where a man named John Blackwell was being interviewed.  He was talking about how he was constantly reading things about how the whole workplace was changing, but he was getting curious as to why he felt like the companies he worked with weren’t reflecting this.  When he tried to investigate, he found out that the ongoing surveys commonly used in British management journals (can’t find a link) were being done on the “up and coming business leaders”.  When he looked in to what that meant, he realized it was people who were second year MBA students.

The problem with this, of course, was that it was asking people not yet in the workforce what the workforce was going to look like 10 years from now.  They found, not surprisingly, that young people in grad school tend to be very optimistic about things like “working from home” or “flex time” when they’re in school, but when they got into business, they toed the line.  Thus, every survey done was essentially useless.
This all reminded me of a conversation I got into several years ago when I was working the overnight shift.  Someone had brought in a magazine (People or Vogue or something like that) and they had a ranking of the 100 most beautiful women in Hollywood.  Drew Barrymore was number one that year, and one of my (young, male) coworkers was actively scoffing at that.  “She’s unattractive,” he stated definitively.  “All the guys I know think so too.”
Now, I was feeling a little feisty feminist that night, so I thought about how to challenge him on that.  Leaving aside that “Hollywood unattractive” would still turn heads in any average crowd (and be more attractive than any girl he’d dated), something about his comment irked my data side.  “So maybe the voting was done by women,” I replied.  
He was floored.
I noted that it was not a men’s magazine that ran the story, so really women’s opinions of other women’s attractiveness would actually be more relevant to this list.  Furthermore, as most of the leading women in Hollywood make their money on romantic comedies, professionally women’s opinions of their attractiveness (which presumably included a certain likeability factor) would actually matter more than men’s.
I was fascinated that this clearly disturbed him.  It had never occurred to him that straight men may not be the target audience for female attractiveness, or even that the relevance of his opinion might be questioned.  He wasn’t trying to be a jerk, he was legitimately confused by the whole idea.
A long intro, but the bigger point is important.  In any opinion survey or research, it’s important to figure out whose opinion is most relevant to what you’re trying to get at and why.  When it comes to law and public policy questions, I think every voter is relevant.  When it comes to workplace trends?  You may need to narrow your sample.
Sampling bias is a huge problem in many contexts, but my primary one for today’s post is when the survey was not conducted with the end in mind.  For any sample, you have to figure out how much your subject’s opinions actually matter given what you’re trying to find out.  In social conversation it may be interesting to find out what a particular person thinks of a topic, but for good data, show me why I care.

Stand Back! I’m going to try SCIENCE!

Today I discovered that my favorite webcomic actually has a special comic up if you check it from my employer’s server.  Turns out the artist’s wife is a patient, doing well, and he wanted to show some love.  This post is thus titled for this shirt, which would make an awesome Christmas present for me, even in April.

Anyway, this weekend I saw this story with the headline “Study: Conservatives’ Trust In Science At Record Low”.

My first thought on seeing this was that the word “science” is a loaded word.  I mean, I’m as much a science geek as anyone.  Math’s my favorite, but science will always be a close second.  But do I trust science? I’m not sure.  Something really bothered me about that question, but I couldn’t quite put my finger on it until I read this post on the study from First Things today.  

My love of science makes me a skeptic.  It makes me question relentlessly and then continuously revisit to figure out what got left out.  I don’t trust science because not trusting your assumptions is science done right.  If we could all trust our assumptions, what would we need science for?  This is the problem with vague questions and loaded words.  Much like the discussion in the comments section of this post, where several commenters weighed in on the word “delegate” in relation to household tasks, it’s clear that people will interpret the phrase “trust science” in many different ways.

Some might say it means the scientific method, scientists, science as a career, science’s role in the world, or something else not springing to mind.  Given the vagueness of the question though, I would have a hard time actually calling anyone’s interpretation wrong.  Mine is based on my own bias, but I would wager everyone’s is.  So isn’t this survey more about how we’re defining a phrase than about anything else?

I thought my annoyance was going to end there, I really did.

Then I looked at the graph with the story, and had no choice but to get annoyed all over again.

That’s what I get for just reading headlines.

So over the course of this survey, moderates have consistently trusted science less than conservatives for all but four data points?  Why didn’t this get mentioned?  I found the original study and took a quick look for the breakdown: 34% self-identified as conservative, 39% as moderate, and 27% as liberal.  So 73% of the population has shown a significant drop-off in “trust of science” and yet they’re somehow portrayed as the outliers?  Science and technology have changed almost unimaginably since 1974, and yet liberals’ opinions about all that haven’t changed*?  Does that strike anyone else as the more salient feature here?

*Technically this may not be true.  I don’t know what the self identified proportions were in 1974, so it could be a self-identification shift.  Still.  This might be that media bias everyone’s always talking about.


Hate’s a strong word.  I get that.  I also get that data and survey types are not always the sort of thing that inspires people to strong hatred, but here we are.

In this post I mentioned my annoyance at perception/prediction polls.  The one I referenced was based on women who didn’t change their last names and their level of marital commitment.  Commenter Assistant Village Idiot mentioned another example, which I also liked: “‘Do you think earthquakes are more likely now because of climate change?’ What we think has nothing to do with anything. The earthquakes will happen according to their own rules.”

In writing that post however, I forgot to mention that same study included an even worse piece of data.  As a rebuttal to the “Midwestern college kids don’t think non-name changing women are committed” they included a remark that women who didn’t plan on changing their names didn’t feel less committed. 


I would really love it if someone could tell me if there’s a proper name for this sort of thing, but I always think of it as “the embarrassing question debacle”.  Basically, researchers ask people questions with a potentially embarrassing answer, and then report it as meaningful when people do not answer embarrassingly.

There are only two types of people I have ever heard admit that they went into their marriages less than completely committed:

  1. Those who have been married successfully for quite some time who are now comfortable in admitting they were totally naive when they walked down the aisle.
  2. Those who are already divorced and reflecting on what went wrong.
Level of commitment is best assessed in retrospect, and I look with great skepticism at anyone who says they can gauge it before the fact.  
Getting at the reasons people do things can be brutal.  Your only source for your data also has the biggest motivation to conceal it from you.  Some people are actually doing things for good reasons, some just want to look like they are, and some are lying to themselves.  Unless a study at least attempts to account for all 3 scenarios, I would hold all answers suspect.

It’s not the question, it’s how you ask it

Data gathering is a lot harder than most people imagine.  It’s an interesting exercise to take a study and prior to reading it start asking yourself “how would I, if pressed, get the data they claim to have gotten?”.  It’s amazing how many fall apart quickly when you realize how bad the source data is.

I face this all the time at work.  The simplest questions….what is our demand for transplants?….can be a never ending labyrinth of opinion, observation, anecdote, and data….all completely enmeshed.  I spend much of my day trying to untangle these strings, and I never underestimate how difficult getting a simple answer can be.  A piece that ran today, titled “How Many Would Repeal Obamacare?”, illustrates this challenge by reviewing 4 different surveys that all try to get to the same number: how many people think healthcare reform should be repealed?

It’s a great article that covers sampling practices, question phrasing, date of the poll, and history of the polling organization.

If you look at the numbers, it shows up pretty quickly that when given dichotomous choices (repeal/keep), people often look like they hold a strong opinion.  In the polls where more moderate answers are offered (“it may need small modifications, but we should see how it works”), people trend towards that answer.

The phrasing was extremely intriguing though:
“Turning to the health care law passed last year, what is your opinion of the law?”
“If a Republican is elected president in this November’s election, would you strongly favor, favor, oppose, or strongly oppose him repealing the healthcare law when he takes office?”
“Do you think Congress should try to repeal the health care law, or should they let it stand?”

In one, the question focuses on personal opinion, in the next the focus is the presidency, in the third it’s Congress.  All of this for a law that most Americans have yet to feel the effects of in any practical way.

Of course this is not to say that a public opinion poll (or 4) makes one side right or wrong. If constitutionality or effectiveness are your concern, nothing here addresses either.  I am enjoying it immensely for the educational value though, and kind of wishing I was teaching a class so I could use this as an example.  Those of us in Massachusetts do have the luxury of sitting back and just sort of pondering all of this….as this has been our world for 7 years now.

That reminds me….were these samples controlled for that????