Gender pay gap – Categories and equivalencies

Amy Alkon (aka the Advice Goddess) had an interesting piece over on her blog about the gender pay gap stat that keeps getting floated around.  By the way, if you’re at all libertarian leaning and hate the TSA, she’s a good read.

Anyway, I’ve posted before about the gender pay gap stat, and how it’s fairly deceptive, but her post triggered a point I hadn’t thought about previously.  Apparently the stat (that women make 75-81 cents per dollar that men make) is based on full time year round workers.  Alkon quotes another article that mentions that “full time” means anything over 35 hours.  
Now obviously this accounts for some of the disparity in the pay gap.  We all know high achievers who work 70 hours a week, and lumping them in with those working 35 is silly.  
It’s an interesting study in categories though.  When I took my assessment class in grad school, the professor showed us a study that had been done on the number of sex partners a person had.  The options were:
  1. 0
  2. 1
  3. 2
  4. 3
  5. 4+
He pointed out two issues with this setup.  First, is there truly a meaningful difference between those who had 2 partners and those who had 3?  Second, what about the top category: is everyone with 4 or more partners really equal?
This mirrors the paycheck issue pretty well.  We’d all expect someone working 40 hours to make more than someone working 20 hours, but none of the calculations take into account that someone working 60 hours will also make more than someone working 40 hours.  Alkon also links to this piece by Kay Hymowitz that gives this quote:

In 2007, according to the Bureau of Labor Statistics, 27 percent of male full-time workers had workweeks of 41 or more hours, compared with 15 percent of female full-time workers; meanwhile, just 4 percent of full-time men worked 35 to 39 hours a week, while 12 percent of women did. Since FTYR men work more than FTYR women do, it shouldn’t be surprising that the men, on average, earn more.
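Just to illustrate how the single “anything over 35 hours” bucket can manufacture a gap on its own, here’s a quick toy calculation.  The hourly wage, the bucket hours, and the shares are all made up for illustration (loosely echoing the BLS percentages above: more men in the 41+ bucket, more women in the 35–39 bucket), and everyone gets the identical hourly wage:

```python
# Toy model: identical hourly wage, different hours distributions.
# Bucket shares are made up, loosely echoing the BLS percentages
# quoted above (more men working 41+ hours, more women 35-39).
WAGE = 25.0  # same hourly wage for everyone, by construction

# (average weekly hours in bucket, share of full-time workers)
men   = [(36, 0.04), (40, 0.69), (50, 0.27)]
women = [(36, 0.12), (40, 0.73), (50, 0.15)]

def avg_weekly_pay(dist):
    """Expected weekly paycheck given an hours distribution."""
    return WAGE * sum(hours * share for hours, share in dist)

men_pay = avg_weekly_pay(men)
women_pay = avg_weekly_pay(women)
print(f"men:   ${men_pay:.2f}/week")
print(f"women: ${women_pay:.2f}/week")
print(f"ratio: {women_pay / men_pay:.0%} -- a 'gap' at identical hourly wages")
```

Both groups are “full time year round” by the 35-hour definition, yet a per-dollar gap shows up purely because the hours differ inside the bucket.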

She also mentions the term “proofiness”….the use of misleading statistics to confirm what you already believe. Love that.

Anyway, I’m all for equal pay for equal work, but only if we’re really talking about equal pay AND equal work.

Bad categories, bad.

Never go with facts when a pre-existing narrative will do….

Well, the Republican National Convention has come and gone, and I have been too busy with house guests and the new little darling to even watch the much talked about Clint Eastwood speech.  So I can’t say I had been too on top of things, though I was a bit surprised to see the headline that the reality show (which I have also not seen) “Here Comes Honey Boo-Boo” had beat the RNC in the ratings.

This morning, in a quiet moment, I was perusing the internets, and found an interesting note in a Gawker article about that headline:

The Hollywood Reporter story titled “Honey Boo Boo Ratings Top the Republican National Convention” that perpetuated this myth went on to state that this victory was in the demographic of adults 18-49, and this results from coverage of the convention being spread over multiple channels. “Aggregate coverage of the RNC across networks obviously eclipsed Honey Boo Boo considerably,” Michael O’Connell wrote. Obviously. Considerably.

The media LOVES stories of increasing American ignorance and the decline of civilization.  So much so that they’ll rearrange their own numbers to prove how bad things are getting.

It’s quite the racket really….create a reality show, report on how this brings media to a new low, then hype it up so people watch it, then start writing stories about how people are watching it.  

Genius.

Economic Data, and why I don’t talk about it

I find it really hard to even comment on economic data on this blog.  It’s based on so many assumptions and there are so many different numbers that can be included or excluded that critiquing it is a combination of trying to shoot fish in a barrel and trying to catch a greased pig.

Not my idea of a good time.

Anyway, BD Keller linked to an excellent post today that is way more articulate than I am about why evidence based monetary policy is so hard to come by.

On economic experimental models:

Think of a good experimental design: randomised control variables, holding everything else constant, etc. Now think of the worst possible experimental design. Imagine something that engineers or psychologists might dream up over beers for a laugh, or to illustrate what not to do. That’s what economists face. It’s as if our lab assistants (the fiscal and monetary authorities) were deliberately trying to make our (economists’) lives as hard as possible. They do this, of course, not to spite us, but to try to make everyone else’s lives as easy as possible. To get a good experimental design for economists, both the fiscal and monetary authorities would need to be malevolent.

Makes sense, but given this, I do wish they’d stop stating their predictions with such authority.

Now THAT’s how you write a science headline

“Babies Shun Altruism, Prefer Bouncing”

Speaking of replication of results, this study failed to substantiate the idea that 10-month-old babies had a moral code.  Turns out that their preference for “helpful” robots was based less on the fact that the robots were helpful, and more on the fact that they bounced.

I’m sort of curious how many of the original study authors were parents.  I’ve only been a mom for 19 days and even I could tell you that babies like bouncy things more than discussions about man’s existential angst.  The 2 AM feeding helps you figure these things out pretty quickly.

For fun, I decided to conduct my own n=1 experiment and to present my son with a survey regarding his preference for robots in general and their morals in particular.  I thought it was a fairly well crafted survey.

I think my findings are best summarized with this picture:

I think that should be good enough for any number of social psychology journals.  

Why do women quit science?

A week ago, I got forwarded this NPR article called “How Stereotypes Can Drive Women To Quit Science”.  It was sent to me by a friend from undergrad, female, successful, with both an undergrad and a grad degree in engineering.  She found it frustrating, and so do I.

Essentially, the article is about a study that tracked female science professors’ discussions at work (using some very cool/mildly creepy in-ear recording devices), and came to the conclusion that women left science fields not because they were being overtly discriminated against, but because they were scared that they might be.  This panic is apparently called “stereotype threat”, and is explained thusly:

When there’s a stereotype in the air and people are worried they might confirm the stereotype by performing poorly, their fears can inadvertently make the stereotype become self-fulfilling.

I figure this is why I only routinely make typos when someone is watching me type (interestingly, I made two just trying to get through that sentence).

Anyway, the smoking gun (NPR’s words, not mine) was that:

When male scientists talked to other scientists about their research, it energized them. But it was a different story for women. “For women, the pattern was just the opposite, specifically in their conversations with male colleagues,” Schmader said. “So the more women in their conversations with male colleagues were talking about research, the more disengaged they reported being in their work.”

Disengagement predicts that someone is at risk of dropping out. There was another sign of trouble.

When female scientists talked to other female scientists, they sounded perfectly competent. But when they talked to male colleagues, Mehl and Schmader found that they sounded less competent.

The interpretation of this data was curious to me. I wasn’t sure that social identity threat was the first theory I’d jump to, but I figured I’d read the study first.

It took me a little while to find the full paper free online, but I did it.  I got a little hung up on the conversation recording device part (seriously, it’s a very cool way of doing things….they record for 50 seconds every 9 minutes for the work day to eliminate the bias of how people recall conversations….read more here).
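For a sense of scale, a device that records 50 seconds out of every 9-minute cycle is only sampling a small slice of the day:

```python
# Fraction of the workday the recorder actually captures:
# 50 seconds out of every 9-minute (540-second) cycle.
recorded = 50        # seconds recorded per cycle
cycle = 9 * 60       # cycle length in seconds
fraction = recorded / cycle
print(f"{fraction:.1%} of the day is sampled")  # about 9.3%
```

That’s the point of the method: a thin, evenly spread sample of real conversations instead of asking people to recall them.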

Here are the basics: the sample size was 19 female faculty from the same research university.  Each was then “matched” with a male faculty member for comparison.  I couldn’t find the ages of the men, but they were matched on rank and department.  It appears the 19 women were drawn from 32 possibilities; I’m unclear whether the remainder were unavailable or whether they declined. The genders showed no difference in their levels of disengagement at the beginning of the study.

Unfortunately, they didn’t elaborate much on one thing I had a lot of questions about: how do you define competence?  They only stated that two different research assistants rated it.  Since all of the researchers in this study were social psychologists, presumably so were their assistants.  It concerned me a bit that science faculty were being rated by people who wouldn’t know actual competence, merely the appearance of it (the study authors admit this is a weakness).

Another interesting point is that job disengagement was only measured up front.  When I had initially read the report on the study, I had inferred that they were taking data post-conversation to see the change.  They weren’t.  They took it up front, then found that the more disengaged women had a higher percentage of their total discussions with men devoted to work than the other women did.  It occurred to me that this could merely be a sign of “female auto pilot mode”.  Perhaps when women are at ease they share more about their personal lives?  The researchers admit this as a possibility, but say it’s not likely given that the women sound less competent….as assessed by people who didn’t know what competence sounded like.

One point not addressed at all in this study was the seniority of the people the participants were talking to.  In traditionally male dominated fields, it is likely that at least some of the men they ran into were the chairs of the department, etc., meaning that these women were probably talking to a more intimidating group of men than women.  Women who talked heavily about research and less about personal lives may have run into more senior faculty more often.  As the study took place over 3 days, it could conceivably be skewed by who people ran into.  Additionally, I was wondering about the presence of mentoring and/or women-in-science type groups.  Women in science frequently meet other women in science through these groups, and there could have been some up front data skewing there.

It’s also important to note that for every measure of disengagement in the study, the results were between 1.5 and 2 (on a scale of 1 to 5).  While statistically significant, I do wonder about the practical significance of these numbers.  If asked whether you agree or disagree with the statement “I often feel I am going through the motions at work”, how accurately could you answer, on a scale of 1 to 5?
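To make the statistical-vs-practical distinction concrete, here’s a sketch with entirely made-up scores (hypothetical numbers, chosen only to echo the 1.5–2 range mentioned above, not taken from the study):

```python
import statistics

# Hypothetical disengagement scores on a 1-5 Likert scale.
# These values are invented for illustration only.
group_a = [1.5, 1.6, 1.4, 1.5, 1.7, 1.6, 1.5, 1.4, 1.6, 1.5]
group_b = [1.9, 2.0, 1.8, 2.0, 1.9, 2.1, 1.9, 2.0, 1.8, 2.0]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)
diff = mean_b - mean_a

# With within-group spreads this tight, a difference like this would
# easily come out "statistically significant" -- yet it spans only
# about a tenth of the available 1-to-5 range.
print(f"means: {mean_a:.2f} vs {mean_b:.2f}")
print(f"difference: {diff:.2f} points, {diff / 4:.0%} of the 1-5 range")
```

Whether a reader answering “I often feel I am going through the motions at work” can even distinguish a 1.5 from a 1.9 is exactly the practical-significance question.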

Overall this study seemed very chicken and egg to me.  I’m not convinced that it’s implausible that women simply share more of themselves at work, especially when they’re comfortable, as opposed to the sharing itself making women more comfortable at work (there’s nothing worse at work than an awkward overshare).   I’m still not sure I get where you’d extrapolate stereotype threat unless it was the explanation you’d already picked…..I did not see any data that would point to it independently.

I’d like to see a follow up study in ten years to see if these women actually dropped out at higher rates than their male colleagues, and what their stated reasons for leaving were.  Without that piece, any conclusions seem incredibly hypothetical to me.  One of the things that drives me a bit bonkers when discussing STEM careers is that very few people seem interested in what the women actually doing these careers think about why they choose what they do or do not do.  I’ve never seen a study that walked into an English class and asked all the women why they weren’t engineers.  Likewise, if more of these women quit than the men, I’d like to see why they said they did it.  Then perhaps we can get into the weeds, but won’t somebody tell me why women actually think they’re quitting?

I looked through the references section and couldn’t find a paper that addressed this question.

Anyway, I think it’s important to remember that when reading a study like this, you have to agree with all the steps before you can agree with the conclusions.  Is measuring snippets of conversations and having them coded by research assistants a valid method of determining how women function in the workplace?  Is 19 people a big enough sample size?  Should level of disengagement at work be controlled for possible outside events that might be causing them to feel less engaged?  Should the women in question be asked if they felt stereotype threat, or is that irrelevant?

Most importantly, should NPR have clarified that when they said “stereotypes can drive women out of science” they meant “theoretical stereotypes that may or may not have been there and that women may or may not have been afraid of….and none of these women had actually quit we just think they might?”.  You know, just hypothetically speaking.