Why do women quit science?

A week ago, I got forwarded this NPR article called “How Stereotypes Can Drive Women To Quit Science“.  It was sent to me by a friend from undergrad, female, successful, with both an undergrad and a grad degree in engineering.  She found it frustrating, and so do I.

Essentially, the article is about a study that tracked female science professors’ discussions at work (using some very cool/mildly creepy in-ear recording devices), and came to the conclusion that women left science fields not because they were being overtly discriminated against, but because they’re scared that they might be.  This panic is apparently called “stereotype threat”, and is explained thusly:

When there’s a stereotype in the air and people are worried they might confirm the stereotype by performing poorly, their fears can inadvertently make the stereotype become self-fulfilling.

I figure this is why I only routinely make typos when someone is watching me type (interestingly, I made two just trying to get through that sentence).

Anyway, the smoking gun (NPR’s words, not mine) was that:

When male scientists talked to other scientists about their research, it energized them. But it was a different story for women. “For women, the pattern was just the opposite, specifically in their conversations with male colleagues,” Schmader said. “So the more women in their conversations with male colleagues were talking about research, the more disengaged they reported being in their work.”

Disengagement predicts that someone is at risk of dropping out. There was another sign of trouble.

When female scientists talked to other female scientists, they sounded perfectly competent. But when they talked to male colleagues, Mehl and Schmader found that they sounded less competent.

The interpretation of this data was curious to me.  I wasn’t sure that social identity threat was the first theory I’d jump to, but I figured I’d read the study first.

It took me a little while to find the full paper free online, but I did it.  I got a little hung up on the conversation recording device part (seriously, it’s a very cool way of doing things….they record for 50 seconds every 9 minutes for the work day to eliminate the bias of how people recall conversations….read more here).
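Out of curiosity, I ran the numbers on how much audio that scheme actually captures.  A quick back-of-the-envelope sketch (the 50-seconds-every-9-minutes figure is from the study; the 8-hour workday is my own assumption):

```python
# Rough arithmetic on the EAR recording scheme: 50 seconds of audio
# captured once every 9 minutes.  The workday length is assumed.

RECORD_SECONDS = 50      # length of each recorded snippet
CYCLE_SECONDS = 9 * 60   # one snippet every 9 minutes
WORKDAY_HOURS = 8        # assumed workday

snippets_per_day = (WORKDAY_HOURS * 3600) // CYCLE_SECONDS
audio_minutes = snippets_per_day * RECORD_SECONDS / 60
coverage = RECORD_SECONDS / CYCLE_SECONDS

print(f"{snippets_per_day} snippets/day, ~{audio_minutes:.0f} min of audio")
print(f"coverage: {coverage:.1%} of the workday")  # about 9.3%
```

So all of these conclusions rest on roughly 45 minutes of audio per person per day….which, to be fair, is still far more honest than asking people to remember their own conversations.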

Here are the basics:  The sample size was 19 female faculty from the same research university.  Each was then “matched” with a male faculty member for comparison.  I couldn’t find the ages for the men, but they were matched on rank and department.  It appears the 19 women were out of 32 possibilities.  I’m unclear whether the remainder were unavailable or whether they declined.  The two genders did not differ in their levels of disengagement at the beginning of the study.

Unfortunately, they didn’t elaborate much on one thing I had a lot of questions about: how do you define competence?  They only stated that two different research assistants rated it.  Since all of the researchers in this study were social psychologists, presumably so were their assistants.  It concerned me a bit that science faculty were being rated by people who wouldn’t know actual competence, merely the appearance of it (the study authors admit this is a weakness).

Another interesting point is that job disengagement was only measured up front.  When I had initially read the report on the study, I had inferred that they were taking data post conversation to see the change.  They weren’t.  They took it up front, then found that the more disengaged women had a higher percentage of total discussions about work with men than the other women did.  It occurred to me that this could merely be a sign of “female auto pilot mode”.  Perhaps when women are at ease they share more about their personal lives?  The researchers admit this as a possibility, but say it’s not likely given that the women sound less competent….as assessed by people who didn’t know what competence sounded like.

One point not addressed at all in this study was the seniority of the people the participants were talking to.  In traditionally male dominated fields, it is likely that at least some of the men they ran into were department chairs and the like, meaning that these women were probably talking to a more intimidating group of men than women.  Women who talk heavily about research and less about personal lives may have run into more senior faculty more often.  As the study took place over 3 days, it could conceivably be skewed by who people ran into.  Additionally, I was wondering about the presence of mentoring and/or women-in-science type groups.  Women in science frequently meet other women in science through these groups, and there could have been some up front data skewing there.

It’s also important to note that for every measure of disengagement in the study, the results were between 1.5 and 2 (on a scale of 1 to 5).  While statistically significant, I do wonder about the practical significance of these numbers.  If asked whether you agree or disagree with the statement “I often feel I am going through the motions at work”, how accurately could you answer, on a scale of 1 to 5?

Overall this study seemed very chicken and egg to me.  I’m not convinced that it’s implausible that women simply share more of themselves at work, especially when they’re comfortable, as opposed to the sharing itself making women more comfortable at work (there’s nothing worse at work than an awkward overshare).   I’m still not sure I get where you’d extrapolate stereotype threat unless it was the explanation you’d already picked…..I did not see any data that would point to it independently.

I’d like to see a follow up study in ten years to see if these women did actually drop out at higher rates than their male colleagues, and what their stated reason for leaving was.  Without that piece, any conclusions seem incredibly hypothetical to me.  One of the things that drives me a bit bonkers when discussing STEM careers is that very few people seem interested in what the women actually doing these careers think about why they choose what they do or do not do.  I’ve never seen a study that walked into an English class and asked all the women why they weren’t engineers.  Likewise, if more of these women quit than the men, I’d like to see why they said they did it.  Then perhaps we can get into the weeds, but won’t somebody tell me why women actually think they’re quitting?

I looked through the references section and couldn’t find a paper that addressed this question.

Anyway, I think it’s important to remember that when reading a study like this, you have to agree with all the steps before you can agree with the conclusions.  Is measuring snippets of conversations and having them coded by research assistants a valid method of determining how women function in the workplace?  Is 19 people a big enough sample size?  Should level of disengagement at work be controlled for possible outside events that might be causing them to feel less engaged?  Should the women in question be asked if they felt stereotype threat, or is that irrelevant?

Most importantly, should NPR have clarified that when they said “stereotypes can drive women out of science” they meant “theoretical stereotypes that may or may not have been there and that women may or may not have been afraid of….and none of these women had actually quit, we just think they might”?  You know, just hypothetically speaking.

What is STEM anyway?

I’ve been trying to work on a post about some further research on women in STEM fields, and I keep getting bogged down in definitions.  I am currently headed down the rabbit hole of what a “STEM job” actually is.

I found out some interesting things.  According to this report, my job doesn’t count as a STEM job, despite the fact that I work with nothing but math and science (alright, and some psych).  It’s not the psych part that excludes me, however; it’s actually that I work in healthcare.  Healthcare, apparently, is excluded completely.

So if I were performing my same job, with the same qualifications, in a different field, I’d have a STEM job.  Since I report in to a hospital, however, I don’t have one.

Your doctor does not have a STEM job.  Neither does your pharmacist, dentist, nurse, or anyone who teaches anything on any level.  Apparently if you run stats for the Red Sox, you’re in a STEM job, but do the same thing for sick people, and it doesn’t count.

Fascinating.

Deadliest weapons and causes of death

There’s an apocryphal story in the international public health sphere about the time someone tried to figure out total mortality in Africa in any given year.  Apparently they went through the newsletters/press releases of charities dedicated to various diseases, and found that if you added all the “x number of people die every year” numbers up, everyone in Africa died every year.  Twice.

While there’s likely some data inflation there, the other explanation is that it’s really hard to classify causes of death (I’ve covered some of this before).  Even with infectious disease, this can be tricky.  If an HIV positive person contracts tuberculosis and dies, do they go under HIV mortality, or tuberculosis?  If malnutrition leaves one susceptible to other infections, what’s the real cause of death?  How about a bad water supply that carries ringworm?
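The double counting in that apocryphal story is easy to reproduce with a toy example (all numbers invented): if every charity counts each death its disease contributed to, co-morbid deaths get counted once per cause, and the per-cause totals can easily exceed the actual body count.

```python
# Toy illustration of cause-of-death double counting.  Each death is
# tagged with every condition that contributed to it, and each "charity"
# claims every death that mentions its disease.  Numbers are invented.

deaths = (
    [{"HIV"}] * 20 +                  # HIV alone
    [{"tuberculosis"}] * 10 +         # TB alone
    [{"HIV", "tuberculosis"}] * 20 +  # co-infected: claimed by both
    [{"malnutrition", "measles"}] * 15
)

total_deaths = len(deaths)
per_cause = {}
for causes in deaths:
    for cause in causes:
        per_cause[cause] = per_cause.get(cause, 0) + 1

print("actual deaths:", total_deaths)                       # 65
print("sum of per-cause counts:", sum(per_cause.values()))  # 100
```

Sixty-five actual deaths, one hundred claimed….nobody lied, exactly, the overlapping cases just got counted more than once.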

I bring this up because I saw a fascinating stat today over at the New Yorker (via Farnam St):

What Is The Most Effective Killing Machine Man Has Ever Seen?  Mosquitoes.
There has never been a more effective killing machine. Researchers estimate that mosquitoes have been responsible for half the deaths in human history.  Malaria accounts for much of the mortality, but mosquitoes also transmit scores of other potentially fatal infections, including yellow fever, dengue fever, chikungunya, lymphatic filariasis, Rift Valley fever, West Nile fever, and several types of encephalitis. Despite our technical sophistication, mosquitoes pose a greater risk to a larger number of people today than ever before. Like most other pathogens, the viruses and parasites borne by mosquitoes evolve rapidly to resist pesticides and drugs.
via “The Mosquito Solution,” ($$$) The New Yorker, July 9 & 16, 2012, p. 40

Definitely made me a bit nervous, especially since it seems malaria, etc would actually be some of the more accurately counted causes of death.  So, um, take care of yourselves this summer, okay?

Political Arithmetic – Voter ID laws

Update: Link fixed

Last week I put up a post slamming an infographic on fair market rent between states.  I was interested in AVI’s response, which ended with “These are advocacy numbers.  Not the same as actual reality.”

Advocacy and other political skewings of data are one of those things that shouldn’t bother me, but do.  
I read headlines, knowing that I’m going to be driven nuts by the presumptions and projections, and yet I read things anyway.  It’s a bad habit.
All that being said, I truly enjoyed Nate Silver’s examination of the real effect voter ID laws might have on voter turnout in various states. 
He attempts to cut through all the partisan hoopla and do a one-person point-counterpoint.  An example:

But some implied that Democratic-leaning voting groups, especially African-Americans and Hispanics, were more likely to be affected. Others found that educational attainment was the key variable in predicting whom these laws might disenfranchise, with race being of secondary importance. If that’s true, some white voters without college degrees could also be affected, and they tend to vote Republican.

He also makes a fascinating point about the cult of statistical significance:

Statistical significance, however, is a funny concept. It has mostly to do with the volume of data that you have, and the sampling error that this introduces. Effects that may be of little practical significance can be statistically significant if you have tons and tons of data. Conversely, findings that have some substantive, real-world impact may not be deemed statistically significant, if the data is sparse or noisy.
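Silver’s point is easy to demonstrate with a quick simulation (a sketch with invented numbers, not anything from his actual analysis): give two groups a difference so small nobody would care about it in practice, collect a million observations per group, and a t-test will flag it as wildly significant.

```python
# Statistical vs practical significance: a tiny, practically meaningless
# effect becomes "highly significant" given enough data.  The effect size
# and sample sizes here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups differing by a trivial 0.02 on some outcome (sd = 1).
a = rng.normal(0.00, 1.0, size=1_000_000)
b = rng.normal(0.02, 1.0, size=1_000_000)

t, p = stats.ttest_ind(a, b)
print(f"observed difference: {b.mean() - a.mean():.3f}")  # tiny
print(f"p-value: {p:.2e}")  # minuscule: "statistically significant"
```

Run it the other way….a difference big enough to matter, measured in a couple dozen people….and the same test will happily shrug and tell you nothing is going on.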

On the whole, he concludes it will swing in the Republican direction for this election, but reminds everyone:

One last thing to consider: although I do think these laws will have some detrimental effect on Democratic turnout, it is unlikely to be as large as some Democrats fear or as some news media reports imply — and they can also serve as a rallying point for the party bases. So although the direct effects of these laws are likely negative for Democrats, it wouldn’t take that much in terms of increased base voter engagement — and increased voter conscientiousness about their registration status — to mitigate them. 

The whole article is long but a great read about how to assess policy changes if you’re trying to get to the truth, rather than just prove a political point.

Weekend moment of Zen 7-14-12

No comic, but a mildly humorous anecdote:

My wonderful husband and I took a child birth education class today.  The teacher was excellent, and spent a lot of time emphasizing that there were lots of different opinions about lots of things, but the focus should always be having a healthy baby/healthy mom.

She repeated this several times (clearly trying to avoid having any natural childbirth vs epidural debates) and then mentioned that you could read plenty of research about all sorts of different aspects of childbirth, but that it was really important to assess sample size, who did the study, etc etc.

I started to laugh a bit, and she looked at me and said “no really, you would not believe how many bad studies there are out there!”.

Needless to say, I enjoyed this teacher immensely.

My kind of class right there.

Moral obligations and Lazy Truth

I was going to include this in a Friday link post, but I really felt it deserved its own spotlight.

There’s a new gmail gadget called “Lazy Truth” that promises to send you a fact check email every time you receive a (forwarded) email it deems to be of dubious content.

I haven’t tried it, so I’m not sure what it’s set up to flag, or how accurate the “fact check” email is, but I was immediately intrigued.  I’ve actually been working on a much longer post that covers just this topic, so it’s something I’ve been giving a lot of thought.

I’ve been mulling over the rise of Facebook/email/Twitter lately, and wondering…..for those of us who value our integrity and our truthfulness, and do not believe ends justify means, what exactly are the moral implications of hitting forward or share on information that we could have easily proven to be false if we’d checked?

I was wondering if I was the only one worried about this, when I came across a blog post from Dr Michael Eades.  He’s a pro-low-carb physician who spends much of his time critiquing nutritional research.  In a post about the book “The China Study”, he describes finding what he considered a great critique of it on another person’s blog.  Then this:

…. I had fallen victim to the confirmation bias.  My bias was that Dr. Campbell was wrong, so I was more than happy to uncritically accept evidence confirming his error without lifting a finger to double check said evidence myself.  I knew that if a blogger somewhere had come out with a long post describing an analysis of the China study demonstrating the validity of all of Dr. Campbell’s notions of the superiority of the plant-based diet, I would’ve been all over it looking for analytical errors.  But since Ms. Minger’s work accorded with my own beliefs, my confirmation bias ensured that I accepted it at face value. 

Once the fact that I had succumbed to my confirmation bias settled in around me, I became suffused with angst.  I had tweeted and retweeted Ms. Minger’s analysis a number of times, giving the impression that I had at least minimally checked it out and had approved it.  I had emailed it to a number of people, many of whom, I’m sure, had forwarded it on.  I’m sure I played a fairly large role in the rapid dissemination of the anti Campbell/China study info.

In the end, he went back and realized that the post was good, but his panic attack was intriguing to me.  How many of us have had this same panic?  How many of us should have?  How many lousy graphs rip through Facebook like wildfire because no one bothers to double check if they’re even valid?  Is the liar the person who created the graph, or do those who share it share some blame?

I don’t pretend I have an answer for this.  I feel most of the people interested enough to read this blog probably do not fall in the category of those who would easily share skewed information without thinking about it, but I am hoping for some thoughts/feedback from you all.

Are we so used to hearing politicians of all stripes seamlessly repeat bad data that we’ve come to view it as acceptable?  Is this just a fact of life?  Is it possible that we will be saved by widgets like the one above?   Does religion matter, or is this an overall moral issue? Does confrontation work with this sort of thing?  Or is this something I just have to learn to live with?

Soviet Propaganda, Infographic Style

In “How to Lie With Statistics“, the author frequently comments on Soviet propaganda and how bad it is.  As a member of a cynical generation, I found Huff’s annoyance at an oppressive regime using data skewing to seem better than it was almost quaint….I mean, of course they were.

Even given my cynicism and lack of Russian skills, I have to admit these infographics from the Duke U library are pretty interesting.

This one’s my favorite, because none of the bar heights make any sense:

Moral of the story?  Every time you share a bad infographic, the Communists win.

Good hospital/Bad hospital

Several years ago, back when I was working in the Emergency Department, I had a rather fascinating encounter with a patient’s wife.  It was late in the evening on a Friday….a generally bad time to come into the ER….and she had brought her husband in with a large cut on his arm.  He needed stitches for sure, but the place was hopping that night, and so she, her husband, and her two small children had been stuck in the waiting room for several hours.  After some time, she had come in asking me when someone was going to come get him.  At that point, I think they still had 4 or 5 people ahead of them, and I let her know.

She (fairly understandably) flipped out.  
As I tried to calm her down, she started to lecture me about how long they had been waiting….and then proceeded to let me know that this wait had come after she had driven her husband over an hour and a half to get there.  “You are SUPPOSED to be the best hospital in the country” she raged.  “How can you be if you make patients wait so long????”.
Now I had the “why am I waiting so long” conversation with literally thousands of patients in my time in the ER, but something really struck me about this poor woman’s frustration.  She had brought her husband to a hospital that was supposed to be the best (this particular hospital bounces around the top 5 in the country pretty routinely), but not for what he needed done that night.  What he needed was a simple set of stitches, the likes of which nearly any doctor in the country could have done.  When I took a look at her address, I realized she had driven by at least five different hospitals with ERs to get to ours.  Most likely any one of them would have gotten her faster service with the same quality of care.  In fact, within the next few years, three of them would devise marketing strategies around publicizing that fact.  The problem is, this woman had confused “the best” with “good at everything”.  
When it comes to hospitals, that’s just not true.
Given my professional experience, I was unsurprised to see Time reporting that not one of the 17 best hospitals (according to US News and World Report) made the Consumer Reports list of safest hospitals.
There are a couple of reasons for this, some good and some bad:
  1. Best hospitals tend to be large teaching hospitals.  Large teaching hospitals have a lot of residents. Residents can be a little dicey.
  2. Best hospitals tend to see huge numbers of patients.  This can complicate things.
  3. Best hospitals tend to see cases other hospitals can’t help.  Almost all of your top hospitals will have higher mortality rates than smaller community hospitals.  Why?  Because unless you’re literally DOA, the first thing a small hospital will do with a really sick patient is to ship them off to a hospital with a good intensive care unit.  The top hospitals almost never transfer their patients.
  4. Best hospitals are ranked in large part on how they treat the toughest cases.  The more unique your condition, or the worse your risk factors, the more selective you need to be.  The more routine your complaint, the more a top hospital can actually work against you….you’re going to be one of many, and nothing makes you stand out.
  5. Large medical centers, specifically in urban settings, give away a lot of free care to a lot of high risk populations.  These patients are unlikely to do well in any setting, and can skew the data tremendously.  Location counts.
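To make point 3 concrete, here’s a deterministic toy model (every number is invented): both hospitals are equally good at treating any given patient, but because the community hospital ships its sickest cases to the teaching hospital, the teaching hospital’s raw mortality rate looks terrible.

```python
# Transfer bias in hospital mortality rates: two hospitals with identical
# per-patient survival odds end up with very different raw death rates,
# purely because of who treats the sickest patients.  Numbers are invented.

CRITICAL_MORTALITY = 0.50  # same at both hospitals
ROUTINE_MORTALITY = 0.01   # same at both hospitals

# Community hospital: 100 patients, transfers 9 of its 10 critical cases.
community_critical, community_routine = 1, 90
# Teaching hospital: its own sicker mix, plus the 9 transfers.
teaching_critical, teaching_routine = 30 + 9, 70

def mortality_rate(critical, routine):
    deaths = critical * CRITICAL_MORTALITY + routine * ROUTINE_MORTALITY
    return deaths / (critical + routine)

community = mortality_rate(community_critical, community_routine)
teaching = mortality_rate(teaching_critical, teaching_routine)
print(f"community hospital: {community:.1%}")  # ~1.5%
print(f"teaching hospital:  {teaching:.1%}")   # ~18.5%
```

Same survival odds for every individual patient….the only difference is who ends up counted where.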
There’s constant strife over how to accurately rank hospitals, because professionals skew hospital rankings in the direction of valuing medical uniqueness.  Patients, on the other hand, tend to value things like “comfort of chairs in the waiting room” nearly as highly as they do “physician competence”.  Patients also claim to want things that they don’t really….for example, nearly everyone says they value physician competence over bedside manner, yet patients routinely rate physicians with good bedside manner higher than those with good technical skill.  Patients receiving appropriate care also file plenty of complaints if it wasn’t the care they expected.  No hospital ranking is going to hit every part of the hospital equally regardless of who ranks it, and every department can have a bad day.
I don’t have a lot of answers to these issues, but it’s important to keep them in mind when you hear ideas for improvement.  While the Time article got a bit too political for my taste, it is true that patients can only make informed decisions if the information they have is what they think it is.

19 women don’t like sports

Normally this is the sort of thing Joseph’s blog specializes in, but I couldn’t let this one slide.

I’ve spent all of last week and this week listening to construction workers traipsing around my basement, working diligently to finish it so we can finally have the sports room my husband’s impressive memorabilia collection deserves.  Thus, it distressed me a bit to see the headline that married women only watch sports for the sake of their husbands.  Is my interest in the sports room one big lie? Has my Red Sox fandom all been a fraud?  Should I toss out all my vintage basketball cards from the 80s?  And football…..okay, I actually didn’t like it all that much until I got married.  I’ll give you that one.  Two out of three ain’t bad.

Anyway, I was pretty amused when Jezebel and others quickly pointed out that the sample size for this study was 19.  19 women, all from around the University of Tennessee.  In case you’re curious, The Bleacher Report ranked Knoxville the 44th best sports town in the USA.  Maybe my perception is skewed because Boston’s #2, but I’m not sure that’s an overly representative sample from an overly representative town.

Get some good Southie girls together and ask them what they think, I bet you’ll get a wicked different picture.