On Romney

I’m posting about Romney to mention that I’m not commenting on Romney.

I’ve posted before about the stat on people not paying federal income tax…that stat’s been out there forever. The issue here seems to be entirely about how that stat gets interpreted and what it means for voting. I don’t do interpretation, I just do numbers.


Never go with facts when a pre-existing narrative will do….

Well, the Republican National Convention has come and gone, and I have been too busy with house guests and the new little darling to even watch the much-talked-about Clint Eastwood speech.  So I can’t say I’ve been too on top of things, though I was a bit surprised to see the headline that the reality show (which I have also not seen) “Here Comes Honey Boo-Boo” had beaten the RNC in the ratings.

This morning, in a quiet moment, I was perusing the internets, and found an interesting note in a Gawker article about that headline:

The Hollywood Reporter story titled “Honey Boo Boo Ratings Top the Republican National Convention” that perpetuated this myth went on to state that this victory was in the demographic of adults 18-49, and that it resulted from coverage of the convention being spread over multiple channels. “Aggregate coverage of the RNC across networks obviously eclipsed Honey Boo Boo considerably,” Michael O’Connell wrote. Obviously. Considerably.

The media LOVES stories of increasing American ignorance and the decline of civilization.  So much so that they’ll rearrange their own numbers to prove how bad things are getting.

It’s quite the racket really….create a reality show, report on how this brings media to a new low, then hype it up so people watch it, then start writing stories about how people are watching it.  

Genius.

Anti-conservative bias and social psychology

My most popular blog post of all time was the one I did on conservative trust in the scientific community vs retraction rates.   I called it “Paranoia is just good sense if people really are out to get you” because I had a suspicion (confirmed when I ran the data) that conservatives might actually be behaving rationally when they said they trusted science less, given the ever-increasing retraction rates in prominent journals.

Now, a new study shows that this distrust of the scientific community is even better founded than I originally thought.

In a survey conducted by two self-proclaimed liberals, strong evidence emerged that conservatives are being systematically discriminated against in the field of social psychology.  What unnerved the authors even more is that this was not a case where people were hiding their bias:

To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings, but not bias, the results were not what they expected.
“The questions were pretty blatant. We didn’t expect people would give those answers,” said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.
He said that the findings should concern academics. Of the bias he and a co-author found, he said, “I don’t think it’s O.K.”

The study isn’t available yet, so I can’t say I’ve read the nuances.  Still, it’s hard for me to believe two liberal authors would have attempted to skew the results in this direction.  Conservatives have claimed this bias exists for years (look no further than the ethics complaint lodged against Mark Regnerus for proof), and will no doubt find nothing shocking about the results.  For liberals to have to face what this means, however, that’s something new.  Even in the comments on this article, the vitriol is surprising, with many saying that conservatives are so out of touch that it is an ethical responsibility to keep them out of fields like social psychology.

Yikes.

It is much to my chagrin that social science gets lumped in with the harder sciences, but since findings in this field are so often reported in the media, it makes sense to take them into account.  We now have a vicious cycle where some fields are dominated by one party, whose members then do studies that slam the other party, and then accuse that party of being anti-science when it doesn’t agree with the results.  This is crazy.  The worst thing that can happen to any scientific research is too much consensus….especially when it involves moving targets like social psychology.  With 40% of the population identifying as conservative, how can we leave those perspectives out?  Everyone, liberal and conservative, should be troubled by these findings.  Those untroubled by this should take a good look at themselves and truly ask the question “what am I so afraid of?”.

Review and redraft – research in government

A few months ago, my father let me know that New Hampshire had passed a law that required the various government agencies to update their rules/statutes every few years (5 years? 7 years? Dad, help me out here).  I’m not entirely sure what the scope of this law was, but my Dad mentioned that it was actually quite helpful for his work at the DMV.  It had surprised him how many of their rules did not actually reflect the changing times, and how helpful it was to update them.  One of the biggest rules they found in need of updating was that, in certain situations, they were still only allowed to accept doctor’s notes from M.D.s….so anyone who used a nurse practitioner for primary care couldn’t get an acceptable note….despite NPs being perfectly qualified to comment on the situations they were assessing.  It wasn’t that the note needed to be from an M.D., it was just that when the rule was written, very few people had anything other than a primary care M.D.  I found the entire idea pretty good and proactive.

I was thinking about that after my post yesterday on South Dakota’s law regarding abortion risk disclosure.  I was wondering how many states, if any, require that laws based primarily on current scientific research be reviewed after some set period of time.

Does anyone know if any states require this?  Or is it left solely to those who oppose certain laws to challenge them later?

Correlation and Causation – Abortion and Suicide meet the 8th circuit

Perhaps it’s the lawyer’s daughter in me, but I find watching courts rule on the presentation of data totally fascinating.

Today, the 8th Circuit Court of Appeals had to make just such a call.

The case was Planned Parenthood v Mike Rounds and was a challenge to a 2005 law that required doctors to inform patients seeking abortions that there was “an increased risk of suicide ideation and suicide”.  This was part of the informed consent process under the “all known medical risks” section.

Planned Parenthood challenged on the grounds that this was being presented as a causal link, and was therefore a violation of the doctor’s freedom of speech.

It’s a hot topic, but I tried to get around the controversy to the nuts and bolts of the decision.  I was interested in how the courts evaluated which research should be included, and how.

Apparently the standard is as follows:

…while the State cannot compel an individual simply to speak the State’s ideological message, it can use its regulatory authority to require a physician to provide truthful, non-misleading information relevant to a patient’s decision to have an abortion, even if that information might also encourage the patient to choose childbirth over abortion.”  Rounds, 530 F.3d at 734-35; accord Tex. Med. Providers Performing Abortion Servs. v. Lakey, 667 F.3d 570, 576-77 (5th Cir. 2012).

So in order to be illegal, disclosures must be proven to be “either untruthful, misleading or not relevant to the patient’s decision to have an abortion.”


It was the misleading part that the challenge focused on.  The APA has apparently endorsed the idea that any link between abortion and suicide is NOT causal.  The theory is that those with pre-existing mental health conditions are both more likely to have unplanned pregnancies and to later commit suicide. It was interesting to read the huge debate over whether the phrase “increased risk” implied causation (the court ruled causation was not implicit in this statement).
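
To see how a non-causal link like that can still show up as an “increased risk” in the data, here’s a minimal simulation sketch. Every probability below is invented for illustration: a pre-existing mental health condition raises both the chance of having an abortion and the risk of suicide, while abortion itself has zero direct effect by construction.

```python
import random

random.seed(0)

n = 1_000_000
suicides = {True: 0, False: 0}
counts = {True: 0, False: 0}

for _ in range(n):
    # Latent confounder: a pre-existing mental health condition (made-up 10% base rate)
    condition = random.random() < 0.10
    # The condition raises the chance of an unplanned pregnancy ending in abortion...
    had_abortion = random.random() < (0.30 if condition else 0.15)
    # ...and independently raises suicide risk. Abortion has NO direct effect here.
    suicide = random.random() < (0.0050 if condition else 0.0005)
    counts[had_abortion] += 1
    suicides[had_abortion] += suicide

rate_yes = suicides[True] / counts[True]
rate_no = suicides[False] / counts[False]
print(f"suicide rate, abortion group:    {rate_yes:.5f}")
print(f"suicide rate, no-abortion group: {rate_no:.5f}")
print(f"relative risk: {rate_yes / rate_no:.2f}")  # ~1.5, with zero causal effect
```

The relative risk comes out around 1.5 even though, by construction, abortion causes nothing at all, which is exactly the confounding pattern the APA describes.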


Ultimately, it was decided that this statement would be allowed as part of informed consent.  The conclusion was an interesting study in what the courts will and will not vouch for:

We acknowledge that these studies, like the studies relied upon by the State and Intervenors, have strengths as well as weaknesses. Like all studies on the topic, they must make use of imperfect data that typically was collected for entirely different purposes, and they must attempt to glean some insight through the application of sophisticated statistical techniques and informed assumptions. While the studies all agree that the relative risk of suicide is higher among women who abort compared to women who give birth or do not become pregnant, they diverge as to the extent to which other underlying factors account for that link.  We express no opinion as to whether some of the studies are more reliable than others; instead, we hold only that the state legislature, rather than a federal court, is in the best position to weigh the divergent results and come to a conclusion about the best way to protect its populace.  So long as the means chosen by the state does not impose an unconstitutional burden on women seeking abortions or their physicians, we have no basis to interfere.

I did find it mildly worrisome that the presumption is that the state legislators are the ones evaluating the research.  On the other hand, it makes sense to put the onus there rather than on the courts.  It’s good to know what the legal standards are though….it’s not always about the science.

Political ages…mean vs median?

I just found out The Economist has a daily chart feature!

Today’s graph about age of population vs age of cabinet ministers is pretty fascinating:

It did leave me with a few questions though…..who did they count as cabinet ministers?  I don’t know enough about the governments in these countries to know what that equates to.  Also, why average vs median?

I initially thought this chart might have been representing Congress, not the Cabinet.  I took a look at my old friend the Congressional Research Service Report and discovered that at the beginning of the 112th Congress in 2011, the average age was 57.7 years, which would make this chart about right.  I had to dig a bit further to get the ages of the Cabinet, but it turns out their average age is 59.75.  I was surprised the data points would be so close together actually….especially since that 57.7 was for Jan 2011, so it’s actually 59.2 or so now.

In case you’re curious, 7 members of the cabinet are under 60.  The youngest is Shaun Donovan (46), Department of Housing and Urban Development.  The oldest is Leon Panetta (74), Department of Defense.  Panetta is actually the only member over 70.  Half of them are in their 60s, 5 in their 50s, and 2 in their 40s.

I felt a little ashamed that I could only have given name/position for 5 of them before looking them all up.  That’s not great, especially when you realize I’m counting Biden.  Still, I comforted myself with the fact that I bet that beats a very large percentage of Americans.

A quick look for other data suggests that the median age of populations is the more commonly reported value.  The median age of the cabinet was actually 61, in case you’re curious.
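
To see why the average vs median choice matters, here’s a quick sketch. The ages below are illustrative stand-ins I constructed to match the summary stats above (mean 59.75, median 61, youngest 46, oldest 74), not the actual roster:

```python
from statistics import mean, median

# Illustrative ages only: constructed to match the summary stats above,
# NOT the actual 2012 cabinet roster.
cabinet_ages = [46, 49, 51, 53, 55, 57, 59, 60,
                62, 63, 64, 64, 65, 66, 68, 74]

print(mean(cabinet_ages), median(cabinet_ages))  # 59.75 61.0

# A single outlier drags the mean but leaves the median alone: age the
# oldest member by 20 years and watch what happens.
older = sorted(cabinet_ages[:-1] + [94])
print(mean(older), median(older))  # 61.0 61.0
```

That resistance to outliers is presumably why the median is the more commonly reported figure for populations.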

Political Arithmetic – Voter ID laws

Update: Link fixed

Last week I put up a post slamming an infographic on fair market rent between states.  I was interested in the AVI’s response, which ended with “These are advocacy numbers.  Not the same as actual reality.”

Advocacy and other political skewings of data are one of those things that shouldn’t bother me, but do.

I read headlines knowing that I’m going to be driven nuts by the presumptions and projections, and yet I read them anyway.  It’s a bad habit.

All that being said, I truly enjoyed Nate Silver’s examination of the real effect voter ID laws might have on voter turnout in various states.

He attempts to cut through all the partisan hoopla and do a one-person point-counterpoint.  An example:

But some implied that Democratic-leaning voting groups, especially African-Americans and Hispanics, were more likely to be affected. Others found that educational attainment was the key variable in predicting whom these laws might disenfranchise, with race being of secondary importance. If that’s true, some white voters without college degrees could also be affected, and they tend to vote Republican.

He also makes a fascinating point about the cult of statistical significance:

Statistical significance, however, is a funny concept. It has mostly to do with the volume of data that you have, and the sampling error that this introduces. Effects that may be of little practical significance can be statistically significant if you have tons and tons of data. Conversely, findings that have some substantive, real-world impact may not be deemed statistically significant, if the data is sparse or noisy.
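
Silver’s point is easy to see with a toy example (every number below is invented): feed the identical one-point turnout gap through a two-proportion z-test at two different sample sizes and you get opposite verdicts.

```python
from math import sqrt, erfc

def two_sided_p(p1, p2, n):
    """Two-sided p-value for a two-proportion z-test with n per group."""
    pooled = (p1 + p2) / 2
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))

# Hypothetical: turnout of 60% vs 59% after a voter ID law.
for n in (1_000, 100_000):
    p = two_sided_p(0.60, 0.59, n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n:>7,} per group: p = {p:.4g} ({verdict})")
```

Same effect, same practical meaning; only the volume of data changed.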

On the whole, he concludes it will swing in the Republican direction for this election, but reminds everyone:

One last thing to consider: although I do think these laws will have some detrimental effect on Democratic turnout, it is unlikely to be as large as some Democrats fear or as some news media reports imply — and they can also serve as a rallying point for the party bases. So although the direct effects of these laws are likely negative for Democrats, it wouldn’t take that much in terms of increased base voter engagement — and increased voter conscientiousness about their registration status — to mitigate them. 

The whole article is long but a great read about how to assess policy changes if you’re trying to get to the truth, rather than just prove a political point.

More thoughts on the soda ban

Yesterday I found out the soda ban is potentially hitting a bit closer to home.

For those of you not familiar with Cambridge, MA, it’s affectionately known as “The People’s Republic” (and even has a communist bar of the same name).  Thus the proposed ban was pretty unsurprising.

Coincidentally, Ben Goldacre put up a new post yesterday publicizing a paper he coauthored to try to push governments in the UK to actually conduct trials of their policies before implementing them.

Best quote:

We also show that policy people need to have a little humility, and accept that they don’t necessarily know if their great new idea really will achieve its stated objectives. We do this using examples of policies which should have been great in principle, but turned out to be actively harmful when they were finally tested.

Contrast this to the Mayor of Cambridge’s statement on the soda ban:

“As much free will as you can have in a society is a good idea,” Davis said Tuesday. “… But with a public health issue, you look at those things that are dangerous for people, that need government regulation.”

Is no one interested in finding out if this idea will actually work before implementing it?  The leading researchers in the field seem to think it won’t.   I tend to agree with them.  You know what though?  I’m game.  Let’s put it to a randomized trial.  There are those who think the constitutionality of this should be worked out first, but I think a well-run trial could open the door for an opt-in system rather than a mandatory one.
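
If someone did run that trial, the first sanity check would be a power calculation.  Here’s a rough sketch using the standard two-proportion formula; the effect size, baseline rate, alpha, and power are all assumptions I made up for illustration:

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per arm to compare two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p_control + p_treatment) / 2
    variance = 2 * p_bar * (1 - p_bar)
    return (z_alpha + z_power) ** 2 * variance / (p_control - p_treatment) ** 2

# Hypothetical: does a size cap cut the share of customers buying more
# than 16 oz of soda from 30% to 25%?  (Both numbers invented.)
print(round(n_per_arm(0.30, 0.25)))  # ~1,252 customers per arm
```

Randomizing by store or by neighborhood instead of by customer would push that number up considerably (cluster designs need more people), but even the crude version tells you whether a trial is feasible before anyone legislates.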

Hey, maybe if politicians stayed a little more open to testing their ideas, you wouldn’t wind up with cartoons like this one:

Soda bans and research misapplications

When I first read about Mayor Bloomberg’s proposed soda restrictions for NYC, I immediately thought of this post where I mentioned the utter failure of removing vending machines from schools.  Thus, I was extremely skeptical that this ban would work at all, and it seemed quite an intrusion into private business for what I saw as an untested theory.

To be honest, I didn’t put much more thought into it.  I saw the studies about people eating more from large containers floating around, but I dismissed them on the basis that (like with the vending machine theory) they were skipping a crucial step.  Even if this ban got people to drink less soda, that doesn’t actually prove it would reduce obesity.  You have to prove every step in the chain to prove the conclusion.

A few days ago, the authors of the “bigger containers cause people to eat more” study published their own rebuttal to the ban.  In an excellent example of the clash of politics and research, they claim that applying their work on portion sizes in this manner is a misreading of the body of their work.  They highlight that the larger containers study was done by assigning portion sizes at random, to subjects who had no expectations as to what they would be getting.  In their words, the ban is a problem because (highlight mine):

Banning larger sizes is a visible and controversial idea. If it fails, no one will trust that the next big — and perhaps better — idea will work, because “Look what happened in New York City.” It poisons the water for ideas that may have more potential.

Second, 150 years of research in food economics tells us that people get what they want. Someone who buys a 32-ounce soft drink wants a 32-ounce soft drink. He or she will go to a place that offers fountain refills, or buy two. If the people who want them don’t have much money, they might cut back on fruits or vegetables or a bit of their family meal budget.

In essence, by removing the random element and forcibly replacing what people want with something they don’t, you will frequently get the worst possible effect: rebellion.

Mindless eating can be a problem, but rebellious eating is even worse.

When the researchers you’re trying to use to back yourself up start protesting your policies, you know you got it all wrong.

Quote of the week and more recall coverage

Statistics are like bikinis.  What they reveal is suggestive, but what they conceal is vital.  ~Aaron Levenstein


I’ve been reading more of the Scott Walker recall election coverage, and was struck by the frequent references to Walker being “the first governor to survive a recall election”.  Of course this made me curious how many governors had been recalled.  I remembered the California governor a few years back, so I imagined it would be at least a dozen or so.

Nope.

It’s two.  Lynn Frazier from North Dakota in 1921, and Gray Davis from California in 2003.

I had to laugh at my own sampling bias.  My assumptions were pretty understandable….I’ve been of voting age since 1999, and in that time this has happened twice.  Therefore it was reasonable to assume this happened at least occasionally.   I figured about once every 10 years, which would be 23 or 24 in American history.  I was pretty sure not every state had a recall option, so I halved it.  12 felt good.

This is the problem when data leaves out key points….it relies on our own assumptions to fill in the details.  Engineers are normally trained to make their assumptions explicit when estimating, as in the famous Fermi problems.  However, even the most carefully thought through assumptions are still guesses.
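
For what it’s worth, here’s my back-of-envelope estimate rewritten with the assumptions explicit; every input is a guess, which is exactly the point:

```python
# A Fermi estimate of gubernatorial recalls, each assumption spelled out.
years_of_us_history = 236             # guess: roughly 1776 to 2012
recalls_per_decade = 1.0              # guess, anchored on the 2 recalls since 1999
states_with_recall_fraction = 0.5     # guess: "about half the states"

estimate = (years_of_us_history / 10) * recalls_per_decade * states_with_recall_fraction
print(round(estimate))  # ~12; the actual answer is 2

# The broken input is recalls_per_decade: it was extrapolated from a
# 13-year window that happened to contain both recalls in US history.
```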

That’s why it’s important to remember the quote above: what you’re shown is important, but it’s not half as interesting as what’s hidden.