The SCOTUS and perception of statistics

Finally got internet in the new house.  Can’t complain too much… the guy finished running the wire to our house even though a thunderstorm started.  Clearly that man was getting paid by the job, not by the hour.

Anyway, I had an interesting chat with my father (a lawyer) after our closing on Thursday about the Supreme Court ruling on health care.  He mentioned that a coworker was griping that the Supreme Court meant nothing anymore because the justices only voted along party lines.  My father, being the stickler for data accuracy that he is, quickly dissented.

He looked it up and found that nearly half of the decisions last year were unanimous.  For this year, 7-2 votes were the least common (8%), followed by 8-1 (11%), 6-3 (17%), and 5-4 (20%).  So overall, the justices agree nearly as often as they disagree, and they are completely divided on only about 1 in 5 cases.  Kennedy and Roberts voted with the majority over 90% of the time; Ginsburg was the least likely to vote in the majority.  Lots of interesting stats to be run on this, another good breakdown of some of the data is here.

It seems the perception that every vote is political is heavily skewed by the very few court cases most of us hear about each year.  I would wager even highly political citizens probably couldn’t rattle off more than a handful.  Looking at the 5-4 decisions alone, about two-thirds of them split along ideological lines, which works out to roughly 10 cases for 2011.
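That "roughly 10 cases" figure is easy to sanity-check. As a sketch only: the post doesn't state the total number of decisions, so the ~75 below is my assumption (the Court typically decides on that order of cases per term), chosen to match the 20% and two-thirds figures above.

```python
# Back-of-the-envelope check of the 5-4 numbers above.
# ASSUMPTION: roughly 75 decided cases in the term (not stated in the post).
total_cases = 75
five_four_share = 0.20       # ~20% of decisions were 5-4 splits
ideological_share = 2 / 3    # ~2/3 of those split along ideological lines

five_four_cases = total_cases * five_four_share            # ~15 cases
ideological_cases = five_four_cases * ideological_share    # ~10 cases
print(round(five_four_cases), round(ideological_cases))
```

Under that assumption, only around 10 of ~75 decisions, maybe 13%, actually look like the party-line votes people imagine.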

This kind of skewing of perception is common when a few high-profile events dramatically overshadow regular operations.  Thanks, Dad, for pointing that out.


Hate’s a strong word.  I get that.  I also get that data and survey topics are not usually the sort of thing that inspires strong hatred, but here we are.

In this post I mentioned my annoyance at perception/prediction polls.  The one I referenced was based on women who didn’t change their last names and their level of marital commitment.  Commenter Assistant Village Idiot mentioned another example, which I also liked: “‘Do you think earthquakes are more likely now because of climate change?’ What we think has nothing to do with anything. The earthquakes will happen according to their own rules.”

In writing that post, however, I forgot to mention that the same study included an even worse piece of data.  As a rebuttal to the “Midwestern college kids don’t think non-name-changing women are committed” finding, they included a remark that women who didn’t plan on changing their names didn’t feel less committed.


I would really love it if someone could tell me if there’s a proper name for this sort of thing, but I always think of it as “the embarrassing question debacle”.  Basically, researchers ask people questions with a potentially embarrassing answer, and then report it as meaningful when people do not answer embarrassingly.

There are only two types of people I have ever heard admit they went into their marriages less than completely committed:

  1. Those who have been married successfully for quite some time who are now comfortable in admitting they were totally naive when they walked down the aisle.
  2. Those who are already divorced and reflecting on what went wrong.
Level of commitment is best assessed in retrospect, and I look with great skepticism at anyone who says they can gauge it before the fact.  
Getting at the reasons people do things can be brutal.  Your only source for the data also has the biggest motivation to conceal it from you.  Some people are actually doing things for good reasons, some just want to look like they are, and some are lying to themselves.  Unless a study at least attempts to account for all three scenarios, I would hold all answers suspect.