Last week, the Assistant Village Idiot forwarded me a new paper called “A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity“. It’s behind a ($40) paywall, but Reason magazine has an interesting breakdown of the study here, and the AVI does his take here. I had a few thoughts about how to think about a study like this, especially if you don’t have access to the paper.
So first, what did the researchers look at and what did they find? Using Mechanical Turk, the researchers had subjects read articles that talked about either labor exploitation in other countries or the effects of climate change. They found that personal feelings of guilt about those topics predicted greater outrage at a third-party target and a greater desire to punish that target, and that getting a chance to express that outrage decreased guilt and increased feelings of personal morality. The conclusion being reported is (as the Reason.com headline says) “Moral outrage is self-serving” and “Perpetually raging about the world’s injustices? You’re probably overcompensating.”
So that’s what’s being reported. So how do we think through this when we can’t see the paper? Here are 5 things I’d recommend:
- Know what you don’t know about sample sizes and effect sizes Neither the abstract nor the write-ups I’ve seen mention how large the reported effects were or how many people participated. Since it was a Mechanical Turk study I am assuming the sample size was reasonable, but the effect size is still unknown. This means we don’t know if it’s one of those unreasonably large effect sizes that should alarm you a bit or one of those small effect sizes that is statistically but not practically significant. Given that the reported effect size heavily influences the false report probability, this is relevant.
- Remember the replication possibilities Even if you think a study found something quite plausible, it’s important to remember that fewer than half of psychological studies end up replicating exactly as the first paper reported. There are lots of possible outcomes when a study is replicated, and even if the paper does replicate it may end up with lots of caveats that didn’t show up in the first paper.
- Tweak a few words and see if your feelings change Particularly when it comes to political beliefs, it’s important to remember that context matters. This particular study calls to mind liberal issues, but do we think it applies to conservative issues too? Everyone has something that gets them upset, and it’s interesting to think through how that would apply to what matters to us. When the Reason.com commenters read the study article, some of them quickly pointed out that of course their own personal moral outrage was self-serving. Free speech advocates have always been forthright that they don’t defend pornographers and offensive people because they like those people, but because they want to preserve free speech rights for themselves and others. Self-serving moral outrage isn’t so bad when you put it that way.
- Assume the findings will get more generic In addition to the word tweaks in point #3, it’s likely that subsequent replications will tone down the findings. As I covered in my Women Ovulation and Voting post, 3 studies took findings from “women change their vote and values based on their menstrual cycle” to “women may exhibit some variation in face preference based on menstrual cycle”. This happened because some parts of the initial study failed to replicate, and some caveats got added. Every study that’s done will draw another line around the conclusions and narrow their scope.
- Remember the limitations you’re not seeing One of the most important parts of any paper is where the authors discuss the limitations of their own work. When you can’t read the paper, you can’t see what they thought their own limitations were. Additionally, it’s hard to tell if there were any interesting non-findings that didn’t get reported. The limitations that exist from the get-go give a useful indication of what might come up in the future.
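The false report probability mentioned in the first point can be made concrete with Ioannidis’s positive predictive value formula, PPV = (1 − β)R / ((1 − β)R + α), where R is the pre-study odds that the hypothesis is true. Here’s a minimal sketch (the prior odds and power values below are illustrative assumptions, not numbers from this paper, and I’m leaving out the bias term from the full formula):

```python
# Positive predictive value (PPV) of a claimed finding, following
# Ioannidis (2005): PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
# where R is the pre-study odds that the hypothesis is true.

def ppv(prior_odds, power, alpha=0.05):
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis:
print(round(ppv(prior_odds=1.0, power=0.8), 2))   # 0.94

# An underpowered, exploratory study of a long-shot hypothesis:
print(round(ppv(prior_odds=0.1, power=0.2), 2))   # 0.29
```

The punchline: with a small true effect (which forces low power) and a speculative hypothesis, most “significant” findings are false, which is exactly why an unreported effect size should make you nervous.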
So in other words….practice reasonable skepticism. Saves time, and the fee to read the paper.
Right after the election, most people in America saw or heard about this Tweet from then President elect Trump:
I had thought this was just random bluster (on Twitter????? Never!), but then someone sent me this article. Apparently that comment was based on an actual study, and the study author is now giving interviews. It turns out he’s pretty unhappy with everyone….not just with Trump, but also with Trump’s opponents who claim that no non-citizens voted. So what did his study actually say? Let’s take a look!
Some background: The paper this is based on is called “Do Non-Citizens Vote in US Elections” by Richman et al. and was published back in 2014. It took data from a YouGov survey and found that 6.4% of non-citizens voted in 2008 and 2.2% voted in 2010. Non-citizenship status was based on self report, as was voting status, though the demographic data of participants was checked against that of their stated voting district to make sure the numbers at least made sense.
So what stood out here? A few things:
- The sample size While the initial survey of voters was pretty large (over 80,000 between the two years) the number of those identifying themselves as non-citizens was rather low: 339 and 489 for the two years. There were a total of 48 people who stated that they were not citizens and that they voted. As a reference, it seems there are about 20 million non-citizens currently residing in the US.
- People didn’t necessarily know they were voting illegally One of the interesting points made in the study was that some of this voting may be unintentional. If you are not a citizen, you are never allowed to vote in national elections, even if you are a permanent resident/have a green card. The study authors wondered if some people didn’t know this, so they analyzed the education levels of those non-citizens who voted. It turns out non-citizens with less than a high school degree are more likely to vote than those with more education. This is actually the opposite of the trend seen among citizens AND naturalized citizens, suggesting that some of those voters have no idea that what they’re doing is illegal.
- Voter ID checks are less effective than you might think If your first question upon reading #2 was “how could you just illegally vote and not know it?”, you may be presuming your local polling place puts a little more into screening people than it does. According to the participants in this study, not only were non-citizens allowed to register and cast a vote, but a decent number of them actually passed an ID check first. About a quarter of non-citizen voters said they were asked for ID prior to voting, and 2/3rds of those said they were then allowed to vote. I suspect the issue is that most polling places don’t actually have much to check their information against. Researching citizenship status would take time and money that many places just don’t have. Another interesting twist to this is that social desirability bias may kick in for those who don’t know voting is illegal. Voting is one of those things more people say they do than actually do, so if someone didn’t know they couldn’t legally vote, they’d be more likely to say they voted even if they didn’t. Trying to make ourselves look good is a universal quality.
- Most of the illegal voters were white Non-citizen voters actually tracked pretty closely with their proportion of the population, and about 44% of them were white. The next most common demographic was Hispanic at 30%, then black, then Asian. In terms of proportion, the same percent of white non-citizens voted as Hispanic non-citizens.
- Non-citizens are unlikely to sway a national election, but could sway state level elections When Trump originally referenced this study, he specifically was using it to discuss national popular vote results. In the Wired article, they do the math and find that even if all of the numbers in the study bear out it would not sway the national popular vote. However, the original study actually drilled down to a state level and found that individual states could have their results changed by non-citizen voters. North Carolina and Florida would both have been within the realm of mathematical possibility for the 2008 election, and for state level races the math is also there.
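To put the sample-size concern in the first point in perspective, here’s a quick sketch of the statistical uncertainty around an estimate built on 48 self-reported voters out of 828 self-reported non-citizens. (This pools both survey years and ignores the survey weighting the authors actually used, so treat it as illustrative only.)

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 48 self-reported non-citizen voters out of 339 + 489 = 828 respondents
lo, hi = wilson_interval(48, 828)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 4.4% to 7.6%
```

Even before you worry about selection bias or response error, a subsample this small leaves a wide range of plausible rates, and extrapolating any of those rates to 20 million people multiplies that uncertainty accordingly.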
Now, how much confidence you place in this study is up to you. Given the small sample size, things like selection bias and non-response bias definitely come into play. That’s true any time you’re trying to extrapolate the behavior of 20 million people off of the behavior of a few hundred. It is important to note that the study authors did a LOT of due diligence attempting to verify and reality-check the numbers they got, but it’s never possible to control for everything.
If you do take this study seriously, it’s interesting to note what the authors actually thought the most effective counter-measure against non-citizen voting would be: education. Since they found that low education levels were correlated with increased voting and that poll workers rarely turned people away, they came away from this study with the suggestion that simply doing a better job of notifying people of voting rules might be just as effective (and cheaper!) as attempting to verify citizenship. Ultimately it appears that letting individual states decide on their own strategies would also be more effective than anything on the federal level, as different states face different challenges. Things to ponder.
During my recent series on “Why Most Published Research Findings Are False“, I mentioned a concept called “study power” quite a few times. I haven’t talked about study power much on this blog, so I thought I’d give a quick primer for those who weren’t familiar with the term. If you’re looking for a more in depth primer try this one here, but if you’re just looking for a few quick hits, I gotcha covered:
- It’s sort of the flip side of the p-value We’ve discussed the p-value and how it’s based on the alpha value before, and study power is actually based on a value called beta. If alpha can be thought of as the chance of committing a Type 1 error (false positive), then beta is the chance of committing a Type 2 error (false negative). Study power is actually 1 – beta, so if someone says study power is .8, that means the beta was .2. Setting the alpha and beta values is up to the discretion of the researcher….their values are more about risk tolerance than mathematical absolutes.
- The calculation is not simple, but what it’s based on is important Calculating study power is not easy math, but if you’re desperately curious try this explanation. For most people though, the important part to remember is that it’s based on 3 things: the alpha you use, the effect size you’re looking for, and your sample size. These three all shift based on the values of the others. As an example, imagine you were trying to figure out if a coin was weighted or not. The more confident you want to be in your answer (alpha), the more times you have to flip it (sample size). However, if the coin is REALLY unfairly weighted (effect size), you’ll need fewer flips to figure that out. Basically the unfairness of a coin weighted to 80-20 will be easier to spot than a coin weighted to 55-45.
- It is weirdly underused As we saw in the “Why Most Published Research Findings Are False” series, adequate study power does more than prevent false negatives. It can help blunt the impact of bias and the effect of multiple teams, and it helps everyone else trust your research. So why do so few researchers put much thought into it, so few science articles mention it, and so few people in general comment on it? I’m not sure, but I think it’s simply because the specter of false negatives is not as scary or concerning as that of false positives. Regardless, you just won’t see it mentioned as often as other statistical issues. Poor study power.
- It can make negative (aka “no difference”) studies less trustworthy With all the current attention on false positives and failures to replicate, it’s not terribly surprising that false negatives have received less attention…..but they are still an issue. Even though a study power calculation can tell you how big an effect size you were able to detect, a surprising number of researchers don’t report these calculations. This means a lot of “negative finding” trials could also be suspect. In this breakdown of study power, Statistics Done Wrong author Alex Reinhart cites studies that found up to 84% of studies don’t have sufficient power to detect even a 25% difference in primary outcomes. An ASCO review found that 47% of oncology trials didn’t have sufficient power to detect all but the largest effect sizes. That’s not nothing.
- It’s possible to overdo it While underpowered studies are clearly an issue, it’s good to remember that overpowered studies can be a problem too. They waste resources, but can also detect effect sizes so small as to be clinically meaningless.
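The coin-flip example in point #2 can be made concrete. Here’s a back-of-the-envelope sketch using the standard normal-approximation formula for a two-sided one-sample proportion test at alpha = 0.05 and power = 0.8 (a rough approximation, not a substitute for a proper power analysis):

```python
import math

def flips_needed(p_weighted, p_fair=0.5, z_alpha=1.959964, z_beta=0.841621):
    """Approximate flips needed to detect a weighted coin.

    Normal-approximation sample size for a two-sided one-sample
    proportion test; z_alpha corresponds to alpha = 0.05 and
    z_beta to power = 0.8.
    """
    numerator = (z_alpha * math.sqrt(p_fair * (1 - p_fair))
                 + z_beta * math.sqrt(p_weighted * (1 - p_weighted)))
    return math.ceil((numerator / abs(p_weighted - p_fair)) ** 2)

print(flips_needed(0.55))  # 783 flips to spot a 55-45 coin
print(flips_needed(0.80))  # 20 flips to spot an 80-20 coin
```

Same alpha, same power, but the big effect size needs roughly 1/40th the sample, which is the whole alpha/effect size/sample size trade-off in one line.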
Okay, so there you have it! Study power may not get all the attention the p-value does, but it’s still a worthwhile trick to know about.
Over the years I’ve spilled a lot of (metaphorical) ink on how to read science on the internet. At this point almost everyone who encounters me frequently IRL has heard my spiel, and few things give me greater pleasure than hearing someone say “you changed the way I read about science”. While I’ve written quite a few longer pieces on the topic, recently I’ve been thinking a lot about what my “quick hits” list would be. If people could only change a few things in the way they read science stories, what would I put on the list?
Recently, a story hit the news about how you might live longer if your doctor is a woman and it got me thinking. As someone who has worked in hospitals for over a decade now, I had a strong reaction to this headline. I have to admit, my mind started whirring ahead of my reading, but I took the chance to observe what questions I ask myself when I need to pump the brakes. Here they are:
- What would you think if the study had said the opposite of what it says? As I admitted up front, when I first heard this study, I reacted. Before I’d even made it to the text of the article I had theories forming. The first thing I did to slow myself down was to think “wait, how would you react if the headline said the opposite? What if the study found that patients of men did better?” When I ran through those thoughts, I realized they were basically the same theories. Well, not the same…more like mirror image, but they led to the same conclusion. That’s when I realized I wasn’t thinking through the study and its implications, I was trying to make the study fit what I already believed. I admit this because I used this knowledge to mentally hang a big “PROCEED WITH CAUTION” sign on the whole topic. To note, it doesn’t matter what my opinion was here, what matters is that it was strong enough to muddy my thoughts.
- Is the study linked to? My first reaction (see #1) kicked in before I had even finished the headline, so unfortunately “is this real” comes second. In my defense, I was already seeing the headlines on NPR and such, but of course that doesn’t always mean there’s a real study. Anyway, in the case of this study, there is a real identified study (with a link!) in JAMA. As a note, even if the study is real, I distrust any news coverage that doesn’t provide a link to the source. In 2017, that’s completely inexcusable.
- Do all the words in the headline mean what you think they mean? Okay, I’ve covered headlines here, but it bears repeating: headlines are a marketing tool. This study appeared under several headlines such as “You Might Live Longer if Your Doctor is a Woman“. What’s important to note here is that by “live longer” they meant “slightly lower 30 day mortality after discharge from the hospital”, by doctor they meant “hospitalist”, and by “you” they meant “people over 65 who have been hospitalized”. Primary care doctors and specialists were not covered by this study.
- What’s the sample size and effect size? Okay, once we have the definitions out of the way, now we can start with the numbers. For this study, the sample size was fantastic….about 1.5 million hospital admissions. The effect size however….not so much. For patients treated by female physicians vs male, the 30 day mortality dropped from 11.49% to 11.07%. That’s not nothing (about a 4% relative drop), but mathematically speaking it’s really hard to reliably measure effect sizes of under 5% (Corollary #2) even when you have a huge sample size. To their credit, the study authors do include the “number needed to treat”, and note that you’d have to have 233 patients treated by female physicians rather than male physicians in order to save one life. That’s a better stat than the one this article tried to use: “Put another way – if you were to replace all the male doctors in the study with women, 32,000 fewer people would die a year.” I am going to bet that wouldn’t actually work out that way. Throw “of equal quality” in there next time, okay?
- Is this finding the first of its kind? As I covered recently in my series on “Why Most Published Research Findings Are False“, first-of-their-kind exploratory studies are some of the least reliable types of research we have. Even when they have good sample sizes, they should be taken with a massive grain of salt. As a reference, Ioannidis puts the chances that a positive finding is true for a study like this at around 20%. Even if subsequent research proves the hypothesis, it’s likely that the effect size will diminish considerably in subsequent research. For a study that starts off with a small effect size, that could be a pretty big hit. It’s not bad to continue researching the question, but drawing conclusions or changing practice over one paper is a dangerous game, especially when the study was observational.
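The effect-size arithmetic in point #4 is worth seeing laid out. A quick sketch of the standard measures (the paper’s 233 figure comes from its adjusted models, so this raw calculation lands slightly higher):

```python
# Effect-size arithmetic for the mortality figures in point #4.
male_mortality = 0.1149    # 30-day mortality, male physicians
female_mortality = 0.1107  # 30-day mortality, female physicians

# Absolute risk reduction: the raw difference in rates.
arr = male_mortality - female_mortality   # 0.42 percentage points

# Relative risk reduction: that difference as a share of the baseline.
rrr = arr / male_mortality                # about 3.7%

# Number needed to treat: patients per one additional life saved.
nnt = 1 / arr                             # about 238

print(f"ARR={arr:.2%}  RRR={rrr:.1%}  NNT={nnt:.0f}")
```

Note how the same finding can be framed as a 0.42 point drop, a ~4% relative drop, or one life per ~238 patients; which framing a headline picks tells you a lot about what it wants you to feel.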
So after all this, do I believe this study? Well, maybe. It’s not implausible that personal characteristics of doctors can affect patient care. It’s also very likely that the more data we have, the more we’ll find associations like this. However, it’s important to remember that proving causality is a long and arduous process, and that reacting to new findings with “well it’s probably more complicated than that” is an answer that’s not often wrong.
Fake news is all the rage these days. We’re supposed to hate it, to loathe it, to want it forever banned from our Facebook feeds, and possibly give it credit for/blame it for the election results. Now, given my chosen blog topics and my incessant preaching of internet skepticism, you would think I would be all in on hating fake news.
Nah, too easy.
Instead I’m going to give you 5 reasons why I think the hate for fake news is overblown. Ready? Here we go!
- Fake news doesn’t have a real definition Okay, yeah I know. Fake news is clear. Fake news is saying that Hillary Clinton ran a child prostitution ring out of a DC pizza place. That’s pretty darn clear, right? Well, is it? The problem is that while there are a few stories that are clearly “fake news”, other things aren’t so clear. One man’s “fake news” is another man’s “clear satire”, and one woman’s “fake news” is another’s “blind item”. Much of the controversy around fake news seems to center around the intent of the story (ie to deceive or make a particular candidate look bad), but that intention is quite often a little opaque. No matter what standard you set, someone will find a way to muddy the water.
- Fake news is just one gullible journalist away from being a “hoax” Jumping off point #1, let’s remember that even if Facebook bans “fake news” you are still going to see fake news in your feed. Why? Because sometimes journalists buy it. See, if you or I believe a fake story, we “fell for fake news”. If a journalist with an established audience does it, it’s a “hoax”. Remember Jackie from Rolling Stone? Dan Rather and the Killian documents? Or Yasmin Seweid from just last month? All were examples of established journalists getting duped by liars and reporting those lies as news. You don’t always even need a person to spearhead the whole thing. For example, not too long ago a research study made headlines because it claimed eating ice cream for breakfast made you smarter. Now skeptical readers (ie all of you) will doubt that the finding was sound, but you’d be reasonable in assuming the study at least existed. Unfortunately your faith would be unfounded, as Business Insider pointed out that no one reporting on this had ever seen the study. Every article pointed to an article in the Telegraph, which pointed to a website that claimed the study had been done, but the real study was not locatable. It may still be out there somewhere, but it was ludicrously irresponsible of so many news outlets to publish it without even making sure it existed.
- Fake news can sometimes be real news In point #1, I mentioned that it was hard to actually put a real definition on “fake news”. If one had to try, however, you’d probably say something like “a maliciously false story by a disreputable website or news group that attempts to discredit someone they don’t like”. That’s not a bad definition, but it is also how nearly every politician initially categorizes every bad story about themselves. Take John Edwards for example, whose affair was exposed by the National Enquirer in 2007. At the time, his attorney said “The innuendos and lies that have appeared on the internet (sic) and in the National Enquirer concerning John Edwards are not true, completely unfounded and ridiculous.” It was fake news, until it wasn’t. Figuring out what’s fake and who’s really hiding something isn’t always as easy as it looks.
- Fake news probably doesn’t change minds Now fake news obviously can be a huge problem. Libel is against the law for a reason, and no one should knowingly make a false claim about someone else. It hurts not just the target, but can hurt innocent bystanders as well. But aside from that, people get concerned that these stories are turning people against those they would otherwise be voting for. Interestingly, there’s not a lot of good evidence for this. While the research is still new, the initial results suggest that people who believe bad things about a political candidate probably already believed those things, and that seeing the other side actually makes them more adamant about what they already believed. In other words, fake news is more a reflection of pre-existing beliefs than a creator of those beliefs.
- Fake news might make us all more cautious There’s an interesting paradox in life that sometimes by making things safer you actually make them more dangerous. The classic example is roads: the “safer” and more segregated (transportation mode-wise) roads are, the more people get into accidents. In areas where there is less structure, there are fewer accidents. Sometimes a little paranoia can go a long way. I think a similar effect could be caused by fake news. The more we suspect someone might be lying to us, the more we’ll scrutinize what we see. If Facebook starts promising that they’ve “screened” fake news out, it gives everyone an excuse to stop approaching the news with skepticism. That’s a bad move. While I reiterate that I never support libel, lying or “hoaxes”, I do support constant vigilance against “credible” news sources. With the glut of media we are exposed to, this is a must.
To repeat for the third time, I don’t actually support fake news. Mostly this was just an exercise in contrarianism. But sometimes bad things can have upsides, and sometimes paranoia is just good sense.
Well howdy! Only 11 days until Christmas, and I have in no way, shape, or form finished my shopping. I’m only ever good at coming up with gift ideas when they’re not actually needed, so I thought now was the perfect time to make a list of things you could get a statistician/data person in your life, if you were so inclined. Of course any of the books on my reading list here are pretty good, but if you’re looking for something more specific, read on!
- The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century This was my December book, and it was phenomenal. An amazing history of statistical thought in science and the personalities that drove it. If you ever wanted to know who the “student” was in “student’s t distribution”, this is the book for you. Caveat: If you don’t understand that previous sentence, I’d skip this one.
- Statistical dinosaurs. You had me at “Chisquareatops“. Or maybe “Stegonormalus“.
- A pound of dice. Or cards. Or bingo balls. Because you never know when you may have to illustrate probability theory on the fly. (Bonus: these “heroes of science” playing cards are extra awesome)
- For the music lover Prints of pop song infographics. Data visualization taken to the next level.
- Art supplies Maybe an x-y axis stamp or grid post it notes?
- 2016 year in review The best infographics of 2016 or the best mathematics writing of 2016. The first one is out already, but you’ll have to wait until March for that second one.
This past week, I had the tremendous pleasure of seeing one of my brother’s articles on the cover of the December issue of Christianity Today as part of a feature on pain killers. While my brother has done a lot of writing for various places over the years, his article “How Realizing My Addiction Had Chosen Me Began My Road to Recovery” was particularly special to see. In it, he recounts his story of getting addicted to pain killers after a medical crisis, and details his road to recovery. Most of the story is behind a paywall, but if you want a full copy leave me a comment or use the get in touch form and I’ll send you the word document.
As someone who was intimately involved with all of the events relayed in the article, it’s pretty self evident why I enjoyed reading it as much as I did. On a less personal note though, I thought he did a great job bringing awareness to an often overlooked pathway to addiction: a legitimate medical crisis. My brother’s story didn’t start at a party or with anything even remotely approaching “a good time”. His story started in the ER, moved to the ICU, and had about 7 months of not being able to eat food by mouth at the end. His bout with necrotizing pancreatitis was brutal, and we were on edge for several months as his prognosis shifted between “terrible” and “pretty bad”.
Through all that, the doctors had made decisions to put him on some major pain killers. Months later, when things were supposed to be improving, he found that his stomach was still having trouble, and went back to his doctor for more treatment. It was only then that he was told he had become an addict. The drugs that had helped save his life were now his problem.
Obviously he tells the rest of the story (well, all of the story) better than I do, so you should really go read it if you’re interested. What I want to focus on is the prescribing part of this. When talking about things like “the opioid crisis”, it’s tempting for many people to label these drugs as “good” or “bad”, and I think that misses the point (note to my brother who will read this: you didn’t make this mistake. I’m just talking in general here. Don’t complain about me to mom. That whole “stop sciencing at your brother” lecture is getting old). There’s a lot that goes into the equation of whether or not a drug should be prescribed or even approved by the FDA, and a shift in one factor can change the whole equation. Also, quick note, I’m covering ideal situations here. I am not covering when someone just plain screws up, though that clearly does happen:
- Immediate risk (acute vs chronic condition) In the middle of a crisis when life is on the line, it shouldn’t surprise anyone that “keeping you alive” becomes the primary endpoint. This should be obvious, but after a few years of working in an ER, I realized it’s not always so clear to people. For example, you would not believe the number of people who are brought into the ER unconscious after a car accident or something and later come back and complain that their clothes were cut off. In retrospect it feels obvious to them that a few extra minutes could have been taken to preserve their clothing, but the doctors who saw them in that moment almost always feel differently. Losing even one life because you were attempting to preserve a pair of jeans is not something most medical people are willing to do. A similar thing happens with medications. If there is a concern your life is in danger, the math is heavily weighted in favor of throwing the most powerful stuff we have at the problem and figuring out the consequences later. This is what kicked off the situation my brother went through. At points in his illness they put his odds of making it through the night at 50-50. Thinking about long term consequences was a luxury he wasn’t always able to afford.
- Side effects vs effect of condition The old saying “the cure is worse than the disease” speaks to this one, and sometimes it’s unfortunately true. Side effects of a drug always have to be weighed against the severity of the condition they are treating. The more severe the condition, the more severe the allowable side effects. A medication that treats the common cold has to be safer than a medication that treats cancer. However, just because the side effects are less severe than the condition doesn’t mean they are okay or can’t be dangerous themselves (again, think chemotherapy for cancer), but for severe conditions trade offs are frequently made. My brother had the misfortune of having one of the most painful conditions we know of, and the pain would have literally overwhelmed his system if nothing had been done. Prescription drugs don’t appear out of nowhere, and always must be compared to what they are treating when deciding if they are “good” or “bad”.
- Physical vs psychological/emotional consequences One of the more interesting grey areas of prescription drug assessment is the trade off between physical consequences vs psychological and emotional consequences. For better or worse, physical consequences are almost always given a higher weight than psychological/emotional consequences. This is one of the reasons we don’t have male birth control. For women, pregnancy is a physical health risk, for men, it’s not. If hormonal birth control increases a woman’s chances of getting blood clots, that’s okay as long as it’s still less impactful than pregnancy. For men however, there’s no such physical consequence and therefore the safety standards are higher. The fact that many people might actually be willing to risk physical consequences to prevent the emotional/psychological/financial consequences isn’t given as much weight as you might think. The fact that my brother got a doctor who helped him manage both of these was fantastic. His physical crutch had become a mental and emotional crutch, and the beauty of his doctor was that he didn’t underestimate the power of that.
- Available alternatives Drugs are not prescribed in a vacuum, and it’s important to remember they are not the end-all, be-all of care. If other drugs (or lifestyle changes) are proven to work just as well with fewer side effects, those may be recommended. In the case of my brother, his doctor helped him realize that mild pain was actually better than the side effects of the drugs he was taking. For those with chronic back pain, yoga may be preferable. This of course is also one of the arguments for things like legalized marijuana, as it’s getting harder to argue that its side effects are worse than those of opioids.
- Timing (course of condition and life span) As you can see from 1-4 above, there are lots of balls in the air when it comes to prescribing various drugs. Most of these factors actually vary over time, so a decision that is right one day may not be right the next. This was the crux of my brother’s story. Prescribing him high doses of narcotics was unequivocally the right choice when he initially got sick. However as time went on the math changed and the choice became different. One of the keys to his recovery was having his doctor clearly explain that this was not a binary….the choice to take the drug was right for months, and then it became wrong. No one screwed up, but his condition got better and the balance changed. This also can come into play across the broader lifespan…treatments given to children are typically screened more carefully for long term side effects than those given to the elderly.
Those are the basic building blocks right there. As I said before, when one shifts, the math shifts. For my brother, I’m just glad the odds all worked in his favor.
Man, that title probably isn’t winning me any clickbait awards.
Anyway, I was catching up on my blog reading this past weekend, and I was intrigued by the Assistant Village Idiot’s “Conservation of Fear” post. In it, he mentions the idea that most of us probably have some sort of baseline disposition towards the world, and that circumstances aren’t always as important as we think they are. We frequently assume that as good things increase so does our mood, and as bad things increase our mood goes lower, but he asserts this may not always be the case. Of course this being the AVI, he immediately then walks back on that assertion and points out that some circumstances are really important, and that fixing those can make a big difference in mood. So basically as some good circumstances increase we could get a nice linear gain in happiness, but at a certain point the relationship probably cuts out.
This uneven effect issue is not actually all that uncommon in human behavior. While generally people want to find (or recite) nice linear relationships between things (i.e., x causes y), we often run into situations where things aren’t that simple. Sometimes x makes y go up…but then you get to a certain level of x and suddenly x is totally irrelevant to y. Sometimes above a certain level x makes y go down. You get the picture. Or maybe you don’t. Regardless, here are some examples!
- Income and Personal Happiness We all know the famous saying “money can’t buy happiness”. However, as anyone who has ever gone without money can tell you, that’s crap. Well, partial crap. A few years ago an investment group did some analysis and figured out that more money does make you happier, but only up to a certain household income. After that, it’s pretty much a wash. Overall for the US the cutoff was $75K. Basically an increase in salary from $30K to $40K will make you happier, but one from $110K to $120K doesn’t have the same effect. The linear relationship holds for low numbers, but not high ones. For the curious, here’s the state-by-state breakdown. If you think about it, this makes a lot of sense. If money is a struggle, it affects your happiness. Once you’ve stopped struggling, it stops having the same effect. So basically it’s more accurate to say that money can’t buy happiness, but a lack of money sure can stress you out.
- GDP and Subjective Well Being Related to #1, but slightly different: it’s not just your personal income that helps your well being, your country’s GDP can play a role too. Again though, only to a point. Check out this graph from Our World in Data: So countries that struggle to develop do take their toll on their citizens, but at some point development stops yielding returns in well being. It would be interesting to see if the effect of personal wealth varied with country GDP, but alas I can’t find that data.
- Sexual frequency and housework divisions If my ranting about linear relationships that aren’t entirely linear sounds familiar, it’s because I’ve brought this up before in my (oft Googled, less often read) Sex, Models and Housework post and the follow up. My first post was about a study that caused a stir when it claimed that men who did more housework had less sex. The follow up covered a study that rejected a linear model, and instead grouped respondents into “traditional”, “egalitarian” and “counter-cultural” couples. Despite the claims of the original study, they found that the relationships really were linear only within the groups, i.e., there were 3 different linear relationships. Egalitarian couples had the most sex and satisfaction, traditional couples had slightly less, and counter-cultural couples did the worst. The model worked much better when the three groups were treated separately than when they were treated as one continuous group.
- Age at first marriage The conventional wisdom states that waiting a bit to get married is good for you. It turns out that’s true, up to a point. For each year you wait to get married past the age of 20, your chance of divorce goes down 11%. However, once you get to 32, your chance of divorce actually starts going back up. Basically the divorce risk curve is a parabola:
- Expenses and income I found a couple of examples of this in this technical paper for statisticians on how to handle partially linear logistic regressions. Basically, the consumption of many household items goes up with household income until a certain point, after which it stays pretty steady. Things like gas, electricity, and many consumer goods fall in this category. Interestingly, overall income and expenses actually increase sort of linearly with age from 20-44, then decrease sort of linearly with age from 45-75+:
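To make the age-at-marriage numbers above concrete, here's a tiny sketch that compounds the reported 11%-per-year decline in divorce risk up to age 32 and then applies a rise afterwards. Note the 5%-per-year rise after 32 is my own illustrative assumption to produce the parabola shape; only the 11% figure comes from the post.

```python
# Toy sketch of the "parabola"-shaped divorce risk described above:
# relative risk falls 11% per year of waiting past age 20, up to 32,
# then rises afterwards. The 5%/year rise after 32 is an assumed
# number for illustration only.
def relative_divorce_risk(age_at_marriage, baseline=1.0):
    risk = baseline
    for _ in range(21, min(age_at_marriage, 32) + 1):
        risk *= 0.89          # 11% drop per year waited, through 32
    for _ in range(33, age_at_marriage + 1):
        risk *= 1.05          # assumed rise per year after 32
    return risk

for age in (20, 25, 30, 32, 35, 40):
    print(f"married at {age}: relative risk {relative_divorce_risk(age):.2f}")
```

Compounding like this bottoms out at 32 and climbs back up afterwards, which is exactly the non-linear shape that a single straight-line summary ("waiting is good for you") would miss.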
This is a good thing to watch out for in general, as it makes summarizing the trends a little trickier. If you leave out a key modifier or the limits, you could end up giving someone the wrong impression or encouraging them to extrapolate beyond the scope of the model, and that will make the statistician in your life very sad. Know your limits, people, and the limits of your data set!
It’s election day here in the US, so I thought I’d do a roundup of my favorite posts I’ve done in the past year about the political process and its various statistical pitfalls. Regular readers will recognize most of these, but I figured they were worth a repost before they stop being relevant for another few years. As always, these posts are meta/about-the-process type posts, and no candidates or positions are endorsed. The rest of you seem to have that covered quite nicely.
- How Do They Call Elections So Early? My most popular post so far this year, in which I walk through the statistical methods used to call elections before all the votes are counted. No idea if this will come into play today, but if it does you’ll be TOTALLY prepared to explain this at your next cocktail party or whatever it is the kids do these days.
- 5 Studies About Politics and Bias to Get You Through Election Season In this post I do a roundup of my favorite studies on, well, politics and bias. Helpful if you want to figure out what your opponents are doing wrong, but even MORE helpful if you use it to re-examine some of your own beliefs.
- Two gendered voting studies. People love to study the secret forces driving individual genders to vote certain ways, but are those studies valid? I examined one study that attempted to link women’s voting patterns and menstrual cycles here, and one that attempted to link threats to men’s masculinity and their voting patterns here. Spoiler alert: I was underwhelmed by both.
- Two new logical fallacies (that I just made up) Not specific to politics, but aimed in that direction. I invented the Tim Tebow Fallacy for those situations when someone defends a majority opinion as though they were an oppressed minority. The Forrest Gump Fallacy I made up for those times when someone believes that their own personal life is actually reflective of a greater trend in America…when it isn’t.
- My grandfather making fun of statistical illiteracy of political pundits 40 years ago. The original stats blogger in my family also got irritated by this stuff. Who would have thought.
As a final thought, if you’re in the US, go vote! No, it won’t make a statistically significant difference at the national level, but I think there’s a benefit to being part of the process.
News flash! People lie. Some more than others. Now there are all sorts of reasons why we get upset when people don’t tell the truth, but I’m not here to talk about those today. No, today I’m here to give a few interesting examples of where self-reporting bias can really kinda screw up research and how we perceive the world.
Now, self reporting bias can happen for all sorts of reasons, and not all of them are terrible. Some bias happens because people want to make themselves look better, some happens because people really think they do things differently than they do, some happens because people just don’t remember things well and try to fill in gaps. Regardless of the reason, here’s 5 places bias may pop up:
- Nutrition/Food Intake Self reported nutrition data may be the worst example of research skewed by self reporting. For most nutrition/intake surveys, about 67% of respondents give implausibly low answers…an effect that actually shows up cross culturally. Interestingly, there are some methods known to improve this (doubly labeled water, for example), but they tend to be more expensive and thus are used less often. Unfortunately this effect isn’t random, so it’s hard to know exactly how bad the effect is across the board.
- Height While it’s common knowledge that people lie about their weight, lying about height is a less recognized but still interesting problem. It’s pervasive in online dating for both men AND women, both of whom exaggerate by about 2 inches. On medical/research surveys we all get slightly more honest, with men overestimating their height by about 0.5 inches, and women by 0.33 inches.
- Work hours Know anyone who says they work a 70 hour week? Do they do this regularly? Yeah, they’re probably not remembering that correctly. Edit: My snark got ahead of me here, and I got called out in the comments, so I’m taking it back. I also added some text in bold to clarify what the problem is. When people are asked how much they work per week, they tend to give much higher answers than when they are asked to list out the hours they worked during the week. The more they say they work, the more likely they are to have inflated the number. People who say they work 75+ hours actually work an average of 50 hours/week, and those who say they work 40 hours/week tend to work about 37. Added: While some professions really do require crazy hours (especially early in a career…looking at you, medical residencies, and first year teachers are notorious for never going home), very few keep this up forever. Additionally, what people work most weeks almost never equals what they work when averaged over the course of a year. That 40 hour a week office worker almost certainly gets some vacation time, and even 2 weeks of vacation and a few paid holidays take that yearly average down to about 37 hours per week…and that’s before you add in sick time. Some of this probably gets confusing because of business travel or other “grey areas” like professional development time, but it also speaks to our tendency to remember our worst weeks better than our good ones.
- Childhood memories It is not uncommon in psychological/developmental research that adults will be asked various questions about the state of their life currently while also being queried about their upbringing. This typically leads to conclusions about parenting type x leading to outcome y in children. I was recently reading a paper about various discipline methods and long term outcomes in kids, when I ran across a possible confounder I hadn’t considered: sex differences in the recollection of childhood memories. Apparently overall men are not as good at identifying family dynamics from their childhoods, and the authors wondered if that led to some false findings. They didn’t have direct evidence, but it’s an interesting thing to keep in mind.
- Base 10 madness You wouldn’t think our fingers would cause a reporting bias, but they probably do. Our obsession with doing things in multiples of 5 or 10 probably comes from our use of our hands for counting. When it comes to surveys and self reports, this leads to a phenomenon called “heaping”, where people tend to round their reports to multiples of 5 and 10. There’s some interesting math you can use to try to correct for this, but given that rounding tends to be non-constant (i.e., we round smaller numbers to 5 and larger numbers to 10) this can actually affect some research results.
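Heaping is easy to see in a toy simulation. Here, hypothetical true weekly work hours get "self-reported" by rounding to the nearest 5, or the nearest 10 for bigger numbers; the exact cutoff and distribution are my assumptions, chosen just to mimic the non-constant rounding described above:

```python
import random

# Simulate "heaping": smooth true values become lumpy self-reports
# because people round to convenient multiples of 5 and 10.
random.seed(42)
true_hours = [random.gauss(38, 6) for _ in range(10_000)]

def self_report(x):
    # Assumed rule: round to nearest 5 normally, nearest 10 for
    # larger values (mimicking coarser rounding of bigger numbers).
    base = 5 if x < 40 else 10
    return base * round(x / base)

reported = [self_report(x) for x in true_hours]

true_mean = sum(true_hours) / len(true_hours)
reported_mean = sum(reported) / len(reported)
print(f"true mean: {true_mean:.2f}, reported mean: {reported_mean:.2f}")
print(f"distinct true values: {len(set(true_hours))}, "
      f"distinct reported values: {len(set(reported))}")
```

Every reported value lands on a multiple of 5, so the reported distribution develops spikes at 35/40/45 even though the underlying hours are smooth, and because the rounding rule isn't uniform, summary statistics can drift too.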
Base 10 aside: one of the more interesting math/pop-culture videos I’ve seen is this one, where they explore why the Simpsons (who have 4 fingers on each hand) still use base 10 counting (7:45 mark):