5 Things About Crime Statistics

Commenter Bluecat57 passed along an article a few weeks ago about the nightclub shooting in Thousand Oaks, California. Prior to the tragedy, Thousand Oaks had been rated the third safest city in the US, and it quickly lost that designation after the shooting. He raised the issue of crime statistics and how a city could be deemed “safe”. This seemed like a decent question to look into, so I thought I’d round up a few interesting debates that crime statistics have turned up over the years.

Ready? Here we go!

  1. There’s no national definition for some crimes. In this age of hyperconnectedness, we all tend to assume that having cities report crime is a lot like reporting your taxes or something. Unfortunately, that’s not the case. Participation in most national crime databases is voluntary, and every jurisdiction has its own way of counting things. For example, 538 reported that in New York City, people who were hit by glass from a gunshot weren’t counted as victims of a shooting, but other jurisdictions do count them as such. Thus, any city that reports those as shootings will always look like it has a higher rate of crime than one that doesn’t.
  2. Data is self-reported by cities and states. Self-reporting of anything tends to influence the rates, and crime is no exception. One doesn’t have to look hard to find stories of cities changing how they classify crimes in response to public pressure. Even when everyone’s trying to be honest, though, self-reports are prone to typos and other issues. Earlier this year, for example, NPR found that national school shooting numbers appear to have been skewed upward by reporting mistakes made by districts in Cleveland and California. One way of catching these issues is to ask people themselves how often they’ve been victimized and compare that to the official reported statistics, but this can lead to other problems…
  3. Crimes are self-reported by people. For all crimes other than murder (most of the time), police can’t do much if they don’t know about a crime. Some crimes are underreported because people are embarrassed (falling for scams comes to mind), but some are underreported for other reasons. In some places, people don’t believe the police will help, fear they will make things worse, or doubt they will respond quickly, so they don’t report. Unauthorized immigrants frequently will not call the police for crimes committed against them, and some studies show that when their legal status changes, their crime reporting rate triples. Additionally, crimes are typically not reported when the others involved were also committing crimes. Gang members will probably not report assault, and sex workers likely won’t report being robbed.
  4. Denominators fluctuate. One of the more interesting ideas Bluecat57 brought up when he passed the article on to me is that some cities have constantly changing populations. For example, cities with a lot of tourists will get all of the crimes committed against the tourists, but the tourists will not be counted in the denominator. In Boston, the city population fluctuates by 250,000 people when the colleges are in session, but I’m not clear what population is used for crime reporting. Interestingly, this is the same reason we see states like Hawaii and Nevada reporting the highest marriage rates in the country…they get tourist weddings without keeping the tourists.
  5. Unusual events can throw everything off. Getting back to the original article that sparked this whole discussion, it’s hard to calculate crime rates when there’s one big event in the data. For example, people have struggled with whether or not to include 9/11 in NYC’s homicide data. Some have, some haven’t. It depends on what your goal is, really. A shooting like the one in Thousand Oaks would put the city well ahead of the national average for this year (around 5 murders per 100,000 people) at roughly 9 per 100,000, immediately on par with cities like Tampa, FL (quick math on that below). A big event in a small population can do that.
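
To make that concrete, here’s the back-of-envelope version of that math. The population and victim counts below are my rough approximations, not official figures:

```python
# Rough sketch of a homicide-rate calculation; the population and victim
# counts are approximations for illustration, not official figures.
def rate_per_100k(events: int, population: int) -> float:
    return events / population * 100_000

thousand_oaks = rate_per_100k(12, 128_000)  # one mass shooting in a small-ish city
national_avg = 5.0                          # approximate US murder rate cited above

print(f"Thousand Oaks, this year: ~{thousand_oaks:.1f} per 100,000")
print(f"National average:         ~{national_avg:.1f} per 100,000")
```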

So overall, some interesting things to keep in mind when you read these rankings. As a report in Vox a few years ago said, “In order for statistics to be reliable, they need to be collected for the purpose of reliability. In the meantime, the best that the public can do is to acknowledge the problems with the data we have, but use it as a reference anyway.” In other words, caveat emptor, caveats galore.

 

Where is the Center?

Last week there was an interesting controversy about a New York Times op-ed (this one, in case you’re curious) that sparked an email discussion between some friends and me. I had been reading up on the concerns about the op-ed, which were mostly coming from left-leaning folks (summary of the controversy here), and was interested to note that in many of the discussions the political orientation of the New York Times was considered germane, as the NYTs was not considered a “friendly” publication to the left. I read multiple times that the NYT was obviously a “center right” publication.

This assertion surprised me, as I had always heard the NYTs referred to as a left-leaning publication. As I’ve previously mentioned, I went to Baptist school through 12th grade, so this was actually a thing that got discussed pretty frequently.

As tends to happen when I hear two sides who disagree on something, I immediately wondered what definitions everyone was using. As I mentioned recently while discussing the political tribes study, measuring where the center is when it comes to compromise is hard. How do we measure where the center is when it comes to journalism? Or in general?

It strikes me that when we use the word “center”, people can mean a few things:

  1. Center of public opinion of the country. This one makes sense when we’re talking about elections, though it can be deceptive. I heard someone recently mention that most people were actually liberal, because most people support expanding social services. Well, yeah. The problem is that most people really tend to hate it when their tax bill goes up, so they also tend to vote against that. What “most people want” can shift and wiggle depending on specifics.
  2. Center of public opinion of those they are aware of. I’ll come back to this one in a second, but who we see every day matters. A person growing up in Massachusetts will almost certainly end up with a slightly different idea of “center” than a person growing up in Texas. Likewise, a person spending a lot of time on the internet may believe the center is something different from what it is in real life.
  3. Center of public opinion of a group of countries/the world. This one comes up a lot when people talk about things like healthcare, or anything that starts with “out of the G8 countries, the US is the only one without _____”. Likewise, a friend of mine who is Methodist recently sent out a video where their pastor pointed out that what was being proposed as a “moderate” stance on LGBT issues would actually be a radical stance for the Methodist churches located in Africa. The center changes quickly if you move outside the US.
  4. Moderate political beliefs. While there doesn’t appear to be a firm definition of moderate vs centrist, I did really like this Quora discussion about the difference. There’s an interesting assertion that non-extreme liberals like to use the word “centrist” whereas non-extreme conservatives like to use the word “moderate”. The political tribes study certainly took this stance, and called the right-leaning center the “moderates”. Essentially though, “moderate” seems to imply a slower-paced version of the beliefs you align with. So someone who was “moderate” on taxes might believe they should be lowered, but would advocate gradual change. Someone who was “centrist” might believe they should stay where they are.
  5. People who express their beliefs politely and are willing to listen to others, or who otherwise strive for harmony. I’ll be coming back to this one as well, but there is an idea that “centrists” may just be people who don’t really like to openly argue with others. They may be people who put harmony ahead of political stances. They may be centrist by disposition, not by belief. Interestingly, the political tribes study I just mentioned put those who were “politically disengaged” in the exact center, flanked by those they called “passive liberals” and those they called “moderates”.
  6. Someone who doesn’t agree with you on a key issue, but agrees on others. This one would be particularly key if you had one issue you felt strongly about. Major political parties tend to have a platform, but many people are more single-issue. If someone disagreed with them on that one issue, they may end up not thinking of that person as on “their side”, even if they were by our traditional definitions.

With all those options, some groups that try to assess political bias have taken a multifaceted approach to ranking media outlets. For example, the website AllSides.com uses reader surveys that include a measure of the reader’s own bias in the calculation. When you sign up, you take a survey to assess your own bias, then they weight your ratings of articles/outlets with that in mind. They also tell you how disputed the rankings are, and for large newspapers they rank the news and the editorial page separately. AllSides rates The New York Times news section as “leans left” and its editorial page as “left”, FWIW.
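
I don’t know AllSides’ actual formula, but to give a flavor of why factoring in rater bias matters, here’s one plausible (and entirely hypothetical) approach: re-weight ratings so each self-reported bias group counts equally, rather than letting whichever group rates most often dominate.

```python
# A hypothetical bias-aware crowd rating, NOT AllSides' actual method:
# give each self-reported bias group equal weight in the outlet's score.
from collections import defaultdict

# Made-up ratings of one outlet on a -2 (left) .. +2 (right) scale,
# keyed by each rater's own bias group from the sign-up survey.
ratings = [
    ("left", -1.5), ("left", -1.0), ("left", -1.0), ("left", -0.5),
    ("center", -0.5), ("center", 0.0),
    ("right", -1.0), ("right", -0.5),
]

by_group = defaultdict(list)
for group, score in ratings:
    by_group[group].append(score)

naive = sum(score for _, score in ratings) / len(ratings)
balanced = sum(sum(v) / len(v) for v in by_group.values()) / len(by_group)

print(f"Naive average:    {naive:+.2f}")     # dominated by the most active group
print(f"Balanced average: {balanced:+.2f}")  # each bias group weighted equally
```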

So why the perception that the NYTs is center right?

Well, I thought about this and I’m guessing it’s a bit of #1 and #6 put together.

The first time I saw the NYTs referred to as “right leaning” was when they started profiling Trump voters after the election. Some people thought that was giving more air time to half the country than to the other half, as there were no equivalent profiles of non-Trump voters. Of course, the response is that the NYTs newsroom is almost certainly made up of non-Trump voters, along with much of their readership, and that their typical articles reflect this, but there was still some thought that the coverage should have been made more equal. This seems to have gotten into the conventional wisdom in some circles, and is now getting repeated.

However, on a deeper level, I wonder if #2 and #5 are coming into play, particularly for younger people. It occurred to me that most of us older than, I don’t know, 25 or so, probably grew up with a different exposure to media than younger people have. When I was a kid, my parents subscribed to the newspaper (the Union Leader) and maybe watched the evening news. Now, both my husband and I read our news online. We’ve never gotten a newspaper, and we only watch the news when something big happens. This means my son has almost never seen how we get the news. He has much less of a baseline for news than I would have had at the same age. If I’m not careful, his first exposure to reading the news will be random stories that catch his attention on Facebook/Twitter/whatever social media dominates when he starts getting into it.

It occurred to me that if the bulk of your initial media exposure is viral headlines and journalism that openly advocates for certain positions, you’re going to have a very different take on what “center” is. If you’re used to media outlets marketing themselves directly to your demographic, then anything that doesn’t do that may not feel like “your side”. The further we get into the internet/market-segmentation age, the more people will have grown up without exposure to anything different.

I have no idea what the outcome of that would be, or if it will be a good thing or a bad thing. I do think it might have an impact on where we consider “the center” to be, as it may more and more come to mean “those not given to conflict” as opposed to “those attempting to represent both sides”. Not sure if that change is for the better or for the worse, but I do suspect there will be a shift.

I will note that we may already be seeing a shift in journalism due to Twitter. Someone noted recently that while about 20% of Americans have a Twitter account (a figure that includes those under 18), almost 100% of journalists do. This means journalists are most likely to hear from those who want to go on Twitter and mix it up with journalists, which almost certainly leaves the “passive liberals” and the “politically disengaged” off their radar. The survey suggested that’s 41% of the population, so that could lead to a serious skewing of perception. If they’re not hearing from “moderates” often either, then they’re missing almost 60% of the US. One guesses they are hearing from the extremes (8% and 6%) more often than anyone else.

So those are my thoughts. BTW, if any of my readers happen to hold the opinion that the NYTs is center right, I would actually be rather interested in hearing why you drew that conclusion. I’ll admit I do not tend to read them, so I may have missed something. For everyone else, I’m equally curious what you call “center” and how you get there, or just in general what you think of the All Sides rankings. They seem to have gotten my two local papers right (Boston Herald – leans right, Boston Globe – leans left), so as far as I can tell they’re solid.

Good luck out there.

 

What I’m Reading: November 2018

Happy post-Thanksgiving everyone! Hope yours was lovely. I went mostly computer free so if you’ve emailed me or sent me something recently, I promise it’s not off my radar. I didn’t get much reading in, but I did get sent two interesting pop culture graphics that are worth a gander.

First up, a visual representation of how accurate “based on a true story” movies are. It shows not only how often they’re inaccurate, but where those inaccuracies take place and how inaccurate they are. For example, here’s Selma (the highest rated) vs Imitation Game (one of the lowest rated). Bright red means false, light red means false-ish, grey is unknown, light blue is true-ish, dark blue is true:

Check out the actual site, as you can click on each bar to see exactly what the scene was that got the rating. I was interested to see what they called “unknown”, and it appears that those are mostly things like conversations between two characters who definitely spoke, and almost certainly about that topic, but no specific record of or reference to the conversation exists.

Next up, from John: Are pop lyrics getting more repetitive? Using the same algorithm used to compress digital photos into smaller file sizes, this guy tries to measure how repetitive the lyrics in the Billboard Top 100 songs have been over the last few decades. Not only is this an interesting project, but he spells out his methodology, assumptions, outliers, and step-by-step process REALLY nicely. He shows examples of songs ranked highly repetitive, why he chose to use a log scale for his axis, and how his algorithm would evaluate a regular paragraph of text. Seriously, if scientific papers in general had methodology sections this robust, we wouldn’t have a replication crisis.
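
If you want a feel for the method, here’s a minimal sketch using Python’s built-in zlib (DEFLATE, which is built on the same Lempel-Ziv family of compression he describes). This is not his exact implementation, just an illustration of the basic idea that more repetitive text compresses more:

```python
import zlib

def size_reduction(text: str) -> float:
    """Percent reduction in size after compression; a rough proxy for repetitiveness."""
    raw = text.encode("utf-8")
    return 100 * (1 - len(zlib.compress(raw, 9)) / len(raw))

# A highly repetitive "chorus" vs. an ordinary paragraph of prose
chorus = "Around the world, around the world. " * 72
prose = ("He spells out his methodology, his assumptions, the outliers, and his "
         "step-by-step process, and shows how the algorithm would evaluate a "
         "regular paragraph of text compared to a pop chorus.")

print(f"Repetitive chorus: {size_reduction(chorus):.0f}% size reduction")
print(f"Ordinary prose:    {size_reduction(prose):.0f}% size reduction")
```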

So what was the most repetitive song in the 15,000 he looked at? Around the World by Daft Punk. Considering that song is just the phrase “Around the World” repeated 100+ times, this makes sense. He breaks down the most repetitive songs by decade, which I thought might be of interest to folks here. Remember, these are only songs that made it to the Billboard Hot 100:

1960s top 3:

  • Chain of Fools (Part 1) – Jimmy Smith, 1968 (92% size reduction)
  • Jingo – Santana, 1969 (85% size reduction)
  • Any Way You Want It – The Dave Clark Five, 1964 (83% reduction)

(Note to my Dad – You Really Got Me by the Kinks was #5 for the decade at 81%)

1970s top 3:

  • Let’s All Chant – The Michael Zager Band, 1978 (88% size reduction)
  • Keep it Comin’ Love – KC and the Sunshine Band, 1977 (87%)
  • Who’d She Coo? – Ohio Players, 1976 (86%)

1980s top 3:

  • Pump Up the Jam – Technotronic, 1989 (85%)
  • Funkytown – Lipps Inc., 1980 (85%)
  • Got My Mind Set On You – George Harrison, 1987 (80%)

1990s top 3:

  • Around the World – Daft Punk, 1997 (98%)
  • The Rockafeller Skank – Fatboy Slim, 1998 (95%)
  • Send Me On My Way – Rusted Root, 1995 (85%)

2000s top 3:

  • Better Off Alone – Alice Deejay, 2000 (84%)
  • Thong Song – Sisqo, 2000 (81%)
  • Dance With Me – 112, 2001 (81%)

2010s top 3:

  • Get Low – Dillon Francis & DJ Snake, 2015 (90%)
  • Barbra Streisand – Duck Sauce, 2011 (89%)
  • Feliz Navidad – Jose Feliciano, 2017 (89%)

Overall, songs did get more repetitive over time, both across the whole chart and among the top 10 from each year. In 1960 the average song on the Top 100 was 46% compressible, while in 2015 it was 56% compressible. Interestingly, the top 10 songs are always more repetitive than the rest by about 2-6% or so.

There’s also a lot of interesting breakdowns by artist. I learned that the Guess Who was particularly repetitive for the 70s, and that country is much less repetitive than pop music. Apparently this even applies within artists, as Taylor Swift showed a sharp rise in repetitive lyrics after she switched from country to pop.

Anyway, go check it out, the graphics are great!

 

5 Things About the Many Analysts, One Data Set Paper

I’ve been a little slow on this, but I’ve been meaning to get around to the paper “Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results.” This paper was published back in August, but I think it’s an important one for anyone looking to understand why science can often be so difficult.

The premise of this paper was simple, but elegant: give 29 teams the same data set and the same question to answer, then see how everyone does their analysis and whether all of those analyses yield the same results. In this case, the question was “do soccer referees give red cards to dark-skinned players more than light-skinned players?” The purpose of the paper was to highlight how seemingly minor choices in data analysis can yield different results, and all participants had volunteered for this study with full knowledge of what the purpose was. So what did they find? Let’s take a look!

    1. Very few teams picked the same analysis methods. Every team in this study was able to pick whatever method they thought best fit the question they were trying to answer, and boy did the choices vary. First, the choice of analysis method itself varied. Next, the choice of covariates varied wildly: the data set contained 14 covariates, and the 29 teams came up with 21 different combinations to look at.
    2. Choices had consequences. As you can imagine, this variability produced some interesting consequences. Overall, 20 of the 29 teams found a significant effect, but 9 didn’t. The effect sizes they found also varied wildly, with odds ratios running from 0.89 to 2.93. While that shows a definite trend in favor of the hypothesis, it’s way less reliable than the p<.05 model would suggest (a toy sketch of how a single covariate choice can shift an odds ratio follows this list).
    3. Analytic choices didn’t necessarily predict who got a significant result. Now, because all of these teams signed up knowing what the point of the study was, the next step in this study was pretty interesting. All the teams’ methods (but not their results) were presented to all the other teams, who then rated them. The highest rated analyses gave a median odds ratio of 1.31, and the lower rated analyses gave a median odds ratio of…1.28. The presence of experts on the team didn’t change much either. Teams with previous experience teaching or publishing on statistical methods generated odds ratios with a median of 1.39, and the ones without such members had a median OR of 1.30. They noted that those with statistical expertise seemed to pick more similar methods, but that didn’t necessarily translate into significant results.
    4. Researchers’ beliefs didn’t influence outcomes. Now, of course, the researchers involved had self-selected into a study where they knew other teams were doing the same analysis they were, but it’s interesting to note that those who said up front they believed the hypothesis was true were not more likely to get significant results than those who were more neutral. Researchers did change their beliefs over the course of the study, however, as this chart showed. While many of the teams updated their beliefs, it’s good to note that the most likely update was “this is true, but we don’t know why”, followed by “this is true, but may be caused by something we didn’t capture in this data set (like player behavior)”.
    5. The key differences in analysis weren’t things most people would pick up on. At one point in the study, the teams were allowed to debate back and forth and look at each other’s analyses. One researcher noted that the teams that had included league and club as covariates were the ones who got non-significant results. As the paper states, “A debate emerged regarding whether the inclusion of these covariates was quantitatively defensible given that the data on league and club were available for the time of data collection only and these variables likely changed over the course of many players’ careers”. This is a fascinating debate, and one that would likely not have happened had the data just been analyzed by one team. This choice was buried deep in the methods section, and I doubt under normal circumstances anyone would have thought twice about it.
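
To illustrate point #2 (and the league/club debate in #5), here’s a toy sketch of how including or excluding a single covariate can shift an estimated odds ratio. The data below are simulated and the variable names are placeholders; this is not the actual red-card data set or any team’s real model:

```python
# Toy illustration (simulated data, NOT the real red-card data set) of how
# adding or dropping one covariate can move the estimated odds ratio.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
league = rng.integers(0, 2, n)                    # hypothetical confounder
skin_tone = rng.binomial(1, 0.3 + 0.3 * league)   # correlated with league
log_odds = -3 + 0.2 * skin_tone + 0.8 * league    # modest true effect of skin tone
red_card = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))
df = pd.DataFrame({"red_card": red_card, "skin_tone": skin_tone, "league": league})

simple = smf.logit("red_card ~ skin_tone", data=df).fit(disp=False)
adjusted = smf.logit("red_card ~ skin_tone + league", data=df).fit(disp=False)

print("OR without league adjustment:", round(np.exp(simple.params["skin_tone"]), 2))
print("OR with league adjustment:   ", round(np.exp(adjusted.params["skin_tone"]), 2))
```

Same simulated data, one different modeling choice, two different odds ratios.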

That last point gets to why I’m so fascinated by this paper: it shows that lots of well-intentioned teams can get different results even if no one is trying to be deceptive. These teams had no motivation to fudge their results or skew anything, and in fact were incentivized in the opposite direction. They still got different results, however, for reasons so minute and debatable that it took multiple teams comparing notes to even surface them. This nicely illustrates Andrew Gelman’s Garden of Forking Paths: how small choices can lead to big changes in outcomes. With no standard way of analyzing data, tiny, boring-looking choices in analysis can actually be a big deal.

The authors of the paper propose that more group approaches may help mitigate some of these problems and give us all a better sense of how reliable results really are. After reading this, I’m inclined to agree. Collaborating up front also takes the adversarial part out, as you don’t just have people challenging each other’s research after the fact. Things to ponder.

Book Recommendation: Bad Blood

Well, my audit went well last week. The inspector called us “the most boring audit he’d ever had”, which quite frankly is what you want to hear from a regulator. Interest = violations = citations = sad BS King.

As someone who has now dealt with quite a few inspectors over the years, I am always interested to see how exactly they choose to go about surveying everything given the time constraints. This particular inspector had an interesting tactic: he ran down the list of regulations we should be following, and asked us verbally if we followed each one or not. Every tenth one or so, he would suddenly pivot and ask us to provide proof. He mentioned afterwards that he put a lot of weight on how quickly we were able to produce what he asked for. From what I can tell, his theory was that if you can produce proof for random questions easily and without hesitation, you probably prepared for everything fairly well. Not a bad theory. Luckily for me, our preparation strategy had been to read through every standard, then prepare a response for it. Thus, we were boring, and my sanity is restored.

I was thinking about all this as I sat down to relax this weekend and picked up the book “Bad Blood: Secrets and Lies in a Silicon Valley Startup” by John Carreyrou. This book covers the rise and fall of Theranos and its founder Elizabeth Holmes, a topic I’ve mentioned on this blog before. To say I couldn’t put it down is a near-literal statement: I started it at 5pm last night and finished it by noon today. The book converges on many of my interests: health, medicine, technology, data, and how very smart people can be deceived into believing something that isn’t true. It also doesn’t hurt that the company’s founder is a woman about my age who was once touted as the first self-made female billionaire, in a field I have actually worked in.

For those unfamiliar with Theranos, I’ll give the short version. Theranos was a company started in 2003 by then-19-year-old Stanford dropout Elizabeth Holmes. Her vision was to create a blood analyzer that could run regular lab tests on just a few drops of blood, so patients could use a finger stick (like with home glucose monitoring) rather than get their blood drawn the conventional way. Ten years in, the company was worth almost $10 billion, but there was an issue: their product didn’t really work the way they claimed, and the company was using extreme tactics to cover this up. Eventually the story was brought to the attention of a Wall Street Journal reporter (John Carreyrou, who wrote the book), and he managed to untangle the web. Despite the highlights all being pretty well publicized at the time, I found the details and timeline reconstruction to be a fascinating read.

What interested me most about the book was that my characterization in my blog post 2 years ago was a little bit wrong. I had snarked that Carreyrou was one of the first to question them, but as I read the book I discovered that actually a lot of people had questioned Theranos, even during its prime. It actually restored my faith in humanity to see how many people had attempted to raise concerns about what they saw. Many of these people were young and carrying student debt, or marketing people unfamiliar with science, or simply people with ethics who just got uncomfortable, and many of them only stopped pushing when they were on the receiving end of some downright frightening legal (and sometimes not so legal) intimidation tactics. Additionally, many people who were deceived really couldn’t be blamed. In one particularly bizarre anecdote, Carreyrou mentions that a fellow Wall Street Journal reporter had gone to a meeting with Theranos where they had promised to show him how the machine worked. It turns out the machine didn’t work, but they’d written a program to hide any error messages behind a progress screen, and when he left the room they swapped out his sample and ran it on a regular analyzer they had in another room. Not really his fault for not picking up on that. Holmes got her deal with Walgreens by performing a similar sleight of hand. Since the initial WSJ articles, Theranos has paid out millions to settle lawsuits claiming they intentionally deceived investors, and Holmes and Ramesh Balwani (her #2 and former boyfriend) are under indictment.

Throughout the book, Carreyrou returns to two related but slightly different central points:

  1. Holmes and her investors wanted to believe she was the next Steve Jobs or Bill Gates.
  2. Healthcare doesn’t work like other tech sector products. Claiming your technology works before it’s ready could kill someone.

It was interesting for me to reflect that if Holmes hadn’t entered the healthcare realm, she might have actually succeeded. While the biographies of people like Steve Jobs are actually littered with stories of broken promises, many of the people who flipped on Holmes stated that they were compelled to resign their jobs or talk to reporters because they feared the shoddy work was going to kill someone.

So if this was so obvious, how did Theranos get to $10 billion? And how did they end up with people like Henry Kissinger, George Shultz and James Mattis on the board? A few lessons I gleaned:

  1. Watch out for the narrative, ask for data. One of the few things everyone agrees on in this story is that Holmes was a compelling CEO. She could spin a strong narrative to anyone who asked, and was kind and easy to work with as long as you let her stick to the story. Throughout the story though, anyone who asked for proof of anything she said was met with responses ranging from frosty to belligerent. This is what initially reminded me of my inspection. We were able to provide proof just as readily as we were able to provide verbal confirmation, which is why our inspector ended up believing us.
  2. Look for real experts. After Carreyrou published his first article about the concerns with the company, Theranos issued quite a few strongly worded denials and legal threats to the Wall Street Journal. Luckily for him, several other media outlets jumped in post-publication and started asking questions. One of the reasons they were so quick to pounce is that a quick look at Theranos’s board and investors revealed that no one involved really knew anything about biotech. While names like Henry Kissinger are impressive, people quickly started noting that the board was mostly military men and diplomats. The lack of any medical leadership seemed out of place. Additionally, some investing groups that specialize in biotech (like Google Ventures) had passed on Theranos. This was enough to cause other news outlets to turn up the heat on Holmes, as the lack of real experts struck everyone as suspicious.
  3. Look at the history. In an interview he gave, Carreyrou pointed out that it wasn’t the initial investors in Theranos who screwed up, as early investors are often gambling on half-baked ideas. The people who failed their due diligence were those who invested a decade in. He notes that those people should have been pushing harder for financial statements and peer reviewed studies, and that didn’t happen. For Theranos not to have peer reviewed studies in their first year was understandable. To still be lacking them in their tenth year was a very bad sign.
  4. Apply the right standards to the right industry. Healthcare isn’t the same as a cell phone. There are laws, and regulating bodies that can and will shut you down. A 1% product failure rate can kill people. Don’t get so excited by the idea of “disruption” that you ignore reality.

Come to think of it, with a few tweaks these are all pretty good life lessons about how to avoid bad actors in your personal life as well. I really do recommend this book, if only as a counter-narrative to the whole “everyone said we couldn’t do it, but we proved the naysayers wrong!” thing. Sometimes naysayers are right.

Although maybe not forever. As an interesting end note: according to this article, Holmes is currently fundraising in Silicon Valley for another startup.

Voter Turnout vs Does My Vote Count

Welp, we have another election day coming up. I’ll admit I’ve been a little further removed from this election cycle than most people, for two reasons:

  1. We are undergoing a massive inspection at work tomorrow (gulp) and have been swamped preparing for it. Any thoughts or prayers for this welcome.
  2. I live in a state where most of the races are pretty lopsided.

For point #2, we have Democratic Senator Elizabeth Warren currently up by 22 points, and Republican Governor Charlie Baker currently up by almost 40 points. My rep for the House of Representatives is running unopposed. The most interesting race in our state was actually two Democrats with major streets/bridges named after their families duking it out, but that got settled during the primaries. I’ll vote anyway because I actually have strong feelings about some of our ballot questions, but most of our races are the very definition of “my vote doesn’t make a difference”.

However, I still think there are interesting reasons to vote even if your own personal vote counts minimally. In an age of increasing market segmentation and use of voter files, the demographics that consistently vote will always be catered to more by politicians. I mentioned this a while ago in my post about college-educated white women. As a group they are only 10% of the voting public, but they are one of the demographics most likely to actually vote, and thus they get more attention than others.

This shows up in some interesting ways. For example, according to Pew Research, in this election Gen Xers and younger will be the majority of eligible voters, yet will not make up the majority of actual voters:

There are race-based differences as well. Black voters and white voters vote at similar rates, but Hispanic and Asian voters vote less often. Additionally, those with more education and those who are richer tend to vote more often. While that last link mentions that it’s not clear that extra voters would change election results, I still think it’s likely that if some groups with low turnout turned into groups with high turnout, we would see some changes in messaging.

While this may be a mixed blessing for people who don’t tend to vote with their demographic, it does seem like getting on the electoral radar is probably a good thing.

So go vote Tuesday!

 

Death Comes for the Appliance

Our dryer died this week. Or rather, it died last weekend and we got a new one this week. When we realized it was dead (with a full load of wet clothes in it, naturally), the decision making process was pretty simple.

We’re only the third owners of our (early 1950s) house, and the previous owners spent most of the 5 years they had it trying to flip it for a quick buck. We’ve owned it for 6 years now, so any appliance that wasn’t new when we moved in was probably put in by them when they moved in. That made the dryer about 11 years old, and it was a cheap model. I was pretty sure a cheap dryer over a decade old (that had been slowly increasing in drying time for a year or so, unhelped by a thorough cleaning) would be more trouble to repair than it was worth, so we got a new one.

After making the assertion above, I got a little curious whether there was any research on the life span of various appliances. As long as I can remember I’ve been fairly fascinated by dead or malfunctioning appliances, which I blame on my Yankee heritage. I’ve lived with a lot of half-functioning appliances in my lifetime, so I’ve always been interested in which appliance sounds/malfunctions mean “this is an appliance that will last three more years if you just never use that setting and jerry-rig (yes that’s a phrase) a way to turn it off/on” and which sounds mean “this thing is about to burst into flames, get a new one”.

It turns out there actually is research on this, summarized here, with a full publication on the topic here:

So basically it looks like we were on schedule for a cheap dryer to go. Our washing machine was still working, but it was cheaper if we replaced them both at the same time.

This list suggests our dishwasher was weak, as it went at about 7 years (they refused to repair it for less than the cost of a replacement), but our microwave is remarkably strong (10 years and counting). We had to replace our refrigerator earlier than should have been necessary (that was probably the fault of a power surge), but our oven should have a few more years left.

Good to know.

Interestingly, when I mentioned this issue to my brother this weekend, he asked me if I realized what the longest-lasting appliance in our family history was. He stumped me until he told me the location… a cabin owned by our extended family. The refrigerator in it has been operational since my mother was a child, and I’m fairly sure it’s this model of Westinghouse built in the 1950s, making it rather close to 70 years old:

Wanna see the ad? Here you go!

It’s amusing that it’s advertised as “frost free”, as my strongest childhood memories of this refrigerator were having to unplug it at the end of the summer season and then put towels all around it until all the ice that had built up in it melted. We’d take chunks out to try to hurry the process along.

Interestingly, the woman in the ad up there was Betty Furness, who ended up with a rather fascinating career that included working for Lyndon Johnson. She was known for her consumer advocacy work, which may be why the products she advertised lasted so darn long, or at least longer than my dryer.