I’m a bit late to the party on this one, but a few weeks ago there was a bit of a kerfuffle around comments from a Congressman from Minnesota about maternal mortality in states like Texas and Missouri:
Now I had heard about the high maternal mortality rate in Texas, but it wasn’t until I read this National Review article about the controversial Tweet that I discovered that the numbers I’d heard reported may not be entirely accurate.
While it’s true that Texas reported a very high rate of maternal mortality a few years ago, the article points to an analysis done after the initial spike was seen. A group of Texas public health researchers went back and recounted the maternal deaths within the state, this time using a different counting method. Instead of relying on deaths that were coded as occurring during pregnancy or shortly afterward, they actually pulled the medical records and verified that the women had been pregnant. In half the cases, no records could be found to corroborate that the woman was pregnant at the time of death. This knocked the maternal mortality rate down from 38.4 per 100,000 to 14.6 per 100,000. Yikes.
The problem appeared to be the way the death certificate itself was set up. The “pregnant vs not-pregnant” status was selected via a dropdown menu. The researchers suspected that the 70 or so miscoded deaths were due to people accidentally clicking on the wrong option, and they suggested replacing the dropdown with radio buttons. To make sure this error wasn’t being made in both directions, they also went back and looked at fetal death certificates and other death certificates for women of childbearing age, to check that some weren’t incorrectly classified the other way. Unsurprisingly, it appears that when people wanted to classify a death as “occurring during pregnancy,” they tended not to make a mistake.
The researchers pointed out that such a dramatic change in rate suggested that every state should probably go back and recheck their numbers, or at least assess how easy it would be to miscode something. Sounds reasonable to me.
This whole situation reminded me of a class I attended a few years back that was sponsored by the hospital network I work for. Twice a year they invite teams to apply with an idea for an improvement project, and they give resources and sponsorship to about 13 groups during each session. During the first meeting, they told us our assignment was to go gather data about our problem, but they gave us an interesting warning. Apparently every session at least one group gathers data and discovers the burning problem that drove them to apply isn’t really a problem. This seems crazy, but it’s normally for reasons like what happened in Texas. In my class, it happened to a pediatrics group that was trying to investigate why one of their practices had such low vaccination rates. While the other 11 clinics were at >95%, this one struggled to stay above 85%. Awareness campaigns among their patients hadn’t helped.
When they went back and pulled the data, they discovered the problem. Two or three of their employees didn’t know that when a patient left the practice, you were supposed to click a button that would take them off the official “patients in this practice” list. Instead, they were just writing a comment that said “patient left the practice”. When they went back and corrected this, they found their vaccination rates were just fine.
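This one is a denominator error rather than a numerator error: departed patients flagged only in a free-text comment still counted as panel members. A minimal sketch with made-up records (the field names and comment text are hypothetical, not from the clinic’s actual system) shows how filtering on the comment recovers the true rate:

```python
# Hypothetical patient records: "active" is the official flag,
# "comment" is the free-text field some staff used instead.
patients = [
    {"active": True, "comment": "",                         "vaccinated": True},
    {"active": True, "comment": "",                         "vaccinated": True},
    {"active": True, "comment": "patient left the practice", "vaccinated": False},
    {"active": True, "comment": "",                         "vaccinated": True},
]

def vaccination_rate(records):
    return sum(p["vaccinated"] for p in records) / len(records)

# Naive rate: the departed patient still sits in the denominator.
naive = vaccination_rate(patients)

# Corrected rate: drop anyone flagged as gone, even if only in a comment.
truly_active = [p for p in patients
                if p["active"] and "left the practice" not in p["comment"]]
corrected = vaccination_rate(truly_active)

print(f"{naive:.0%} -> {corrected:.0%}")  # 75% -> 100%
```

The unvaccinated “patients” dragging the rate down weren’t patients at all, which is exactly what the pediatrics group found.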
I don’t know how widespread this is, but based on that classroom anecdote and general experience, I wouldn’t be surprised to find out 5-10% of public health data we see has some serious flaws. Unfortunately we probably only figure this out when it gets bad enough to pay attention to, like in the Texas case. Things to keep in mind.