Mistakes Were Made, Sometimes By Me

A few weeks ago, I put out a call asking for opinions on how a blogger should correct errors in their own work. I was specifically interested in errors that were a little less clear cut than normal: quoting a study that later turned out to be less convincing than it initially appeared (failed to replicate), studies whose authors had been accused of misconduct, or studies that had been retracted.

I got a lot of good responses, so thanks to everyone who voted/commented/emailed me directly. While I came to realize there is probably not a perfect solution, there were a few suggestions I think I am going to follow up on:

  1. Updating the individual posts (as I know about them). It seemed pretty unanimous that updating old posts was the right thing to do. Given that Google is a thing and that some of my most popular posts are from over a year ago, I am going to try to update old posts if I know there are concerns about them. My one limitation is that I don’t always keep a good index of which studies I’ve cited where, so this one isn’t perfect. I’ll be putting a link up in the sidebar to let people know I correct stuff.
  2. Creating a “corrected” tag to attach to all posts I have to update. This came out of jaed’s comment on my post and seemed like a great idea. This will make it easier to track which type of posts I end up needing to update.
  3. Creating an “error” page to give a summary of different errors, technical or philosophical, that I made in individual posts, along with why I made them and what the correction was. I want to be transparent about the types of errors that trip me up. Hopefully this will help me notice patterns I can improve upon. That page is up here, and I kicked it off with the two errors I caught last month. I’m also adding it to my sidebar.
  4. Starting a meta-blog update 2-4 times a year. Okay, this one isn’t strictly just because of errors, though I am going to use it to talk about them. It seemed reasonable to do a few posts a year mentioning errors or updates that may not warrant their own post. If the correction is major, it will get its own post, but this will be for the smaller stuff.

If you have any other thoughts or want to take a look at the initial error page (or have things you think I’ve left off), go ahead and meander over there.

Calling BS Read-Along Week 6: Data Visualization

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 5, click here.

Oh man oh man, we’re at the halfway point of the class! Can you believe it? Yup, it’s Week 6, and this week we’re going to talk about data visualization. Data visualization is an interesting topic because good data with no visualization can be pretty inaccessible, but a misleading visualization can render good data totally irrelevant. Quite the conundrum. [Update: a sentence that was originally here has been removed. See bottom of the post for the original sentence and the explanation] It’s easy to think of graphics as “decorations” for the main story, but as we saw last week with the “age at death graph”, sometimes those decorations get far more views than the story itself.

Much like last week, there’s a lot of ground to cover here, so I’ve put together a few highlights:

Edward Tufte The first reading is the (unfortunately not publicly available) Visual Display of Quantitative Information by the godfather of all data viz, Edward Tufte. Since I actually own this book, I went and took a look at the chapter, and was struck by how much of his criticism was really a complaint about the same sort of “unclarifiable unclarity” we discussed in Weeks 1 and 2. Bad charts can arise out of ignorance, of course, but frequently they exist for the same reason verbal or written bullshit does. Sometimes people don’t care how they’re presenting data as long as it makes their point, and sometimes they don’t care how confusing it is as long as they look impressive. Visual bullshit, if you will. Anything from Tufte is always worth a read, and this book is no exception.

Next up are the “Tools and Tricks” readings which are (thankfully) quite publicly available. These cover a lot of good ground themselves, so I suggest you read them.

Misleading axes The first reading goes through the infamous but still-surprisingly-commonly-used case of the chopped y-axis. Bergstrom and West put forth a very straightforward rule that I’d encourage the FCC to make standard in broadcasting: bar charts should have a y-axis that starts at zero, line charts don’t have to. Their reasoning is simple: bar charts are designed to show magnitude, line charts are designed to show variation, therefore they should have different requirements. A chart designed to show magnitude needs to show the whole picture, whereas one designed to show variation can just show variation. There’s probably a bit of room to quibble about this in certain circumstances, but most of the time I’d let this bar chart be your guide:

They give several examples of charts that screw this up, some published or endorsed by fairly official sources, just to show us that no one’s immune. While the y-axis gets most of the attention, it’s worth noting the x-axis should be double-checked too. After all, even the CDC has been known to screw that up. Also covered are the problems with multiple y-axes, which can give impressions about correlations that aren’t there or have been scaled for drama. Finally, they cover what happens when people invert axes and just confuse everybody.
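
If you want to see how big a difference the baseline makes, here’s a minimal matplotlib sketch (my own made-up numbers, not from any of the charts they discuss) that draws the same two values once with a truncated y-axis and once starting at zero:

```python
import matplotlib.pyplot as plt

# Two made-up values that differ by 10%
labels = ["Before", "After"]
values = [50, 55]

fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 4))

# Left: a truncated y-axis makes the second bar look several times taller
ax_trunc.bar(labels, values)
ax_trunc.set_ylim(48, 56)
ax_trunc.set_title("Truncated axis (misleading)")

# Right: a zero baseline shows the true 10% difference
ax_zero.bar(labels, values)
ax_zero.set_ylim(0, 56)
ax_zero.set_title("Axis starts at zero")

plt.tight_layout()
plt.show()
```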

Proportional Ink The next tool and trick reading comes with a focus on “proportional ink” and is similar to the “make sure your bar chart axis includes zero” rule the first reading covered. The proportional ink rule is taken from the Tufte book and it says: “The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented”. 

[Added for clarity: While Tufte’s rule here can refer to all sorts of design choices, the proportional ink rule hones in on just one aspect: the shaded area of the graph.] This rule is pretty handy because it gives some credence to the assertion made in the misleading axes case study: bar charts need to start at zero, line charts don’t. The idea is that since bar charts are filled in, not starting them at zero violates the proportional ink rule and is misleading visually. To show they are fair about this, the case study also asserts that if you fill in the space under a line graph you should be starting at zero. It’s all about the ink.
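
To see why filling in the space changes the rules, here’s a quick sketch (invented data) of the same series drawn as a plain line chart with a zoomed-in axis and as a filled area chart with a zero baseline:

```python
import matplotlib.pyplot as plt

years = list(range(2010, 2018))
values = [102, 104, 103, 106, 108, 107, 110, 109]  # invented data

fig, (ax_line, ax_fill) = plt.subplots(1, 2, figsize=(8, 4))

# A plain line chart is about variation, so zooming the axis in is fair game
ax_line.plot(years, values)
ax_line.set_ylim(100, 112)
ax_line.set_title("Line: zooming in is OK")

# Fill in the area under the line and the ink itself carries the message,
# so the proportional ink rule says the baseline has to be zero
ax_fill.fill_between(years, values)
ax_fill.set_ylim(0, 112)
ax_fill.set_title("Filled area: start at zero")

plt.tight_layout()
plt.show()
```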

Next, we dive into the land of bubble charts, and then things get really murky. One interesting problem they highlight is that in this case following the proportional ink rule can actually lead to some visual confusion, as people are pretty terrible at comparing the sizes of circles. Additionally, there are two different ways to scale circles: area and radius. Area is probably the fairer one, but there’s no governing body enforcing one way or the other (there’s a quick arithmetic check of the difference after the list below). Basically, if you see a graph using circles, make sure you read it carefully. This goes double for doughnut charts. New rule of thumb: if your average person can’t remember how to calculate the area of a shape, any graph made with said shape will probably be hard to interpret. Highly suspect shapes include:

  • Circles
  • Anything 3-D
  • Pie charts (yeah, circles with angles)
  • Anything that’s a little too clever
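
Here’s that radius-versus-area check, with hypothetical values: if one quantity is four times another and the designer scales the radius by four instead of the area, the bigger bubble ends up with sixteen times the ink:

```python
import math

small_value, big_value = 25, 100   # hypothetical quantities, 4x apart
small_radius = 1.0

# Scale by radius: the radius grows 4x, so the area grows 16x
big_radius = small_radius * (big_value / small_value)
ink_ratio_radius = (math.pi * big_radius**2) / (math.pi * small_radius**2)

# Scale by area: the area grows 4x, so the radius only grows 2x
big_radius_fair = small_radius * math.sqrt(big_value / small_value)
ink_ratio_area = (math.pi * big_radius_fair**2) / (math.pi * small_radius**2)

print(ink_ratio_radius)  # 16.0 -- visually overstates a 4x difference
print(ink_ratio_area)    # 4.0  -- ink proportional to the values
```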

On that last bullet about being a little too clever, they also cover some of the denser infographics that have started popping up in recent years, and how carefully you must read what they are actually saying in order to judge them accurately. While I generally applaud designers who take on large data sets and try to make them accessible, sometimes the results are harder to wade through than a table would have been. My dislike for infographics is pretty well documented, so I feel compelled to remind everyone of this one from Think Brilliant:

Lots of good stuff here, and every high school math class would be better off if it taught a little bit more of this right from the start. Getting good numbers is one thing, but if they’re presented in a deceptive or difficult-to-interpret way, people can still be left with the wrong impression.

Three things I would add:

  1. Track down the source if possible One of the weird side effects of social media is that pictures are much easier to share now, and very easy to detach from their originators. As we saw last week with the “age at death” graph, sometimes graphs are created to accompany nuanced discussions and then the graph gets separated from the text and all context is lost. One of the first posts I ever had go somewhat viral had a graph in it, and man did that thing travel. At some point people stopped linking to my original article and started reporting that the graph was from published research. Argh! It was something I threw together in 20 minutes one morning! It even had axis/scale problems that I pointed out in the post and asked for more feedback! I gave people the links to the raw data! I’ve been kind of touchy about this ever since….and I DEFINITELY watermark all my graphs now. Anyway, my personal irritation aside, this happens to others as well. In my birthday post last year I linked to a post by Matt Stiles who had put together what he thought was a fun visual (now updated) of the most common birthdays. It went viral and quite a few people misinterpreted it, so he had to put up multiple amendments. The point is it’s a good idea to find the original post for any graph you find, as frequently the authors do try to give context to their choices and may provide other helpful information.
  2. Beware misleading non-graph pictures too I talk about this more in this post, but it’s worth noting that pictures that are there just to “help the narrative” can skew perception as well. For example, one study showed that news stories that carry headlines like “MAN MURDERS NEIGHBOR” while showing a picture of the victim cause people to feel less sympathy for the victim than headlines that say “LOCAL MAN MURDERED”. It seems subconsciously people match the picture to the headline, even if the text is clear that the picture isn’t of the murderer. My favorite example (and the one that the high school students I talk to always love) is when the news broke that only 0.2% of Tennessee welfare applicants screened under a mandatory drug testing program tested positive for drug use. Quite a few news outlets published stories talking about how low the positive rate was, and most of them illustrated the story with a picture of a urine sample or blood vial. The problem? The 0.2% positive rate came from a written drug test. The courts in Tennessee had ruled that taking blood or urine would violate the civil rights of welfare applicants, and since lawmakers wouldn’t repeal the law, they had to test them somehow. More on that here. I will guarantee you NO ONE walked away from those articles realizing what kind of drug testing was actually being referenced.
  3. A daily dose of bad charts is good for you Okay, I have no evidence for that statement, I just like looking at bad charts. Junk Charts by Kaiser Fung and the WTF VIZ tumblr and Twitter feed are pretty great.

Okay, that’s all for Week 6! We’re headed into the home stretch now, hang in there, kids.

Week 7 is up! Read it here.

Update from 4/10/17 3:30am ET (yeah, way too early): This post originally contained the following sentence in the first paragraph: “Anyway it’s an important issue to keep in mind since there’s evidence that suggests that merely seeing a graph next to text can make people perceive a story as more convincing and data as more definitive, so this is not a small problem.” After I posted, it was pointed out to me that the study I linked to in that sentence is from a lab whose research/data practices have recently come in for some serious questioning. The study I mentioned doesn’t appear to be under fire at the moment, but the story is still developing and it seems like some extra skepticism for all of their results is warranted. I moved the explanation down here so as to not interrupt the flow of the post for those who just wanted a recap. The researcher in question (Brian Wansink) has issued a response here.

Calling BS Read-Along Week 3: The Natural Ecology of BS

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 2, click here.

Well hi there! It’s week 3 of the read-along, and this week we’re diving in to the natural ecology of bullshit. Sounds messy, but hopefully by the end you’ll have a better handle on where bullshit is likely to flourish.

So what exactly is the ecology of bullshit and why is it important? Well, I think it helps to think of bullshit as a two-step process. First, bullshit gets created. We set the stage for this in week one when we discussed the use of bullshit as a tool to make yourself sound more impressive or more passionate about something. However, the ecology of bullshit is really about the second step: sharing, spreading and enabling the bullshit. Like rumors in middle school, bullshit dies on the vine if nobody actually repeats it. There’s a consumer aspect to all of this, and that’s what we’re going to cover now. The readings this week cover three different-but-related conditions that allow for the growth of bullshit: pseudo-intellectual climates, pseudo-profound climates, and social media. Just like we talked about in week one, it is pretty easy to see when the unintelligent are propagating bullshit, but it is a little more uncomfortable to realize how often the more intelligent among us are responsible for their own breed of “upscale bullshit”.

And where do you start if you have to talk about upscale bullshit? By having a little talk about TED. The first reading is a Guardian article that gets very meta by featuring a TED talk about how damaging the TED talk model can be. Essentially the author argues that we should be very nervous when we start to judge the value of information by how much it entertains us, how much fun we have listening to it, or how smart we feel by the end of it. None of those things are bad in and of themselves, but they can potentially crowd out things like truth or usefulness. While making information more freely available and thinking about how to communicate it to a popular audience is an incredibly valuable skill, leaving people with the impression that un-entertaining science is less valuable or truthful is a slippery slope.1

Want a good example of the triumph of entertainment over good information? With almost 40 million views, Amy Cuddy’s Wonder Woman/power pose talk is the second most watched TED talk of all time. Unfortunately, the whole thing is largely based on a study that  has (so far) failed to replicate. The TED website makes no note of this [Update: After one of the original co-authors publicly stated they no longer supported the study in Oct 2016, TED added the following note to the summary of the talk “Note: Some of the findings presented in this talk have been referenced in an ongoing debate among social scientists about robustness and reproducibility. Read Amy Cuddy’s response under “Learn more” below.”], and even the New York Times and Time magazine fail to note this when it comes up. Now to be fair, Cuddy’s talk wasn’t bullshit when she gave it, and it may not even be bullshit now. She really did do a study (with 21 participants) that found that power posing worked. The replication attempt that failed to find an effect (with 100 participants) came a few years later, and by then it was too late, power posing had already entered the cultural imagination. The point is not that Cuddy herself should be undermined, but that we should be really worried about taking a nice presentation as the final word on a topic before anyone’s even seen if the results hold up.

The danger here of course is that people/things that are viewed as “smart” can have a much farther reach than less intellectual outlets. Very few people would repeat a study they saw in a tabloid, but if the New York Times quotes a study approvingly most people are going to assume it is true. When smart people get things wrong, the reach can be much larger. One of the more interesting examples of the “how a smart person gets things wrong” vs “how everyone else gets things wrong” phenomenon I’ve ever seen is from the 1987 documentary “A Private Universe”. In the opening scene Harvard graduates are interviewed at their commencement ceremony and asked a simple question quite relevant to anyone in Boston: why does it get colder in the winter? 21 out of 23 of them get it wrong (hint: it isn’t the earth’s orbit)….but they sound pretty convincing in their wrongness. The documentary then interviews 9th graders, who are clearly pretty nervous and stumble through their answers. About the same number get the question wrong as the Harvard grads, but they are so clearly unsure of themselves that you wouldn’t have walked away convinced. The Harvard grads weren’t more correct, just more convincing.

Continuing with the theme of “not correct, but sounds convincing”, our next reading is the delightfully named  “On the reception and detection of pseudo-profound bullshit” from Gordon Pennycook.  Pennycook takes over where Frankfurt’s “On Bullshit” left off and actually attempts to empirically study our tendency to fall for bullshit. His particular focus is what others have called “obscurantism” defined as “[when] the speaker… [sets] up a game of verbal smoke and mirrors to suggest depth and insight where none exists”…..or as commenter William Newman said in response to my last post “adding zero dollars to your intellectual bank”. Pennycook proposes two possible reasons we fall for this type of bullshit:

  1. We generally like to believe things rather than disbelieve them (incorrect acceptance)
  2. Purposefully vague statements make it hard for us to detect bullshit (incorrect failure to reject)

It’s a subtle difference, but any person familiar with statistics at all will immediately recognize this as a pretty classic hypothesis test. In real life, these are not mutually exclusive. The study itself took phrases from two websites I just found out existed and am now totally amused by (Wisdom of Chopra and the New Age Bullshit Generator), and asked college students to rank how profound the (buzzword-filled but utterly meaningless) sentences were2. Based on the scores, the researchers assigned a “bullshit receptivity scale” or BSR to each participant. They then went through a series of 4 studies that related bullshit receptivity to other various cognitive features. Unsurprisingly, they found that bullshit receptivity was correlated with belief in other potentially suspect beliefs (like paranormal activity), leading them to believe that some people have the classic “mind so open their brain falls out”. They also showed that those with good bullshit detection (i.e. those who could rank legitimate motivational quotes as profound while also ranking nonsense statements as nonsense) scored higher on analytical thinking skills. This may seem like a bit of a “well obviously” moment, but it does suggest that there’s a real basis to Sagan’s assertion that you can develop a mental toolbox to detect baloney. It also was a good attempt at separating out those who really could detect bullshit from those who simply managed to avoid it by saying nothing was profound. Like with the pseudo-intellectualism, the study authors hypothesized that some people are particularly driven to find meaning in everything, so they start finding it in places where it doesn’t exist.

Last but not least, we get to the mother of all bullshit spreaders: social media. While it is obvious social media didn’t create bullshit, it is undeniably an amazing bullshit delivery system. The last paper, “Rumor Cascades”, attempts to quantify this phenomenon by studying how rumors spread on Facebook. Despite the simple title, this paper is absolutely chock full of interesting information about how rumors get spread and shared on social media, and the role of debunking in slowing the spread of false information. To track this, they took rumors found on Snopes.com and used the Snopes links to track the spread of their associated rumors through Facebook. Along the way they pulled the number of times the rumor was shared, time stamps to see how quickly things were shared (answer: most sharing is done within 6 hours of a post going up), and whether responding to a false rumor by linking to a debunking made a difference (answer: yes, if the mistake was embarrassing and the debunking went up quickly). I found this graph particularly interesting, as it showed that a fast link to Snopes (they called it being “snoped”) was actually pretty effective in getting the post taken down:

In terms of getting people to delete their posts, the most successful debunking links were things like “those ‘photos of Trayvon Martin the media doesn’t want you to see’ are not actually of Trayvon Martin”. They also found that while more false rumors are shared, true rumors spread more widely. Not a definitive paper by any means but a fascinating initial look at the new landscape. Love it or hate it, social media is not going away any time soon, and the more we understand about how it is used to spread information, the better prepared we can be3.

Okay, so what am I taking away from this week?

  1. If bullshit falls in the forest and no one hears it, does it make a sound? In order to fully understand bullshit, you have to understand how it travels. Bullshit that no one repeats does minimal damage.
  2. Bullshit can grow in different but frequently overlapping ecosystems Infotainment, the pseudo-profound, and close social networks all can spread bullshit quickly.
  3. Analytical thinking skills and debunking do make a difference The effect is not as overwhelming as you’d hope, but every little bit helps.

I think separating out how bullshit grows and spreads from bullshit itself is a really valuable concept. In classic epidemiology, disease causation is modeled using the “epidemiologic triad”, which looks like this (source):

If we consider bullshit a disease, based on the first three weeks I would propose its triad looks something like this:

[Image: the proposed triad of bullshit]

And on that note, I’ll see you next week for some causality lessons!

Week 4 is up! If you want to read it, click here.

1. If you want  a much less polite version of this rant with more profanity, go here.
2. My absolute favorite part of this study is that partway through they included an “attention check” that asked the participants to skip the answers and instead write “I read the instructions” in the answer box. Over a third of participants failed to do this. However, they pretty much answered the rest of the survey the way the other participants did, which kinda calls into question how important paying attention is if you’re listening to bullshit.
3. It’s not a scientific study and not just about bullshit, but for my money the single most important blog post ever written about the spread of information on the internet is this one right here. Warning: contains discussions of viruses, memetics, and every controversial political issue you can think of. It’s also really long.

Internet Science: Some Updates

Well, it’s that time of year again: back to school. Next week I am again headed back to high school to give the talk that spawned my Intro to Internet Science series last year.

This is my third year talking to this particular class, and I have a few updates that I thought folks might be interested in. It makes more sense if you read the series (or at least the intro to what I try to do in this talk), so if you missed that you can check it out here.

Last year, the biggest issue we ran into was kids deciding they can’t believe ANY science, which I wrote about here. We’re trying to correct that a bit this year, without losing the “be skeptical” idea. Since education research kinda has a replication problem, the things we’re trying generally just came out of discussions between the teacher and me.

  1. Skin in the game/eliminating selection bias In order to make the class a little more interactive, I’ve normally given the kids a quiz to kick things off. We’ve had some trouble over the years getting the kids’ answers compiled, so this year we’re actually giving them the quiz ahead of time. This means I’ll be able to have the results available before the talk, so I can show them during the talk. I’m hoping this will help me figure out my focus a bit. When I only know the feedback of the kids who want to raise their hands, it can be hard to know which issues really trip the class up.
  2. Focus on p-values and failure to replicate In the past during my Crazy Stats Tricks part, I’ve tried to cram a lot in. I’ve decided this is too much, so I’m just going to include a bit about failed replications. Specifically, I’m going to talk about how popular studies get repeated even when it turns out they weren’t true. Talking about Wonder Woman and power poses is a pretty good attention getter, and I like to point out that the author’s TED talk page contains no disclaimer that her study failed to replicate (Update: As of October 2016, the page now makes note of the controversy). It does however tell us it’s been viewed 35,000,000 times.
  3. Research checklist As part of this class, these kids are eventually going to have to write a research paper. This is where the whole “well we can’t really know anything” issue got us last year. So to end the talk, we’re going to give the kids this research paper checklist, which will hopefully help give them some guidance. Point #2 on the checklist is “Be skeptical of current findings, theories, policies, methods, data and opinions” so our thought is to basically say “okay, I got you through #2….now you have the rest of the year to work through the rest”. I am told that many of the items on that list meet the learning objectives for the class, so this should give the teacher something to go off of for the rest of the year as well.

Any other thoughts or suggestions (especially from my teacher readers!) are more than welcome. Wish me luck!

Pictures: Trying to Distract You (Part 4)

Note: This is part 4 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 3 here.

When last we met, we covered what I referred to as “narrative pictures”, or pictures that were being used to add to the narrative part of the story. In this section, we’re going to start looking at pictures where the problems are more technical in nature…i.e., graphs and sizing. This is really a blend of a picture problem and a proof problem, because these deceptions use numbers, not narratives. Since most of these issues are a problem of scales or size, I’m calling this section:

Graphs: Changing the View

Okay, so what’s the problem here?

The problem, once again, is a little bit of marketing. Have you ever judged the quality of a company based on how slick their website looks? Or judged a book by its cover, to coin a phrase? Well, it turns out we’re very similar when it comes to judging science. In a 2014 study (Update: the lab that performed this study has come under review for some questionable data practices. It is not clear if this study is affected, but you can read the details of the accusations here and here), researchers gave people two articles on the effectiveness of a made-up drug. The text of both articles was the same, but one had a graph that showed the number of people the drug helped and the other did not. Surprisingly, 97% of the people who saw the graph believed the drug worked, whereas only 68% of the people who read the text alone did. The researchers did a couple of other experiments, and basically found that not just graphs, but ANY “science-ish” pictures (chemical formulas, etc) influenced what people thought of the results.

So basically, people add graphs or other “technical” pictures to lend credibility to their articles or infographics, and you need to watch out for it.

Okay, so what kind of things should we be looking out for?

Well, in many cases this isn’t really a problem. Graphs or charts that reiterate the point of the article aren’t necessarily bad, and if the data warrants it, a chart reiterating the point is fantastic. It’s how nearly every scientific paper operates, and it’s not inherently deceptive….but you should be aware that these pictures, all by themselves, will influence your perception of the data.

There are some cases, though, where the graph is a little trickier. Let’s go through a few:

Here’s one from Jamie Bernstein over at Skepchick, who showed this great example in a Bad Chart Thursday post:

Issue: The graph’s y-axis shows percent growth, not absolute values. This makes hospitalized pica cases look several times more numerous than anorexia or bulimia cases. In reality, hospitalized anorexia cases are 5 times as common and bulimia cases 3 times as common as pica cases. Those numbers are given at the bottom, but the graph itself could trick you if you don’t read it carefully.
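
Here’s a toy version of that effect with invented numbers (not the real hospitalization figures): a rare condition can post huge percent growth while still accounting for far fewer absolute cases than a more common one:

```python
# Invented counts, not the real hospitalization data
cases_2000 = {"pica": 100, "anorexia": 900, "bulimia": 550}
cases_2010 = {"pica": 190, "anorexia": 950, "bulimia": 570}

for condition in cases_2000:
    growth = 100 * (cases_2010[condition] - cases_2000[condition]) / cases_2000[condition]
    print(f"{condition}: {growth:.0f}% growth, {cases_2010[condition]} cases")

# pica: 90% growth, 190 cases     <- tallest bar on a percent-growth chart
# anorexia: 6% growth, 950 cases  <- still about 5x the actual cases
# bulimia: 4% growth, 570 cases   <- still about 3x the actual cases
```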

How about this screen shot from Fox News, found here?

Issue: Visually, this chart suggests the tax rate will quadruple or quintuple if the Bush tax cuts aren’t extended. If the axis started at zero, however, the first bar would be about 90% the size of the second one.
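
To put a number on how much a chopped axis can exaggerate a difference, here’s a rough sketch. The 35 and 39.6 below are stand-ins consistent with the “about 90%” figure above, and the baseline of 34 is a hypothetical choice, not necessarily the chart’s actual value:

```python
def visual_ratio(a, b, baseline):
    """How many times taller the second bar looks when the axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

a, b = 35.0, 39.6   # stand-in values; treat as approximate

print(a / b)                   # ~0.88 -- the first value really is about 90% of the second
print(visual_ratio(a, b, 0))   # ~1.13 -- honest axis: second bar looks 13% taller
print(visual_ratio(a, b, 34))  # ~5.6  -- chopped axis: second bar looks several times taller
```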

How about this graph tweeted out by the National Review?

Issue: This one has the opposite problem: the axis does start at zero, but the scale is stretched so wide that the trend all but disappears. The Huffington Post covered this graph well here, along with what some other graphs would look like if you set the scale that large. Now of course there can be legitimate discussion over where a fair axis scale would be, but you should make sure the visual matches the numbers.
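
The flip side of chopping the axis, sketched with made-up data: the same slow-but-steady trend plotted on a scale fitted to the data and on a scale far wider than the data needs:

```python
import matplotlib.pyplot as plt

years = list(range(1990, 2021))
values = [10 + 0.05 * (y - 1990) for y in years]  # made-up series with a slow upward trend

fig, (ax_fit, ax_flat) = plt.subplots(1, 2, figsize=(8, 4))

# Scale fitted to the data: the upward trend is easy to see
ax_fit.plot(years, values)
ax_fit.set_ylim(9.5, 12)
ax_fit.set_title("Scale fitted to the data")

# Scale far wider than the data: the same trend looks like a flat line
ax_flat.plot(years, values)
ax_flat.set_ylim(0, 110)
ax_flat.set_title("Same data, enormous scale")

plt.tight_layout()
plt.show()
```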

And one more example that combines two issues in one:

See those little gas pumps right there? They’ve got two issues going on. The first is a start date that had an unusually low gas price:

[Image: chart of historical gas prices]

The infographic implies that Obama sent gas prices through the roof….but as we can see gas prices were actually bizarrely low the day he took office.  Additionally, the gas pumps involved are deceptive:

[Image: infographic comparing gas prices with two gas pump icons]

If you look, they’re claiming to show that prices doubled. However, the actual size of the second pump is four times that of the first. They doubled both the height and the width:

[Image: the two gas pumps, with the second doubled in both height and width]
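
The arithmetic behind that, as a quick check:

```python
height, width = 1.0, 1.0                       # relative size of the first pump
big_height, big_width = 2 * height, 2 * width  # "doubled" pump: both dimensions doubled

print((big_height * big_width) / (height * width))  # 4.0 -- the drawn area quadruples
```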

While I used a lot of political examples here, this isn’t limited to politics. Andrew Gelman caught the CDC doing it here, and even he couldn’t figure out why they’d have mucked with the axis.

There are lots of repositories for these, and Buzzfeed even did a listicle here if you want more. It’s fun stuff.

Why do we fall for this stuff?

Well, as we’ve said before, visual information can reinforce or skew your perceptions, and visual information with numbers can intensify that effect. This isn’t always a bad thing…after all, nearly every scientific paper ever published includes charts and graphs. When you’re reading for fun, though, it’s easy to let these things slip by. If you’re trying to process text, numbers, and implications AND read the x- and y-axes to make sure the numbers are fairly portrayed, it can be a challenge.

So what can we do about it?

A few years ago, I asked a very smart colleague how he was able to read and understand so many research papers so quickly. He seemed to read and retain a ton of highly technical literature, while I always found my attention drifting and would miss things. His advice was simple: start with the graphs. See, I would always try to read papers from start to finish, looking at graphs when they were cited. He suggested using the graphs to get a sense of the paper, then reading the paper with an eye towards explaining the graphs. I still do this, even when I’m reading for fun. If there’s a graph, I look at it first when I know nothing, then read the article to see if my questions about it get answered. It’s easier to notice discrepancies this way. At the very least, it reminds you that the graph should be there to help you. Any evidence that it’s not should make you suspicious of the whole article and the author’s agenda.

So that wraps up our part on Pictures! In part 5, we’ll finally reach the text of the article.

Read Part 5 here.