Human Bias vs Machine Bias

One of the most common running themes on this blog is human bias, a topic I clearly believe deserves quite a bit of attention. In recent years though, I have started thinking a lot more about machine bias, and how it is slowly influencing more of our lives. In thinking about machine bias, I've noticed recently that many people (myself included) tend to anthropomorphize it and evaluate it as though it were bias coming from a human with a particularly widespread influence. Since anthropomorphism is a cognitive bias itself, I thought I'd take a few minutes today to talk about things we should keep in mind when talking about computers/big data algorithms/search engine results. Quite a few of my points here will be based on the recent kerfuffle around Facebook offering to target your ads to "Jew haters", the book Weapons of Math Destruction, and the big data portion of the Calling BS class. Ready? Here we go!

  1. Algorithmic bias knows no "natural" limit. There's an old joke where someone, normally a prankster uncle, tells a child that they'll stop pouring their drink when they "say when". When the child subsequently says "stop" or "enough" or some other non-"when" word, the prankster keeps pouring and the glass overflows. Now, a normal prankster will pour a couple of extra tablespoons of milk on the table. An incredibly dedicated prankster might pour the rest of the container. An algorithm in this same scenario would not only finish the container but run out to the store and buy more so it could continue pouring until you realized you were supposed to say when. Nearly every programming 101 class starts at some point with a professor saying "the nice part is, computers do what you tell them. The downside is, computers do what you tell them." Thus, despite the fact that no sane person, even a fairly anti-Semitic one, would request an advertising group called "Jew haters", a computer will return a result like this if it hits the right criteria.
  2. Thoroughness does not indicate maliciousness. Back in the 90s, there was a sitcom called "Spin City" about a fictional group of people in the mayor's office in New York City. At one point the lone African American in the office discovered that their word processing software could find certain words and replace them with others, so in an attempt to make the office more PC, he set it up to replace the word "black" with "African-American". This of course promptly led to the mayor's office inviting some constituents to an "African-American tie dinner", and canned laughter ensued. While the situation is fictional, this stuff happens all the time. When people talk about the widespread nature of an algorithm's bias, there's always a sense that some human had to put extra effort into making the algorithm do absurd things, but it's almost always the opposite: you have to think of all the absurd things the algorithm could do ahead of time in order to stop them. Facebook almost certainly got into this mess by asking its marketing algorithm to find often-repeated words in people's profiles and aggregate those for its ads (see the sketch after this list). In doing so, it forgot that the algorithm would not filter for "clearly being an asshole" and exclude that from the results.
  3. While algorithms are automatic, fixing them is often manual. Much like your kid blurting out embarrassing things in public, finding out your algorithm has done something embarrassing almost certainly requires you to intervene. However, this can be like a game of whack-a-mole, as you still don't know when these issues are going to pop up. Even if you exclude every ad group that goes after Jewish people, the chances that some other group has a similar problem are high. It's now on Facebook to figure out who those other groups are and wipe the offending categories from the database one by one. The chances they'll miss some iteration of this are high, and then it will hit the news again in a year. With a human, this would be a sign they didn't really "learn their lesson" the first time, but with an algorithm it's more a sign that no one foresaw the other ways it might screw up.
  4. It is not overly beneficial to companies to fix these things, EXCEPT to avoid embarrassment. Once they're up and running, algorithms tend to be quite cheap to maintain, until someone starts complaining about them. As long as their algorithms are making money and no one is saying anything negative, most companies will assume everything is okay. Additionally, since most of these algorithms are proprietary, people outside the company almost never get insight into their weaknesses until they see a bad result so obvious they realize what happened. In her book Weapons of Math Destruction, Cathy O'Neil tells an interesting story about one teacher's attempt (and repeated failure) to get an explanation for why an algorithm called her deficient despite excellent reviews, and why so much faith was put in it that she was fired. She never got an answer, and ultimately got rehired by an (ironically better funded, more prestigious) district. One of O'Neil's major takeaways is that people will put near unlimited trust in algorithms, while not realizing that the algorithm's decision-making process could be flawed. It would be nearly impossible for a human to wield that much power while leaving so little trace, as every individual act of discrimination or unfairness would leave a trail. With a machine, it's just the same process applied over and over.
  5. Some groups have more power than others to get changes made, because some people who get discriminated against won't be part of traditional groups. This one seems obvious, but hear me out. Yes, if your computer program ends up tagging photos of black people as "gorillas", you can expect the outcry to be swift. But with many algorithms, we don't know if there are new groups we've never thought of that are being discriminated against. I wrote a piece a while ago about a company that set its default location for unknown IP addresses to the middle of the country, and inadvertently caused a living nightmare for the elderly woman who happened to own the house closest to that location. This woman had no idea why angry people kept showing up at her door, and no idea what questions to ask to find out why they got there. We're used to traditional biases that cover broad groups, but what if a computer algorithm decided to exclude men who were exactly age 30? When would someone figure that out? We have no equivalent in human bias for such oddly specific groups, and probably won't notice them. Additionally, groups with less computer savvy will be discriminated against, solely due to the "lack of resources to troubleshoot the algorithm" issue. The poor. Older people. Those convicted of crimes. The list goes on.
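To make point 2 a little more concrete, here's a minimal sketch of how a keyword-aggregation step could surface an offensive ad category without anyone asking for it. The profile data, field names, and threshold are all invented for illustration; this is a guess at the general shape of the process, not Facebook's actual code.

```python
from collections import Counter

# Hypothetical profile data -- field names and values are made up for illustration.
profiles = [
    {"education": "jew hater", "employer": "acme corp"},
    {"education": "state university", "employer": "jew hater"},
    {"field of study": "jew hater", "employer": "self employed"},
]

def build_ad_categories(profiles, min_count=3):
    """Aggregate often-repeated profile phrases into ad-targeting categories.

    Notice what is missing: nothing here checks whether a phrase is
    offensive, so anything typed often enough becomes a "category".
    """
    counts = Counter()
    for profile in profiles:
        for value in profile.values():
            counts[value.lower()] += 1
    return [phrase for phrase, n in counts.items() if n >= min_count]

print(build_ad_categories(profiles))  # ['jew hater'] -- nobody asked for this on purpose
```

The offensive output isn't the result of any extra effort; it's the result of effort that was never spent on a filter.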

Overall, things aren't entirely hopeless. There are efforts underway to come up with software that can systematically test "black box" algorithms on a larger scale to help identify biased algorithms before they can cause problems. However, until something reliable can be found, people should be aware that the biases we're looking for are not the ones you would normally see if humans were running the show. One of the reasons AI freaks me out so much is because we really do all default to anthropomorphizing the machines and only look out for the downsides that fit our preconceived notions of how humans screw up. While this comes naturally to most of us, I would argue it's one of the more dangerous forms of underestimating a situation we have going today. So uh, happy Sunday!

Calling BS Read-Along Week 7: Big Data

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we'll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 6, click here.

Well hello week 7! This week we're taking a look at big data, and I have to say this is the week I've been waiting for. Back when I first took a look at the syllabus, this was the topic I realized I knew the least about, despite the fact that it is rapidly becoming one of the biggest issues in bullshit today. I was pretty excited to get into this week's readings, and I was not disappointed. I ended up walking away with a lot to think about, another book to read, and a decent amount to keep me up at night.

Ready? Let's jump right into it!

First, I suppose I should start with at least an attempt at defining "big data". I like the phrase from the Wiki page here: "Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time." Forbes goes further and compiles 12 definitions here. If you come back from that rabbit hole, we can move into the readings.

The first reading for the week is "Six Provocations for Big Data" by danah boyd and Kate Crawford. The paper starts off with a couple of good quotes (my favorite: "Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care") and a good vocab word/warning for the whole topic: apophenia, the tendency to see patterns where none exist. There's a lot in this paper (including a discussion about what Big Data actually is), but the six provocations the title talks about are:

  1. Automating Research Changes the Definition of Knowledge Starting with the example of Henry Ford using the assembly line, boyd and Crawford question how radically Big Data’s availability will change what we consider knowledge. If you can track everyone’s actual behavior moment by moment, will we end up de-emphasizing the why of what we do or broader theories of development and behavior? If all we have is a (big data) hammer, will all human experience end up looking like a (big data) nail?
  2. Claims to Objectivity and Accuracy are Misleading I feel like this one barely needs to be elaborated on (and is true of most fields), but it also can’t be said often enough. Big Data can give the impression of accuracy due to sheer volume, but every researcher will have to make decisions about data sets that can introduce bias. Data cleaning, decisions to rely on certain sources, and decisions to generalize are all prone to bias and can skew results. An interesting example given was the original Friendster (Facebook before there was Facebook for the kids, the Betamax to Facebook’s VHS for the non-kids). The developers had read the research that people in real life have trouble maintaining social networks of over 150 people, so they capped the friend list at 150. Unfortunately for them, they didn’t realize that people wouldn’t use online networks the same way they used networks in real life. Perhaps unfortunately for the rest of us, Facebook did figure this out, and the rest is (short term) history.
  3. Bigger Data are Not Always Better Data Guys, there’s more to life than having a large data set. Using Twitter data as an example, they point out that large quantities of data can be just as biased (one person having multiple accounts, non-representative user groups) as small data sets, while giving some people false confidence in their results.
  4. Not all Data are Equivalent With echoes of the Friendster example from the second point, this point flips the script and points out that research done using online data doesn't necessarily tell us how people interact in real life. Removing data from its context loses much of its meaning.
  5. Just Because it’s Accessible Doesn’t Make it Ethical The ethics of how we use social media isn’t limited to big data, but it definitely has raised a plethora of questions about consent and what it means for something to be “public”. Many people who would gladly post on Twitter might resent having those same Tweets used in research, and many have never considered the implications of their Tweets being used in this context. Sarcasm, drunk tweets, and tweets from minors could all be used to draw conclusions in a way that wouldn’t be okay otherwise.
  6. Limited Access to Big Data Creates New Digital Divides In addition to all the other potential problems with big data, the other issue is who owns and controls it. Data is only as good as your access to it, and of course nothing obligates companies who own it to share it, or share it fairly, or share it with people who might use it to question their practices. In assessing conclusions drawn from big data, it’s important to keep all of those issues in mind.

The general principles laid out here are a good framing for the next reading, The Parable of Google Flu, an examination of why Google's Flu Trends algorithm consistently overestimated influenza rates in comparison to CDC reporting. This algorithm was set up to predict influenza rates based on the frequency of various search terms in different regions, but it overestimated rates in 100 of the 108 weeks examined, sometimes by quite a bit. The paper contains a lot of interesting discussion about why this sort of analysis can err, but one of the most interesting factors was Google's failure to account for Google itself. The algorithm was created/announced in 2009, and some updates were announced in 2013. Lazer et al. point out that over that time period Google was constantly refining its search algorithm, yet the model appears to assume that all Google searches are done only in response to external events like getting the flu. Basically, Google was attempting to change the way you search while assuming that no one could ever change the way you search. They call this internal software tinkering "blue team" dynamics, and point out that it's going to be hell on replication attempts. How do you study behavior across a system that is constantly trying to change behavior? Also considered are "red team" dynamics, where external parties try to "hack" the algorithm to produce the results they want.
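To make that hidden assumption concrete, here's a minimal sketch of the kind of model involved: predict a flu rate from search-term volume, fit once, then reuse the weights. The numbers, terms, and single-regression setup are all made up for illustration; Google Flu Trends' actual feature set and model were far more elaborate.

```python
import numpy as np

# Made-up weekly counts for three flu-related search terms (columns) and the
# CDC-reported influenza-like-illness rate for the same weeks. Not real GFT data.
search_counts = np.array([
    [120.0,  40.0, 15.0],
    [200.0,  80.0, 30.0],
    [310.0, 150.0, 60.0],
    [250.0, 120.0, 45.0],
])
cdc_ili_rate = np.array([1.1, 1.9, 3.2, 2.5])

# Fit weights mapping search volume to flu rate (ordinary least squares).
weights, *_ = np.linalg.lstsq(search_counts, cdc_ili_rate, rcond=None)

# The hidden assumption: these weights only stay valid if people keep searching
# the same way. When Google changes autocomplete or ranking ("blue team"
# tinkering), search volume shifts for reasons that have nothing to do with
# the flu, and predictions like this one quietly drift.
new_week = np.array([280.0, 130.0, 50.0])
print(new_week @ weights)
```

The fit itself is fine; the trouble is everything outside it that the weights silently assume stays constant.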

Finally we have an opinion piece from a name that seems oddly familiar, Jevin West, called "How to improve the use of metrics: learn from game theory". It's short, but got a literal LOL from me with the line "When scientists order elements by molecular weight, the elements do not respond by trying to sneak higher up the order. But when administrators order scientists by prestige, the scientists tend to be less passive." West points out that when attempting to assess a system that can respond immediately to your assessment, you have to think carefully about what behavior your chosen metrics reward. For example, currently researchers are rewarded for publishing a large volume of papers. As a result, there is concern over the low quality of many papers, since researchers will split their findings into the "least publishable unit" to maximize their output. If the incentives were changed to instead have researchers judged based on only their 5 best papers, one might expect the behavior to change as well. By starting with the behaviors you want to motivate in mind, you can (hopefully) create a system that encourages those behaviors.
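As a toy illustration of West's point (with made-up citation counts, not any real scoring rule), here are the two metrics side by side; the "optimal" publishing strategy flips depending on which one you're judged by.

```python
# Made-up citation counts for two hypothetical researchers: A slices work into
# many "least publishable units", B writes fewer, more substantial papers.
papers_a = [5, 4, 6, 3, 5, 4, 6, 5, 3, 4]
papers_b = [40, 35, 28, 22, 18]

def count_metric(papers):
    """Reward volume: every additional paper helps your score."""
    return len(papers)

def top_five_metric(papers):
    """Reward quality: only your five best papers count."""
    return sum(sorted(papers, reverse=True)[:5])

print(count_metric(papers_a), count_metric(papers_b))        # 10 vs 5  -> A looks better
print(top_five_metric(papers_a), top_five_metric(papers_b))  # 27 vs 143 -> B looks better
```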

In addition to those readings, there are two recommended readings that are worth noting. The first is Cathy O'Neil's Weapons of Math Destruction (a book I've started but not finished), which goes into quite a few examples of problematic algorithms and how they affect our lives. Many of O'Neil's examples get back to point #6 from the first paper in ways most of us don't consider. Companies maintaining control over their intellectual property seems reasonable, but what if you lose your job because your school system bought a teacher ranking algorithm that said you were bad? What's your recourse? You may not even know why you got fired or what you can do to improve. What if the algorithm is using a characteristic that it's illegal or unethical to consider? Here O'Neil points to sentencing algorithms that give harsher jail sentences to those with family members who have also committed a crime. Because the algorithm is supposedly "objective", it gets away with introducing facts (your family members' involvement in crimes you didn't take part in) that a prosecutor would have trouble getting past a judge under ordinary circumstances. In addition, some algorithms can help shape the very future they say they are trying to predict. Why are Harvard/Yale/Stanford the best colleges in the US News rankings? Because everyone thinks they're the best. Why do they think that? Look at the rankings!

Finally, the last paper is from Peter Lawrence with "The Mismeasurement of Science". In it, Lawrence lays out an impassioned case that the current structure around publishing causes scientists to spend too much time on the politics of publication and not enough on actual science. He also questions heavily who is rewarded by such a system, and whether those are the right people. It reminded me of another book I've started but not finished yet, "Originals: How Non-Conformists Move the World". In that book Adam Grant argues that if we use success metrics based on past successes, we will inherently miss those who might have a chance at succeeding in new ways. Nassim Nicholas Taleb makes a similar case in Antifragile, where he argues that some small percentage of scientific funding should go to "Black Swan" projects: the novel, crazy, controversial, destined-to-fail type research that occasionally produces something world-changing.

Whew! A lot to think about this week and these readings did NOT disappoint. So what am I taking away from this week? A few things:

  1. Big data is here to stay, and with it come ethical and research questions that may require new ways of thinking about things.
  2. Even with brand new ways of thinking about things, it's important to remember that many of the old rules still apply.
  3. A million-plus data points does not equal scientific validity.
  4. Measuring systems that can respond to being measured should be approached with some idea of what you'd like that response to be, along with a plan to adjust if you get unintended consequences.
  5. It is increasingly important to scrutinize sources of data, and to remember what might be hiding in "black box" algorithms.
  6. Relying too heavily on the past to measure the present can increase the chances you’ll miss the future.

That’s all for this week, see you next week for some publication bias!

Week 8 is up! Read it here.

Digital Nightmares and Things We Don’t Know

It took me a few years of working with data before I realized what my primary job was. You see, back when I was a young and naive little numbers girl, I thought my primary job was to use numbers to expand what we knew about topics. I would put together information, hopefully gain some new insights, and pass the data on thinking my job was done.

It didn’t take me long before I realized the job was barely half finished.

You see, getting new insights from data is good and important, but it’s no more important than what comes next. As soon as you have data that says “x”, the natural inclination of almost everybody is to immediately extrapolate that out to say “Oh great! So we know x, which means we know y and z too!”.  It’s then that my real job kicks in.  Defending, defining and reiterating the limitations of data is a constant struggle, but if you are going to be honest about what you’re doing it’s essential.

I bring this up because I ran across a disturbing story that illustrates how damaging it can be when we don’t read the fine print about our data.  The whole story is here (along with the great subtitle “The Hills Have IPs”), and it’s about one family’s tech-induced ten year nightmare.

The short version: ten years ago, a company called MaxMind started a business helping people identify locations for the IP addresses associated with particular computers. When they couldn't find a location, they set up a default at the geographic center of the USA. Unbeknownst to the company, this default got associated with the street address of a small farmhouse in Kansas. Over the next decade, every person who attempted to track down an IP address that couldn't otherwise be located (about 600 million of them) was given this address, which caused a constant stream of irate people, law enforcement and others to show up at the door of this farmhouse believing that's where their hacker/iPhone thief/caller/harasser etc. lives. The family had no idea why this was happening, and the local police department literally says the bulk of their job is now keeping angry and confused people away from this family.
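Mechanically, this is roughly a default value problem. Here's a sketch of how a fallback like that behaves; the lookup table and coordinates are made up, and this is not MaxMind's actual database or API.

```python
# Made-up lookup table -- not MaxMind's database or API.
KNOWN_IP_LOCATIONS = {
    "203.0.113.7": (47.61, -122.33),  # an IP we can actually pin down
}

# "Somewhere in the US" encoded as a single point near the country's
# geographic center -- which also happens to be somebody's front yard.
US_CENTER_DEFAULT = (38.0, -97.0)

def locate_ip(ip_address):
    """Return coordinates for an IP, falling back to a country-level default.

    On the fallback path the honest precision is "country", but the output
    looks exactly as precise as a true street-level match -- which is how a
    database default turns into a decade of visitors at one farmhouse.
    """
    return KNOWN_IP_LOCATIONS.get(ip_address, US_CENTER_DEFAULT)

print(locate_ip("198.51.100.23"))  # unknown IP -> (38.0, -97.0)
```

The caller gets a tidy pair of coordinates either way, which is exactly why the fine print about accuracy matters.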

The reporter who wrote the article (seriously, go read it) is the first person to put two and two together and actually figure out where the mix up happened.

What's interesting about this story is that when it was brought to their attention, the company pointed out they have actually ALWAYS told customers not to trust the addresses given. They have always told people that results were only accurate to within a zip code or town. It's not surprising that many individuals failed to recognize this, but it IS concerning that so many law enforcement agencies failed to take it into account. This isn't just local departments either…the FBI and IRS have investigated the address several times.

Want to know the scariest part? The reporter only figured this out by going through the company's records and then having someone build a computer program to find physical addresses associated with high numbers of IP addresses. While the Kansas farm was the worst, there were hundreds of other addresses with similar problems, including the hub for lost iPhones that started her crusade in the first place. Without people grasping the limitations of this data, all of these homes are subject to people showing up angry, believing that someone else lives there.

As technology and the "big data" era expand, knowing what you don't know is going to become increasingly critical. Small errors made at any one point in the system can and will be magnified over time until they cause real trouble. The fine print may never be as interesting as the big reveal, but it could save you a lot of trouble in the long run.