Note: I started trying to do a regular reading list here, but my reading list has sent me into an existential tailspin this month, so I’m going to just reflect a little on all of that, talk about some farming, and then remind you that artificial intelligence is probably the biggest threat to humanity you haven’t bothered worrying about today. Figured you might want the heads-up.
I don’t normally read novels, but my farmer brother loves Wendell Berry and has been encouraging me to read Jayber Crow for quite some time, and a few weeks ago I actually got around to it. It’s one of his “Port William” novels, which all take place from the perspectives of various members of a fictitious town in Kentucky, starting in the 1920s and ending in the 1970s. There are a lot of religious and theological themes in the novel that quite a few of my readers will probably have opinions about, but that’s not what caught my eye. What intrigued me was how, in 300 pages or so, the novel takes the main character from a young man to an old man, and the reflections on how his slice of America and its approach to land changed in that time, and the impact that had on the community. If you don’t have a farming philosophy (or didn’t spend most of your childhood around people who had one, whether they called it that or not), this may not strike you the way it did me. I grew up hearing about my grandfather’s approach (he would have been about Jayber’s age) and how my uncle tried to change it, how my other uncle took it over when my grandfather died, and then how my brother continues the tradition. Land use as a reflection of greater social change is kind of a thing in my family, and it was interesting to see that captured in novel form. The subtle influence of technology on perceptions of land and farming was also rather fascinating. Also, at this point it’s kind of nice to read a recounting of the 20th century that’s not entirely Boomer-centric.
Concurrent with that book, I also read “But What If We’re Wrong? Thinking About the Present As If It Were the Past” by Chuck Klosterman. In it, Klosterman reflects on all the ways we talk about the past, and continually reminds us that people in 100 years will remember us quite differently than we like to think. He points out that we all know this in general, but if you point to anything specific, our first reaction is to get defensive and explain that whatever particular idea we’re discussing is one of the ones that will endure. We’re willing to acknowledge that the future will be different, but only if that difference is familiar.
To further mess with my self-perception, I still haven’t entirely recovered from reading Antifragile a few months ago. There’s a lot of good stuff in that book, including a discussion of the Lindy Effect, a helpful rule of thumb for which ideas will actually persist in 100 years (roughly: the longer an idea has already been around, the longer you can expect it to stick around). But what Taleb is really famous for is his concern about Black Swans: events that are unexpected, hard to predict, and not what we were focusing on. They shape history dramatically, yet we all forget about them because we focus our statistical predictions on things that have a prior probability (if you want to blame the Bayesians) or things that carry the larger known risks (if you want to blame the frequentists).
So into this mix of risk and uncertainty and reflections on the past and future, I decided to start reading more about artificial intelligence (AI) risk. For the sake of my insomnia, this was an error. However, now that I’m here, I’d like to share the pain. If you want an exceptionally good comprehensive overview of where we’re at, try the Wait But Why post on the topic, or for something shorter try this. If you’re feeling really lazy, let me summarize:
1. We are racing like hell to create something smarter than ourselves
2. We will probably succeed, and sooner than you might think
3. Once we do that, pretty much by definition whatever we create will start improving itself faster than we can, towards goals that are not the same as ours
4. The ways this could go horribly wrong are innumerable
5. Almost no one appears worried about this
By #5, of course, I mean no one on my Facebook feed. Bill Gates, Stephen Hawking, and Elon Musk are actually all pretty damn concerned about this. The problem is that something like this is so far outside our current experience that it’s hard for most people to even conceive of it being a risk…but that lack of familiarity doesn’t actually translate into a lack of risk. If you want the full blow-by-blow, I suggest you go back and read the articles I suggested, but here’s a quick story to illustrate why AI is so risky:
You know that old joke where someone starts pouring you water or serving you food and says “say when,” then fails to stop when you say things like “stop” or “enough” or “WHAT ARE YOU DOING YOU MANIAC” because “you didn’t say when!”? Well, I know people who will take that joke pretty far. They will spill water on the table or overflow your plate or do whatever they need to sell the joke. However, I have never met a person who will go back to the faucet and get more water just to come back to the table and keep pouring. That’s a line that all humans, no matter how dedicated to their joke, understand. It wouldn’t even occur to most people. When we’re talking about computers, though, that line doesn’t exist. Computers keep going. Anyone who’s ever accidentally crashed a program by creating an infinite while loop knows this. The oldest joke in the coding world is “the problem with computers is that they do exactly what you tell them to.” Even the most malicious humans have a fundamental bias towards keeping humanity in existence. Maybe not individuals, but the species as a whole is normally not a target. AI won’t have this bias.
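For the non-coders, here’s a minimal toy sketch of that infinite while loop, rendered as the “say when” joke. This is my own illustrative Python, with every name invented for the example, not anything from the articles above:

```python
# A toy "say when" pourer. A human playing the joke stops at the
# table's edge; the program stops exactly where the code says to
# stop, and nowhere else.
def pour_until(say_when):
    glass = 0
    while not say_when(glass):  # the ONLY brake is this explicit check;
        glass += 1              # no common sense about table edges applies
    return glass

print(pour_until(lambda ounces: ounces >= 8))  # stops at 8, as intended

# Mis-state the stopping condition and it never stops -- the classic
# accidental infinite while loop:
# pour_until(lambda ounces: False)  # "you didn't say when!" forever
```

The point isn’t that anyone writes that second loop on purpose; it’s that nothing outside the explicit stopping condition will ever make the program stop. There is no table’s edge.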
Now, I’m not saying we’re all doomed, but I’m definitely hanging out on Anxious Avenue here, to borrow the Wait But Why post’s term for it.
The fact that almost everyone I know has spent more time thinking about their opinion on Donald Trump’s hair than AI risk doesn’t help. At a bare minimum, this should at least register on the national list of things people talk about, and I don’t even think it’s in the top 1000.
On the minus side, this reading list has made me a little pensive. On the plus side, I’m kind of like that anyway. On the DOUBLE plus side, bringing up AI risk in the middle of any political conversation is an incredibly useful tool for getting people to stop talking to you about opinions you’re sick of hearing.
Anyway, if you’d like to send me some lighter and happier reading for February, I might appreciate it.