Idea Selection and Survival of the Fittest

It probably won’t come as a shock to you that I spend a lot of time ruminating over why there are so many bad ideas on the internet. Between my Intro to Internet Science, my review of the Calling Bullshit class, and basically every other post I’ve written on this site, I’ve put a lot of thought into this.

One of the biggest questions that comes up when you talk about truth in the social media age is a rather basic one: are we seeing something new here, or just more of what’s always happened? And what are the implications for us as humans in the long run? It’s a question I’ve struggled with a lot, and I’ve gone back and forth in my thinking. On the one hand, there’s the idea that social media simply gives bigger platforms to bad actors and gives the rest of us more situations in which we may be opining about things we don’t know much about. On the other hand, there’s the idea that something is fundamentally changing, and that it’s going to corrupt our way of relating to each other and to the truth going forward. Yeah, this and AI risk are pretty much what keep me up at night.

Thus I was interested this week to see this Facebook post by Eliezer Yudkowsky about the proliferation of bad ideas on the internet. The post is from July, but I think it’s worth mentioning. It’s long, but in it Yudkowsky raises the theory that we are seeing the results of hypercompetition of ideas, and they aren’t pretty.

He starts by pointing out that in other fields, we’ve seen that some pressure/competition is good, but too much can be bad. He uses college admissions and academic publishing as two examples. Basically, if you have 100 kids competing for 20 slots, you may get all the kids to step up their game. If you have 10,000 kids competing for 1 slot, you get widespread cheating and test prep companies that get compared to cartels. Requiring academics to show their work is good; “publish or perish” leads to shoddy practices and probably the whole replication crisis. As Goodhart’s law states, “When a measure becomes a target, it ceases to be a good measure.” In practical terms, hypercompetition produces a group that optimizes for one thing and one thing only, while leaving the back door completely unguarded.
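To make that selection dynamic concrete, here’s a toy simulation (my own construction, not anything from Yudkowsky’s post). It assumes each applicant has an unobserved genuine quality and a separate metric-gaming skill, that the selector can only rank on their sum, and that gaming is heavier-tailed than quality, since manipulating a measure scales more easily than improving the real thing. All names and parameters are illustrative.

```python
import random

def run_selection(num_applicants, num_slots, seed=0):
    """Toy model: each applicant has a genuine 'quality' and a separate
    'gaming' skill (test prep, metric manipulation). The selector sees
    only the combined score; we then check what the winners are
    actually made of."""
    rng = random.Random(seed)
    applicants = [
        # Assumption: gaming is heavier-tailed than quality, because
        # gaming a metric scales more easily than genuine ability.
        {"quality": rng.gauss(0, 1), "gaming": rng.gauss(0, 2)}
        for _ in range(num_applicants)
    ]
    # The metric rewards quality and gaming indistinguishably.
    winners = sorted(applicants,
                     key=lambda a: a["quality"] + a["gaming"],
                     reverse=True)[:num_slots]
    avg_quality = sum(a["quality"] for a in winners) / num_slots
    avg_gaming = sum(a["gaming"] for a in winners) / num_slots
    return round(avg_quality, 2), round(avg_gaming, 2)

# Moderate competition: 100 applicants for 20 slots.
print(run_selection(100, 20))    # winners tend to be decent on both axes
# Hypercompetition: 10,000 applicants for 1 slot.
print(run_selection(10_000, 1))  # the winner's score is typically mostly gaming
```

Under the 100-for-20 regime the winners tend to be reasonably good on both axes; under the 10,000-for-1 regime the lone winner sits so far out in the tail that the heavier-tailed gaming dimension supplies most of the score. That’s Goodhart’s law in miniature: the harder you select on the measure, the more you select for whatever inflates the measure most cheaply.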

Now take that entire idea and apply it to news and information in the social media age. While there are many good things about democratizing the spread of information, we have gone from moderate competition (get a local newspaper or major news network to pay attention to you, and everyone will see your story) to hypercompetition (anyone can get a story out there, but it has to compete with billions of other stories to be read). With that much competition, we are almost certainly not going to see the best or most important stories rise to the top, but rather the ones that have figured out how to game the system, whether digital, limbic, or some combination of the two. That’s what gets us to Toxoplasma of Rage territory, where the stories that make the biggest splash are the ones that play on your worst instincts. As Yudkowsky puts it, “Print magazines in the 1950s were hardly perfect, but they could sometimes get away with presenting a complicated issue as complicated, because there weren’t 100 blogs saying otherwise and stealing their clicks”.

Depressed yet? Let’s keep going.

Stories that win by playing to your worst instincts clearly don’t have a great effect on the general population, but what happens to those who are raised on nothing but them? In one of my favorite lines of the post, Yudkowsky says, “If you look at how some groups are talking and thinking now, ‘intellectually feral children’ doesn’t seem like entirely inappropriate language.” I’ve always thought of hyperreality in terms of virtual reality vs physical reality or artificial intelligence vs human intelligence, but what if we are kicking that off all on our own? Wikipedia defines it as “an inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced postmodern societies,” and isn’t that exactly what we’re seeing here on many topics? People use technology, intentionally or unintentionally, to build bubbles that skew their view of how the world works, while consistently getting reinforcement that they are correct.

Now of course it’s entirely possible that this is just a big “get off my lawn” post and that we’ll all be totally fine. It’s also entirely possible that I should not unwind from long weeks by drinking Pinot Noir and reading rationalist commentary on the future of everything, as it seems to exacerbate my paranoid tendencies. However, I do think that much of what’s on the internet today is the equivalent of junk food, and living in an environment full of junk food doesn’t seem to be working out too well for many of us. In physical health, we may have reached the point where our gains begin to erode, and I don’t think it’s crazy to think something similar could happen intellectually. Being a little more paranoid about why we’re seeing certain stories or why we’re clicking on certain links may not be the worst thing. For those of us with still-developing kids, making sure their ideas get challenged may be progressively more critical.

Good luck out there.

One thought on “Idea Selection and Survival of the Fittest”

  1. I remember how incredibly real the first CD I heard sounded. I remember how incredibly real the Broderbund logo for Carmen Sandiego looked on the Apple IIGS. It looks laughable now. The brain does adapt and begins to see through the fakes, at least for visual and auditory.

Yet I am not sure the same applies to ideas, and for those, is there a limit?

