(Deleted section on why I thought cultural general-intelligence software was not much of the work of AGI:)

...because the soft fidelity of implicit, unconscious cultural transmission can store less serially deep and intricate algorithms than the high-fidelity DNA transmission used to store the kind of algorithms that appear in computational neuroscience.

I recommend Terrence Deacon's The Symbolic Species for some good discussion of the surprising importance of the shallow algorithms and parameters that can get transmitted culturally. The human-raised chi…


I'm a little late to the game here, but I have a small issue with the above.

I don't think it is accurate to estimate the size of changes in such a manner, as there is an enormous complex of transcription factors creating interplay between small changes, some of which we may never see any actual trace of, or which are located outside the genome yet still affect it. SNPs are important (such as those in FOXP2) but not the be-all and end-all for those expressions either; epigenetic factors can drive selection just as effectively as chance mutation creates adv…

gjm: I'm like 96% sure it was intended to apply to the question of how much of the work in making an AGI is about "cultural general-intelligence software". But yeah, I agree that if we destroy our civilization it could take a long time to get it back. Not just because building a civilization takes a long time; also because there are various resources we've probably consumed most of the most accessible bits of, and not having such easy access to coal and oil and minerals could make building a new civilization much harder. But I'm not sure what hangs on that (as opposed to the related but separate question of whether we would rebuild civilization if we lost it) -- the destruction of human civilization would be a calamity, but I'm not sure it would be a much worse calamity if it took 300k years to repair than if it took "only" 30k years.
Benquo: I think it matters because of what it implies about how hard a target civilization is to reach. Even if the 300k year process could be sped up a lot by knowing what we're aiming for, it's evidence that the end result was a much weaker natural attractor than our current state is, from a starting point of founding civilization at all.

Is Clickbait Destroying Our General Intelligence?

by Eliezer Yudkowsky · 16th Nov 2018


(Cross-posted from Facebook.)

Now and then people have asked me if I think that other people should also avoid high school or college if they want to develop new ideas. This always felt to me like a wrong way to look at the question, but I didn't know a right one.

Recently I thought of a scary new viewpoint on that subject.

This started with a conversation with Arthur where he mentioned an idea by Yoshua Bengio about the software for general intelligence having been developed memetically. I remarked that I didn't think duplicating this culturally transmitted software would be a significant part of the problem for AGI development. (Roughly: low-fidelity software tends to be algorithmically shallow. Further discussion moved to comment below.)

But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an important gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point.

A scary question is whether it's possible to do subtler damage to the culturally transmitted software of general intelligence.

I've had the sense before that the Internet is turning our society stupider and meaner. My primary hypothesis is "The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough."

To review, there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside.
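
As a toy sketch of that claim (my illustration, not anything from the original post): suppose each idea splits a fixed effort budget between appeal and substance, and an audience keeps only the top few ideas ranked by noisily observed appeal. The function surviving_substance and every number below are illustrative assumptions, not a validated model.

```python
import random

def surviving_substance(n_ideas, n_survivors, noise=0.3, seed=0):
    """Mean substance of the ideas kept after ranking on appeal.

    Each idea puts a random fraction e of a fixed effort budget into
    appeal and the remaining 1 - e into substance; observed appeal is
    e plus Gaussian noise.
    """
    rng = random.Random(seed)
    ideas = []
    for _ in range(n_ideas):
        e = rng.random()                      # effort spent on appeal
        appeal = e + rng.gauss(0.0, noise)    # noisy appeal signal
        ideas.append((appeal, 1.0 - e))       # (ranking key, substance)
    kept = sorted(ideas, reverse=True)[:n_survivors]
    return sum(substance for _, substance in kept) / n_survivors

# Weak selection from a small field vs. harsh selection from a huge one:
print(surviving_substance(n_ideas=20, n_survivors=10))       # roughly 0.3-0.4
print(surviving_substance(n_ideas=100_000, n_survivors=10))  # near zero
```

Turning up either knob, a larger field of competing ideas or fewer slots for survivors, pushes the surviving ideas toward pure appeal; that is the same mechanism as the "much wider field of memes" point below.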

On my feed yesterday was an essay complaining about how the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives. Gentle, soft competition on a few accomplishments might select genuinely stronger students; hypercompetition for the appearance of strength produces weakness, or just emptiness.

A hypothesis I find plausible is that the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed content to be tailored more narrowly to particular audiences. The Internet is making it possible for ideas that are optimized to appeal hedonically-virally within a filter bubble to outcompete ideas that have been even slightly optimized for anything else. We're looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there's no room for that. We're looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated, there's little incentive to show people how to be fair in their judgment of ideas from other bubbles; that's not the most appealing Tumblr content. Print magazines in the 1950s were hardly perfect, but they could get away with sometimes presenting complicated issues as complicated, because there weren't a hundred blogs saying otherwise and stealing their clicks. Or at least, that's the hypothesis.

It seems plausible to me that basic software for intelligent functioning is being damaged by this hypercompetition. Especially in a social context, but maybe even outside it; that kind of thing tends to slop over. When someone politely presents you with a careful argument, does your cultural software tell you that you're supposed to listen and make a careful response, or to make fun of the other person and then laugh about how they're upset? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e., debate consequences of paths and hew hard to the best one), or is everything you see just people competing to be loud in their identification? The Occupy movement not having any demands or agenda could represent mild damage to a gear of human general intelligence that was culturally transmitted and that enabled processing of a certain kind of goal-directed behavior. And I'm not sure to what extent that is merely a metaphor, versus it being simple fact if we could look at the true software laid out. If you look at how some bubbles are talking and thinking now, "intellectually feral children" doesn't seem like entirely inappropriate language.

Shortly after that conversation with Arthur, it occurred to me that I was pretty much raised and socialized by my parents' collection of science fiction.

My parents' collection of old science fiction.

Isaac Asimov. H. Beam Piper. A. E. van Vogt. Early Heinlein, because my parents didn't want me reading the later books.

And when I did try reading science fiction from later days, a lot of it struck me as... icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.

And now that I think about it, I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. "Twelve Virtues of Rationality" is what people could've been reading instead of Heinlein's Stranger in a Strange Land, to take a different path from the branching point that found Stranger in a Strange Land appealing.

I didn't stick to merely the culture I was raised in, because that wasn't what that culture said to do. The characters I read didn't keep to the way they were raised. They were constantly being challenged with new ideas and often modified or partially rejected those ideas in the course of absorbing them. If you were immersed in an alien civilization that had some good ideas, you were supposed to consider it open-mindedly and then steal only the good parts. Which... kind of sounds axiomatic to me? You could make a case that this is an obvious guideline for how to do generic optimization. It's just what you do to process an input. And yet "when you encounter a different way of thinking, judge it open-mindedly and then steal only the good parts" is directly contradicted by some modern software that seems to be memetically hypercompetitive. It probably sounds a bit alien or weird to some people reading this, at least as something that you'd say out loud. Software contributing to generic optimization has been damaged.

Later the Internet came along and exposed me to some modern developments, some of which are indeed improvements. But only after I had a cognitive and ethical foundation that could judge which changes were progress versus damage. More importantly, a cognitive foundation that had the idea of even trying to do that. Tversky and Kahneman's work didn't exist in the 1950s, but when I was exposed to this new cognitive biases literature, I reacted like an Isaac Asimov character trying to integrate it into their existing ideas about psychohistory, instead of a William Gibson character wondering how it would look on a black-and-chrome T-shirt. If that reference still means anything to anyone.

I suspect some culturally transmitted parts of the general intelligence software got damaged by radio, television, and the Internet, with a key causal step being an increased hypercompetition of ideas compared to earlier years. I suspect this independently of any other hypotheses about my origin story. It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.

But if you consider me to be more than usually intellectually productive for an average Ashkenazic genius in the modern generation, then in this connection it's an interesting and scary further observation that I was initially socialized by books written before the Great Stagnation. Or by books written by authors from only a single generation later, who read a lot of old books themselves and didn't watch much television.

That hypothesis doesn't feel wrong to me the way that "oh you just need to not go to college" feels wrong to me.