Why are we not faithful servants of our genes? Instead, the defenses our genes built against parasitic memes are breaking down, resulting in entire societies falling below replacement fertility rate even as we enjoy unprecedented riches in technology and material resources.
Have you looked at Calhoun's experiments with rats and mice associated with NIMH? The same thing happens with rodents if you keep them in luxury at high population densities. This leads me to suspect this has nothing to do with memetics or birth control.
I have a lot of doubts about Calhoun, and there are also serious doubts about Rat Park (which I assume is your second reference).
Fascinating reading! Thanks, I had only read the popular accounts, and have now updated.
Nevertheless, even in the few replication attempts, while several of Calhoun's more peculiar reported observations were not seen again, fertility rates did repeatedly drop to around the replacement rate and stabilize the population (mostly due to infant mortality/adult cannibalism of infants). So I think my basic point stands: even in the absence of memetics, high population densities can cause mammalian fertility rates to drop to around or even a little below replacement levels. Particularly in cities, we are living at population densities far higher than those we were adapted for.
I actually think modern urbanites have a lower effective social population density than the ancestral environment. Most hunter-gatherers have very little privacy, with entire extended families huddled in the same single-room dwellings. This level of density is very rarely present in developed cities (other than public transit). Most people in, say, Tokyo have their own room and are thus far less crowded than the average hunter-gatherer, if you polled them throughout their day. Lowering the number of people encountered per day doesn't seem to greatly increase fertility; otherwise COVID would have resulted in a big fertility bump. Maybe the problem is the opposite: modern technology allows us to have fewer undesired social encounters, which is a high-priority desire for most people, but those same undesired encounters were the main force behind the formation of romantic relationships.
It is possible that the easiest way to increase the fertility rate is a legal mandate for dinner with co-workers.
Isn't living in cities itself driven at least in part by memetics (e.g., glamour/appeal of city living shown on TV/movies)? Certainly memes can cause people to not live in cities, e.g., the Amish or the meme of moving out to the suburbs to raise kids.
I think the main reason people move to cities isn't because cities are charming. It's because cities are objectively better places economically; you could say it's a kind of Keynesian beauty contest. If you're a business, you want to be located somewhere with many job seekers and potential clients nearby, so people who want jobs and services will also want to live nearby, and so on.
If teleportation were invented tomorrow and people could blink around at low cost, I expect that people would instantly spread out to live on their own patches of land, and cities would become mostly just places to visit and maybe work. The city charm wouldn't keep anyone living in a cramped apartment with neighbors above and below, if the economic reason for that disappeared. When I first imagined this scenario, I thought to myself that maybe we're lucky teleportation hasn't been invented yet :-)
Living in cities is primarily driven by economics: lots of people close together makes shipping/logistics/transportation/infrastructure cheaper. City centers are cheap from a logistical point of view but usually suffer from high real estate costs (except where these are depressed by crime or neglect), suburbs are often a good compromise between logistical costs and real estate costs, and rural living tends to involve the nearest store being 10 miles away, pricey, yet still having a poor assortment, plus difficulty with access to utilities and public transport. (FWIW, I used to live on a dirt road in a forest in the mountains above Silicon Valley, so I'm rather familiar with the upsides and downsides of both.)
This was posted to SL4 on the last day of 2003. I had largely forgotten about it until I saw the LW Wiki reference it under Mesa Optimization[1]. Besides the reward hacking angle, which is now well-trodden, it gave an argument based on the relationship between philosophy, memetics, and alignment, which has been much less discussed (including in current discussions about human fertility decline) and is perhaps still worth reading/thinking about. Overall, the post seems to have aged well, aside from the very last paragraph.
For historical context, Eliezer had coined "Friendly AI" in Creating Friendly AI 1.0 in June 2001. Although most of it was very hard to understand and was subsequently disavowed by Eliezer himself, it had a section on “philosophical crisis”[2] which probably influenced a lot of my subsequent thinking, including this post. What's now called The Sequences would start being posted to OB/LW in 2006.
Subsequent to 2003, I think this post mostly went unnoticed/forgotten (including by myself), and MIRI probably reinvented the idea of mesa-optimization/inner-misalignment circa 2016. I remember hearing people talk about inner vs outer alignment while attending a (mostly unrelated decision theory) workshop at MIRI and having an "oh this is new" reaction.
The SL4 Post
Subject: "friendly" humans?
Date: Wed Dec 31 2003
Why are we not faithful servants of our genes? Instead, the defenses our genes built against parasitic memes are breaking down, resulting in entire societies falling below replacement fertility rate even as we enjoy unprecedented riches in technology and material resources. Genes built our brains in the hope that we will remain friendly to them, and they appear to have failed. Why? And is there anything we can learn from their catastrophe as we try to build our own friendly higher intelligence?
I think the reason we're becoming increasingly unfriendly to genes is that parasitic memes are evolving too fast for genes and their symbiotic defensive memes to keep up, and this is the result of a series of advances in communications technology starting with the printing press. Genes evolved two ways of ensuring our friendliness - hardwired desires and hosting a system of mutually reinforcing philosophies learned during childhood that defines and justifies friendliness toward genes. Unfortunately for the genes, those hardwired desires have proven easy to bypass once a certain level of technology is reached (e.g., development of birth control), and the best philosophical defense for gene-friendliness that evolution could come up with after hundreds of thousands of years is the existence of a God that wants humans to be fertile.
This doesn't bode well for our own efforts. An SI will certainly find it trivial to bypass whatever hardwired desires or constraints we place on it, and any justifications for human-friendliness that we come up with may strike it as just as silly as "fill the earth" theism is to us.
But perhaps there is also cause for optimism, because unlike humans, the SI does not have to depend on memes for its operation, so we can perhaps prevent the problem of human-unfriendly memes by not having any memes at all. For example we can make the SI a singleton. If more than one SI is developed, we can try to prevent them from communicating with each other, especially about philosophy. If the SI is made up of multiple internal agents, we can try to make sure their internal communication channels are not suitable for transmitting memes.
Happy New Year!
[1] and/or possibly hearing it mentioned in a podcast as the first description of the analogy between evolution and AI alignment, but I'm not sure and can't recall or find which podcast.
[2] Intro to that section read: A “philosophical crisis” is hard to define. I usually think of a “philosophical crisis” as the AI stumbling across some fact that breaks ver loose of the programmers—i.e., the programmers have some deeply buried unconscious prejudice that makes them untrustworthy, or the AI stumbles across a deliberate lie, or the AI discovers objective morality, et cetera. If the hypothesized gap is wide enough, it may be enough to invalidate almost all the content simultaneously.