One thing I'd like this post to address is the speed at which this process happens.
You could also say that human extinction is inevitable because of the second law of thermodynamics, but it would be remiss not to mention the timescale involved.
I do find this post to be the clearest and most persuasive articulation of your position so far. But I still have a strong intuition that this concern is mostly not worth worrying about. You make a good case that a very large system, given a very long time, would eventually converge on AIs that are optimized solely for their own propagation.
But I expect that in practice the external selection pressures would be sufficiently weak, and the superintelligent AIs sufficiently adept at minimizing errors, that this effect might not even show up in a measurable way in our solar system before the sun burns out.
On the other hand, in a world where humans never created more powerful technology than we have today, my intuition is that within a few thousand generations human society would end up dominated by bizarre cultures that explicitly optimize for maximum reproduction above all other values, and the fact that humans today explicitly don't want that would not be sufficient to prevent it. So the superintelligent AI being very good at modelling outcomes is doing some heavy lifting in my model.
This style of thinking is roughly the right way to approach the problem, although I also agree with John Wentworth's comment that not enough people are trying to solve it directly.
You are getting fairly negative reactions because you appear to be unfamiliar with many of the ways in which people are already trying to win. For example, some people are now trying to influence government policy. But if you can think of a good alternative that people are not already doing, then you should suggest it.
Huge congratulations!!
Reading this feels a bit like reading about meditation. It seems interesting, and if I worked through it, I could eventually understand it fully.
But I'd quite like a "secular" summary of this and Janus's other thoughts, for people who don't know what Eternal Tao is and who want to spend as little time as possible on Twitter.
You don't really need to read the report to come to this conclusion. Morality / consciousness / valence / qualia are words that don't have widely agreed-upon definitions, because they are trying to point at ideas that arise from confused / magical thinking while still maintaining the respectability of analytic philosophy. So any attempt to measure them precisely will inevitably end up looking a bit silly.
Downvoted. This post feels kinda mean. Tyler Cowen has written a lot and done lots of podcasts; has anyone actually checked all of it? What's the base rate for public intellectuals ever admitting they were wrong? Is it fair to single out Tyler Cowen?
I agree, and this is why research grant proposals often feel very fake to me. I generally just write up my current best idea / plan for what research to do, but I don't expect it to actually pan out that way, and it would be silly to try to stick rigidly to the plan.
I will (once again!) be raising the bar for what gets included going forward to prevent that.
I'm confused by this, because the bar for what gets included seems very low. I mostly don't read these posts because a large fraction of the "news" reported is just random tweets from people in the rationalist / rationalist-adjacent sphere.
Tangential point, but I'm skeptical this is actually a very common error.