Strange that no one is mentioning that Ada Palmer also wrote one of the most innovative and well-crafted sci-fi series of the 2000s, the Terra Ignota tetralogy. Very clearly an unusual woman with an extremely big brain!
I think even if they hit some insane targets in the near term, the act of claiming explosive growth in a legible (and legally serious) growth estimate might be shocking to a lot of third parties, and have some wider memetic ripple effects. While it feels like the public has become "situationally aware" at a rapid pace in the last year, most people have not grappled deeply with the implications of possible transformative AI within the next few years.
I think this is true for sufficiently well-narrated nonfiction as well — I think a great deal of my psychology was shaped by reading about the classical world as a youth. Biography is probably the paradigmatic example of this genre — Ron Chernow's book Titan, about the life of John D. Rockefeller, made the America of the late nineteenth century far more "real" to me than a more broadly informative textbook could have.
Historical fiction also capitalizes on this same effect, as it's able both to bootstrap off the narrative richness and detail of real history and to offer the reader a general education in the lived experience of that time.
I think essays like this are not very helpful for the AI safety agenda. In fact, they seem quite likely to do more harm than good?
I see dozens of arguments for and against specific AI risk models every day. A large fraction of these arguments (especially in the Twitter and podcast circles where people think deeply about AI and often work in AI risk or frontier labs) are against the Yudkowsky positions in IABIED. These are often arguments made by very smart people, well versed in the literature (including Yudkowsky's writing), who have significant meta-cogn...
I agree with your sentiment — I suppose I was implicitly presenting the bull case (or paradigmatic case) of cultural drift, wherein the future values are supported by future people but despised by their ancestors.
I think your example is closer to the familiar "Moloch" dynamic, where social and material technology leads to collective outcomes that are obviously undesirable to all involved. Moloch is certainly a possible issue in any future world!
Although you don't explicitly mention it, I feel like this whole post is about value drift. The doomers are generally right on the facts (and often on the causal pathways), and we nonetheless consider the post-doom world better, but the first- through nth-order effects of these new technologies reciprocally change our preferences and worldviews to favor the (doomed?) world those same technologies created.
The question of value drift is especially strange given that we have a "meta-intuition" that moral/social values evolving and changing is good in hum...
I lean more towards the Camp A side, but I do understand the Camp B side and think there's a lot of benefit to it. Hopefully I can, as a more Camp A person, help explain to Camp B dwellers why we don't reflexively sign onto these kinds of statements.
I think that Camp B has a bad habit of failing to model the Camp A rationale, based on the conversations I see in Twitter discussions between pause AI advocates and more "Camp A" people. Yudkowsky is a paradigmatic example of the Camp B mindset, and I think it's worth noting that a lot of people in the public r...
As someone who's done a fair amount of meditation and read a couple dozen books on the topic, I'd just like to flag the fact that this is pretty well examined in the community, and while meditation as a whole is quite pre-paradigmatic, there seems to be an emerging consensus on some of the ways that meditation harm can manifest.
First off, it's obviously true that if you have a pre-existing tendency towards schizophrenia or any general mental instability, then in a very similar way to psychedelics, meditation can cause a psychotic break or similar episode o...
This approach ignores the fact that if we use advanced LLMs to make new-paradigm advances that yield extremely effective RL sociopaths, we'll at that point have the help of the relatively harmless but still very powerful LLMs to do safety work on the RL agents — this is a major help with mitigating autonomy risks! Of course, there's always the risk that new RL architecture discoveries create economic incentives to scale the scary RL agents without sufficient safety work, but the prospect of using HHH AI to align scary AI is weirdly under-explored in discussions of exactly that advanced-LLM-plus-advanced-RL-learner world.