We run the Center for Applied Rationality, AMA
Artyom Kazak · 6y · 10

The state of confusion you're describing sounds a lot like Kegan's 4.5 nihilism (pretty much everything at meaningness.com is relevant): a person's values have been demolished by a persuasive argument, but they haven't yet internalized that people are "allowed" to create their own systems and values. Two assumptions:

1. I assume that LW-adjacent people should actually be better than most at guiding people out of this stage, because a lot of people in the community have gone through the same process and there is an extensive body of work on the topic (Eliezer's sequences on human values, David Chapman's work, Scott Alexander's posts on effective altruism / axiology-vs-morality / etc.).

2. I also assume that in general we want people to go through this process – it is a necessary stage of adult development.

Given this, I'm leaning towards "guiding people towards nihilism is good as long as you don't leave them in the philosophical dark about how to get out of it". So, taking a random smart person, persuading them that they should care about the Singularity, and then leaving – that isn't great. But introducing people to AI risk in the context of LW seems much more benign to me.
