I am very conflicted about this post. 

On the one hand, it deeply resonates with my own observations. Many of my friends from the community seem to be stuck in an addictive loop of proclaiming the end of the world every time a new model comes out. I think it's even more dangerous because it becomes a social activity: "I am more worried than you about the end of the world, because I am smarter/more agentic than you, and I am better at recognizing the risk this represents for our tribe" gets implicitly tossed around in a cycle where the members keep trying to one-up each other. This would only end when the claims get so absurd as to say the world will end next month, but even that threshold of absurdity seems to keep getting eroded over time.

Like someone else said here in the comments, if I were reading about this issue in some unrelated doomsday cult in a book, I would immediately dismiss its members as a bunch of lunatics. "How many doomsday cults have existed in history? Even if yours is based on at least some solid theoretical foundations, what happened to the previous thousands of doomsday cults that also thought they were, and were wrong?"

On the other hand, I have to admit that the arguments in your post are a bit weak. They let you prove too much. To any objection, you could reply, "Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly." Even though I personally think you might be right, I cannot use this argument to help anyone else in good faith, and most likely they will just see through it.

So yes. Conflicted.

In any case, I think some introspection in the community would be ideal. Many members will say "I have nothing to do with this, I'm a purely technical person, yada yada," and it might be true for them! But is it true in general? Is thinking about AI risk causing harm to some members of the community, and inducing cult-like behaviors? If so, I don't think this is something we should turn a blind eye to, if only because we should all recognize that such a situation would in itself be detrimental to AI risk research.

To clarify, my friend and I were 100% going to press the button, but we were discouraged by the false alarm. There was no fun left at that point, and it cost me about 1/3 of my total mana. I had to close all my positions to stop the losses, and we went to sleep. By the time we woke up, it was already too late for it to be noteworthy or fun.