I am worried about some of the tonal shifts around the future of AI, as they may relate to more vulnerable members of the community. While I understand Mr. Yudkowsky’s stance of “Death with Dignity,” I worry that it’s creating a dangerous feeling of desperation for some.

People make bad decisions when they feel threatened and out of options. Violence starts to look more appealing, because there is a point at which violence seems warranted, even if acting on it isn’t something most of us would ever consider.

One of the big takeaways from this community is “do something”: you can make a difference if you’re willing to put in the work. How many people have we seen take up AI safety research? This is generally a good thing.

Once people think that they and their families are threatened, “do something” becomes a war cry. After all, bombing an OpenAI datacenter might not have a high probability of succeeding, but when you feel like you’re out of options, it might seem to be your only hope.

This becomes more dangerous when you’re talking about people who feel disaffected and struggle with anxiety and depression. Those who feel alone and desperate. Those who don’t feel like they have a lot to lose, and maybe dream about becoming humanity’s savior.

I don’t know what to do about this, but I do think it’s an important conversation to have. What can we do to minimize the chance of someone from our community committing violent acts? How do we fight this aspect of Moloch?

3 comments

This post was sitting at -3 karma after 5 votes when I read it, so I strongly upvoted, because it does bring up an important point.

Namely, the importance of ensuring that the energies and efforts of the more volatile are directed towards productive ends.

There is quite a famous quote touching on this theme from a former Prussian general who undoubtedly had excellent opportunities for assessing the diversity of human psychology, as he served as the last Chief of the Army Command of the Weimar Republic:

I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.

— Kurt Freiherr von Hammerstein-Equord

Ah, you probably meant it as four types of LW readers/commenters, but I first interpreted it as four types of artificial intelligences (replace "clever" with "aligned"), and it almost made sense (the "unaligned and hardworking" kind is definitely the worst).

Do you have any ideas for how we could direct these folks to productive ends?

Maybe some of that could be community support: either giving direct support to members of our community who are psychologically vulnerable, or giving them a way to help others. I’m not sure what that would look like, though.