I've gotten the distinct feeling that many people in the LessWrong/ACX/EA communities feel that studying AI or trying to develop AI is a BAD IDEA and should not be done. Is this a real, common view that people hold or have I just run across a few outliers and made assumptions?


The view that I think you're referring to is somewhat more nuanced. "Studying AI" and "trying to develop AI" refer to fairly wide classes of activities, which may have very different risk profiles. If one buys the general class of arguments for AI risk, then "trying to develop AI" almost certainly means advancing AI capabilities, and shortens timelines (which is bad). "Studying AI" could mean anything from "doing alignment research" (probably good) to "doing capabilities research" (probably bad) to something else entirely (the expected value of which would depend on specifics).

I agree that studying AI and trying to develop AI are two different things. Still, I think you can lump them together for this conversation: exclude "working on alignment" from the cluster, and everything else (studying AI, developing AI, improving AI) seems to be treated as an infohazard that should be avoided.

I don't think I agree that advancing AI capabilities is definitely bad. It shortens timelines, which is bad. But it seems to me that figuring out AI alignment would take a lot of the same research work that advancing AI capabilities would take. Refusing to participate in the advancement of AI capabilities handicaps alignment research and leaves the cutting-edge work to groups who don't care about alignment.

Eli Tyre (score 6, 5d)
Why do you think this? It seems to me that reading books about deep learning is a perfectly fine thing to do, but that publishing papers that push forward the frontier of deep learning is plausibly quite bad. These seem like such different activities that I'm not at all inclined to lump them together for the purposes of this question.
blackstampede (score 1, 5d)
I'm lumping them together because they could (potentially) increase the likelihood of advances in AI. If some number of people read books about deep learning, it's likely that some fraction of them will go on to contribute in some small way to the field. Educating people about AI, publishing papers on AI, criticizing papers on AI in a constructive way, even using open source AI platforms: all of it could increase demand for more and better AI products. I don't entirely buy this, but someone could argue that anything related to the study of ML/AI is dangerous (see my conversation with Trevor1 down-thread for an immediate example).
T3t (score 3, 6d)
I wouldn't call it an infohazard; that term generally refers to information that's harmful simply to know, rather than information that's harmful because it might e.g. advance timelines. There are arguments to be made about how much overlap there is between capabilities research and alignment research, but I think by default most things that would be classified as capabilities research do not meaningfully advance AI alignment. For the overlap to make a difference, you'd need >50% of all capabilities work to advance alignment "by default" (without requiring any active effort to "translate" that capabilities work into something helpful for alignment), since the relative levels of effort invested are so skewed toward capabilities. See also https://www.lesswrong.com/tag/differential-intellectual-progress.
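To make the arithmetic behind that ">50%" figure concrete, here is a toy linear model; it is only a sketch, and the effort splits below are illustrative assumptions, not numbers from this thread:

```python
# Toy model: assume a fraction `cap_share` of total research effort goes to
# capabilities and the rest to dedicated alignment work, and that a fraction
# `f` of capabilities work also advances alignment "by default". Alignment
# keeps pace with capabilities only if
#     align_share + cap_share * f >= cap_share
# which gives the minimum required f computed below.

def required_default_fraction(cap_share: float) -> float:
    """Minimum fraction of capabilities work that must advance alignment
    'by default' for alignment to keep pace, under this toy linear model."""
    align_share = 1.0 - cap_share
    return max(0.0, (cap_share - align_share) / cap_share)

# Illustrative effort splits (assumptions, not measured figures).
for cap_share in (0.67, 0.80, 0.90, 0.99):
    f = required_default_fraction(cap_share)
    print(f"capabilities share {cap_share:.0%} -> "
          f"need {f:.0%} of capabilities work to also advance alignment")
```

Under this model, once capabilities effort is more than about twice alignment effort, the required "free" fraction already exceeds 50%, and it climbs toward 100% as the skew grows.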
blackstampede (score -1, 6d)
Ah, ok. That was my understanding as well, but I've seen "infohazard" used to refer to things where wider awareness could be bad. If that's the case, is there not still value in alignment researchers contributing to capabilities advancement in order to be in the room when it's made?
T3t (score 3, 6d)
I think there's probably value in being on an alignment team at a "capabilities" org, or even embedded in a capabilities team if the role itself doesn't involve work that contributes to capabilities (either via first-order or second-order effects). I think that the "in the room" argument might start to make sense when there's actually a plan for alignment that's in a sufficiently ready state to be operationalized. AFAICT nobody has such a plan yet. For that reason, I think maintaining & improving lines of communication is very important, but if I had to guess, I'd say you could get most of the anticipated benefit there without directly doing capabilities work.
blackstampede (score 1, 5d)
Isn't it a problem if the people most concerned with alignment refuse to participate in AI development? Aside from top-down corporate safety initiatives, it seems like that would mean the entire field is selecting for people who are unconcerned with alignment.
JBlack (score 1, 4d)
Yes, this does seem to be happening. It also appears to be unavoidable. Our state of knowledge is nowhere near being able to guarantee that any AGI we develop will not kill us all. We are already developing AI that is superhuman in an increasing number of domains. Those who are actively working right now to bring the rest of its capabilities up to and above human levels obviously can't be sufficiently concerned, or they would not be doing it.
blackstampede (score 1, 4d)
If participation yields some amount of insight into alignment, then it's not clear to me that non-participation is obviously the better course of action. To argue that, I think someone would have to show that working on (and learning about) AI is of little or no value to the study of alignment. It seems possible that a shorter timeline could be worth it if the development process also accelerates alignment work. At the very least, I haven't seen this tradeoff addressed in a serious way; if it has been, I'd appreciate a link.

It's basically Yudkowsky who has made this viewpoint common, due to his outlier estimates of doom. While LessWrong is thankfully no longer synonymous with Eliezer, as it largely was in the 2000s and 2010s, the community is still heavily influenced by him.

Yes, this is a common view around here.

IMO, people on this forum are making a mistake when they choose not to educate themselves on how ML actually works, or refuse to entertain concepts such as how one might build AGI. There is a big difference between learning about ML and contributing to the development of a new model or technique. Unless you work for OpenAI or DeepMind, you are probably not at risk of making anything dangerous; it's extremely arrogant to think you are going to accidentally contribute to dangerous AI.

Talking about AI on the internet is pretty hazardous in general, for reasons completely unrelated to AGI. For example, news corporations are generally pretty infamous for refusing to criticize the particular conglomerate that owns them.

AI is already being mounted on nuclear stealth missiles, and nobody is capable of influencing nuclear weapons affairs, so abstaining from AI research probably won't do much aside from removing you from the gene pool.

If you ever encounter an individual refusing to study AI, you should go out of your way to prevent them from doing that; you should also look for people doing this whom you wouldn't otherwise have noticed, and help them too; and, at minimum, you should stop anyone from encouraging other people to surrender their slots in the AI research ecosystem.

In the first paragraph, I think you're saying that discussing AGI/AI on the internet is hazardous because media organizations are incentivized to vilify you. After that, you lost me.

Are you saying that people who are refusing to study AI should be prevented from leaving the field (or encouraged to stay somehow)? Can you expand on this?

Trevor1 (score -6, 6d) [comment collapsed]