I've gotten the distinct impression that many people in the LessWrong/ACX/EA communities believe that studying AI or trying to develop AI is a BAD IDEA and should not be done. Is this a real, common view that people hold, or have I just run across a few outliers and made assumptions?
I agree that studying AI and trying to develop AI are two different things, but I think you can lump them together for this conversation: just exclude "working on alignment" from the cluster, and everything else (studying AI, developing AI, improving AI) seems to be treated as an infohazard that should be avoided.
I don't think I agree that advancing AI capabilities is definitely bad. It shortens timelines, which is bad. But it seems to me that figuring out AI alignment would require much of the same research work as advancing AI capabilities. Refusing to participate in advancing capabilities handicaps alignment research and leaves the cutting-edge work to groups who don't care about alignment.