Now that the idea of AI existential risk has gained more acceptance, I am surprised that its original thought leaders are not mentioned and contacted more often by AI executives and researchers at the cutting edge (OpenAI, Anthropic, etc.), recognized academic figures (Bengio, Hinton, Hofstadter, etc.), journalists, or political leaders who have expressed an interest.

To be sure, they are mentioned in articles; MIRI was represented in a congressional hearing; and Mira Murati reached out to Eliezer. But still, it seems that the profile of the pioneers is much lower than I'd expect.

Fifteen years ago, we could have said that AI x-risk was treated as a crackpot idea and that MIRI in particular might be ignored because it operates outside a standard institutional framework like a university. But today, not only have the ideas spread, but many top AI-capabilities researchers have, I think, entered the field specifically because of inspiration from the MIRI/LessWrong circle.

Though some journalists might still be hung up on MIRI's lack of social status markers, I don't think that many others, including cutting-edge AI researchers, are.

So what is going on?


2 Answers

ryan_greenblatt

Jan 08, 2024


I think Bostrom isn't necessarily that interested in working on AI x-risk right now as opposed to other topics (see his recent work here, for example). This seems pretty reasonable from my perspective; I think his comparative advantage plausibly lies elsewhere at this point.

As for Yudkowsky, I think he's often considered (by people at AI labs and in AI governance) to have incorrect views and not to be a "thought leader" with good views on AI x-risk. He is also maybe just somewhat annoying to work with and interact with. (A roughly similar story probably applies to Nate Soares.)

Probably part of the reason AI labs don't pay attention to Yudkowsky is that he is clearly advocating for a total stop to frontier AI development, which might make AI labs (the people doing frontier AI development) less likely to pay attention to him.

(My personal take is that Yudkowsky doesn't really have very good views on what AI governance or AI labs should do at the margin, and thus the situation seems acceptable. That said, it seems like an obviously bad dynamic if (public) criticism of AI labs makes them not want to interact with you.)

He is also maybe just somewhat annoying to work with and interact with

I have heard that elsewhere as well. Still, I don't really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn't think that people were that delicate.

... advocating for a total stop... which might make AI labs... less likely to pay attention to him.

True, but I had thought better of those people. I would have thought that they cou...

Ilio, 4mo
You may wish to update on this. I've only exchanged a few words with one of the names mentioned, but that was enough to make clear he doesn't bother being respectful. That may work in some non-delicate research environment I don't want to know about, but most bright academics I know like to have fun at work, and would leave any such environment (unless they made it their personal duty to clean the place up).

Gordon Seidoh Worley

Jan 09, 2024


Just slight pushback here to say that in some circles they are getting a lot more attention, just not necessarily in public from people in leadership positions. Neither of them is especially respectable for various reasons, so leaders don't want to associate with them too much, though we don't know whether those leaders are paying attention to them in private.

4 comments

Part of the reason is that Yudkowsky radicalized his position to stay out of the Overton window. Fifteen years ago, his position was "we need to do research into AI safety, because AI will pose a threat to humanity some time this century". Now that view is becoming mainstream-adjacent, but he has shifted to "it's too late to do research; we need to stop all capability work or else we all die in 10-15 years", and "even if we stop all capability work as much as an international treaty can conceivably accomplish, we must augment human intelligence in adults in order to be able to solve the problem in time."

I suspect Bostrom would be receiving more attention if he hadn't written a certain email. It's likely a combination of people distancing themselves from him, people feeling he might not be the best choice of ambassador at the moment, and Bostrom himself mostly lying low for the time being.

As for Eliezer, he is selective about the podcasts he wishes to appear on, and he might be selective about interviews as well. He is also less proactive about reaching out than he could be.

Though some journalists might still be hung up on MIRI's lack of social status markers, I don't think that many others, including cutting-edge AI researchers, are.

Sorry for the quick nitpick: I don't know if social status is the right way to look at this. Journalists are more pragmatic, like Bayesians; their mental model of the world does not include highly competent orgs like MIRI, and instead focuses on their own inability to evaluate research. If an org carries a risk of being a crackpot org (or of ending up seen as one by readers or by the company's editors), individual journalists face incentives to avoid being called out for accidentally citing it.

That's not to depict them as more responsible than they actually are: a surprisingly large proportion are nihilistic and obsessed with power games.

Yes, I agree with your point about most journalists. Still, I think well enough of the professors and AI developers that I mentioned to imagine that they would have a more positive attitude.