
echo_echo (karma: 14020)

Comments

Litigate-for-Impact: Preparing Legal Action against an AGI Frontier Lab Leader
echo_echo · 2d · 10

Is there any update on this?

Anthropic's leading researchers acted as moderate accelerationists
echo_echo · 4d · 156

This is an excellent write-up. I'm pretty new to the AI safety space, and as I've been learning more (especially with regard to the key players involved), I have found myself wondering why more people do not view Dario through a more critical lens. As you detailed, he seems to have been one of the key engines behind scaling, and I wonder whether AI progress would have advanced as quickly as it did if he had not championed it.

I'm curious whether you have any plans to write an essay about OpenPhil and the funding landscape. I know you mentioned Holden's investments in Anthropic, but another thing I've noticed as a newcomer is just how many safety organizations OpenPhil has helped to fund. Anecdotally, I have heard a few people in the community complain that OpenPhil has made it more difficult to publicly advocate for AI safety policies, because they are afraid of how doing so might negatively affect Anthropic.
