
Miranda Zhang

Ops Generalist @ Anthropic.

Much more active on the EA Forum: https://forum.effectivealtruism.org/users/miranda-zhang

Comments
DeepMind alignment team opinions on AGI ruin arguments
Miranda Zhang · 3y

This was interesting and I would like to see more AI research organizations conducting + publishing similar surveys.

Pitching an Alignment Softball
Miranda Zhang · 3y

I agree that AI safety can be successfully pitched to a wider range of audiences even without mentioning superintelligence, though I'm not sure this will get people to "holy shit, x-risk." Still, I think appealing to people's more near-term concerns could be compelling enough to policymakers and other important stakeholders to speed up their willingness to implement useful policy.

Of course, this assumes that useful policy for near-term concerns will also be useful policy for AI x-risk. It seems plausible to me that the most effective policies for the latter look quite different from policies that address both, but this still seems directionally good!

Posts

Introducing the Anthropic Fellows Program (26 points, 9mo ago, 0 comments)