Epistemic status: Noticing confusion
There is little discussion on LessWrong about AI governance and outreach. Meanwhile, these efforts could buy us time to figure out technical alignment. And even if we do figure out technical alignment, we still have to solve crucial governance challenges so that totalitarian lock-in or gradual disempowerment doesn't become the default outcome of deploying aligned AGI.
Here are three reasons why we think we might want to shift far more resources towards governance and outreach:
1. MIRI's shift in strategy
The Machine Intelligence Research Institute (MIRI), traditionally focused on technical alignment research, has pivoted to broader outreach. They write in their 2024 end-of-year update:
Although we continue to support some […]
This speaks more to the personal side for someone who does the work than to how society should act towards them:
One interpretation: either no one knows you saved the world, or everyone thinks they did it themselves.
When people trying to save the world are working for the recognition or the results, they quickly start Goodharting.
PS: Love the post title.