I Would Have Solved Alignment, But I Was Worried That Would Advance Timelines
The alignment community is ostensibly a group of people concerned about AI risk. Lately, it would be more accurate to describe it as a group of people concerned about AI timelines. AI timelines have some relation to AI risk. Slower timelines mean that people have more time to figure out...
AI risk is not common knowledge. There are many people who do not believe there's any risk. I really wish people who make arguments of the following form:
would acknowledge this fact. It is simply not true that everyone at frontier labs thinks this way. You can ask them! They'll tell you!
It would be nice if we were just in a prisoner's-dilemma-style coordination problem. But when someone is publicly saying "hitting DEFECT has no downsides whatsoever, I plan on doing that as much as possible," you need to take this into consideration.