Under what conditions should humans stop pursuing technical AI safety careers?
So, LLM-powered systems can do research now, including basic safety research. It also looks like they can develop good research taste after a bit of fine-tuning. And they haven't tried to take over the world yet, as far as I know. At some point in the past, I expected...
Why do you say this?