Yeah I can see that analogy, I just don't think most non-rationalist types have realized this
Isn't it very likely that AI safety research is one of the very first things to be cut if AI companies start to have less access to VC money? I don't think these companies have much incentive to invest in AI safety training, particularly not in a way that the people allocating funding would understand. Isn't this a huge problem? Maybe this has been addressed and I missed it.
I think this is true to an extent. But not fully.
I think it's quite unlikely that funding certain kinds of essential AI safety research leads to more profitable AI.
Namely mechinterp and preventing stuff like scheming. Not all AI safety research is aimed at getting the model to follow a prompt, yet that research may be very important for things like existential risk.
The opportunity cost is funding research into how to make your model more engaging, performant, or cheaper. I would be surprised if those things aren't way more effective per dollar.