ceselder

Comments
Comments

ceselder's Shortform
ceselder · 1d

I think this is true to an extent, but not fully.

I think it's quite unlikely that funding certain kinds of essential AI safety research leads you to a more profitable AI.

Namely mechinterp, or preventing things like scheming. Not all AI safety research is aimed at getting the model to follow a prompt, yet that research may be very important for things like existential risk.

The opportunity cost is funding research into how you can make your model more engaging, performant, or cheaper. I would be surprised if those things aren't far more effective per dollar.

ceselder's Shortform
ceselder · 2d

Yeah, I can see that analogy; I just don't think most non-rationalist types have realized this.

ceselder's Shortform
ceselder · 2d

Isn't it very likely that AI safety research is one of the first things to be cut if AI companies start to have less access to VC money? I don't think a company has a strong incentive to fund AI safety work, particularly not in a way that the people allocating funding would understand. Isn't this a huge problem? Maybe this has been addressed and I missed it.
