This is a special post for quick takes by Heramb.


Everyone writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact.

This narrow focus on gen AI may well be net-negative for us: it means unknowingly or unintentionally ignoring ripple effects of the gen AI boom in other fields (for example, robotics companies getting more funding, which leads to more capabilities, which in turn leads to new types of risk).

And guess who benefits if we do end up getting good evals and standards in place for gen AI? Companies and investors look like the clear winners: we would have to go back to the drawing board and advocate for the same measures for robotics, or for some other AI use case or type, all while development and capability cycles keep maturing.

We seem to be in whack-a-mole territory now because the Overton window has shifted for investors.

(Copying my quick take from the EA Forum)

I find the Biden chip export controls a step in the right direction, and they also updated my world model toward compute governance being an impactful lever. However, I am concerned that our goals aren't aligned with theirs: US policymakers' incentive right now is to curb China's tech growth (and fun trade-war reasons), not to pause AI.

Optimizing for these different incentives will probably create a split between US policymakers and AI safety folks as time goes on.

It also makes China more likely to treat this as a tech race, setting up competitive race dynamics between the US and China that I don't see discussed enough.