As part of his Twitter Spaces with RFK Jr., Elon Musk said (YouTube-based transcript):

On my recent trip to China, with the senior leadership there, we had, I think, some very productive discussions on artificial intelligence risks and the need for some oversight or regulation.

My understanding from those conversations is that China will be initiating AI regulation, so those were very promising discussions.

And I pointed out that if there is an overwhelmingly powerful digital superintelligence developed in China, it is actually a risk to the sovereignty of the Chinese government, and I think they took that concern to heart.


This bodes well for greenlighting Human Intelligence Amplification research in China (the ultimate goal being to produce better alignment researchers who can hopefully fix the current inadequacy).

Human Intelligence Amplification has recently been gaining momentum as a winning strategy, and China already has strong comparative advantages and a years-long lead in producing fundamental research for Human Intelligence Amplification. It might also be a perfect fit for the government's existing policies on creativity promotion.

The actual effectiveness of national-level regulation is heavily contingent on breakout time: the time elapsed between the initial detection that something is wrong and the point at which it becomes irreversible.

If it's 2 years, there's sufficient time to go through all the paperwork, and coordinating Washington, Beijing, Brussels, Moscow, New Delhi, Tokyo, etc., in something approaching lockstep is quite feasible, provided political decision-makers are willing to sacrifice some other political goals in a mutual trade.

If it's 2 days, then commitments, or even concrete decisions, are nearly irrelevant, since the bureaucratic apparatus can't possibly respond fast enough.

So in the former case there might be substance behind the commitments; in the latter, there feasibly couldn't be any.
