David_Kristoffersson

Co-founder of and researcher at Convergence. Convergence does foundational existential risk strategy research. See here for our growing list of publications.

Past: R&D Project Manager, Software Engineer.

Comments

I think I agree with your technological argument, but I'd take your 6 months and 2.5 years and multiply them by 2-4x.

Part of the difference is likely that we're conceiving of the scenarios a bit differently. I might be including some additional practical considerations.

Thank you for this post, Max.

My background here:

  • I've watched the Ukraine war very closely since it started.
  • I'm not at all familiar with nuclear risk estimations.

Summary: I wouldn't give 70% for WW3/KABOOM from conventional NATO retaliation. I'd put it at 2-5% at the moment (I've spent little time thinking about the precise number).

Motivation: I think conventional responses from NATO will cause Russia to generally back down. I think Putin wants to use the threat of nukes, not actually use them.

Even when cornered yet further, I expect Putin to assess that firing off nukes will make his situation even worse. Nuclear conflict would be an immense direct threat against himself and Russia, and the threat of nuclear conflict also increases the risk of people on the inside targeting him (because they don't want to die). Authoritarians respect force. A NATO response would be a show of force.

Putin has told the Russian public in the past that Russia couldn't win against NATO directly. Losing against NATO actually gives him a more palatable excuse: NATO is too powerful. Losing against Ukraine though, their little sibling, would be very humiliating. Losing in a contest of strength against someone supposedly weaker is almost unacceptable to authoritarians.

I think the most likely outcome is that Putin is deterred from firing a tactical nuke. And if he does fire one, NATO will respond conventionally (such as by taking out the Black Sea Fleet), and this will cause Russia to back down in some manner.

The amount of effort going into AI as a whole ($10s of billions per year) is currently ~2 orders of magnitude larger than the amount of effort going into the kind of empirical alignment I’m proposing here, and at least in the short-term (given excitement about scaling), I expect it to grow faster than investment into the alignment work.

There's a reasonable argument (shoutout to Justin Shovelain) that work such as this, done by AI alignment people, will be closer to AGI than standard commercial or academic research, and will therefore accelerate AGI more than average AI research would. Thus, the $10s of billions per year going into general AI is not quite the right comparison, because little of that money goes to work "close to AGI".
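To make that concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it (the total AI spend, the alignment spend, and the "close to AGI" fraction) is an illustrative assumption of mine, not a figure from the post: the point is only that the headline ~100x ratio shrinks a lot once you restrict to AGI-relevant work.

```python
# Back-of-the-envelope sketch. All figures below are illustrative assumptions,
# not numbers taken from the post or from real budget data.

general_ai_spend = 30e9          # assume "$10s of billions per year" ~ $30B
empirical_alignment_spend = 3e8  # ~2 orders of magnitude less, per the quote

naive_ratio = general_ai_spend / empirical_alignment_spend
print(f"Naive ratio: ~{naive_ratio:.0f}x")        # ~100x

# If only a small fraction of general AI spending is on work that is
# actually "close to AGI", the relevant ratio is much smaller.
close_to_agi_fraction = 0.05     # assumed purely for illustration
adjusted_ratio = general_ai_spend * close_to_agi_fraction / empirical_alignment_spend
print(f"Adjusted ratio: ~{adjusted_ratio:.0f}x")  # ~5x
```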

That said, on balance, I'm personally in favor of the work this post outlines.

Unfortunately, there is no good 'where to start' guide for anti-aging. This is insane, given this is the field looking for solutions to the biggest killer on Earth today.

Low-hanging fruit intervention: create such a guide and publish it on a public website.

That being said, I would bet that one would be able to find other formalisms that are equivalent after kicking down the door...

At least, we've now hit one limit in the shape of universal computation: No new formalism will be able to do something that couldn't be done with computers. (Unless we're gravely missing something about what's going on in the universe...)

When it comes to the downside risk, it's often the case that more unknown unknowns produce harm than produce benefit. People are usually biased to overestimate the positive effects and underestimate the negative effects of the known unknowns.

This seems plausible to me. Would you like to expand on why you think this is the case?

The asymmetry between creation and destruction? (I.e., it's harder to build than it is to destroy.)

Very good point! The effect of not taking an action depends on what the counterfactual is: what would happen otherwise/anyway. Maybe the article should note this.

Excellent comment, thank you! Don't let the perfect be the enemy of the good if you're running from an exponential growth curve.

Looks promising to me. Technological development isn't by default good.

Though I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze the societal implications, people will think of wildly different things. ML researchers aren't by default going to have the training to analyze societal consequences. (Well, who does? We should develop better tools here.)
