Jeffrey Ladish


Comments

And then on top of that there are significant other risks from the transition to AI. Maybe more like 40% total existential risk from AI this century? With extinction risk more like half of that, and more uncertain since I've thought less about it.

40% total existential risk, and extinction risk half of that? Does that mean the other half is some kind of existential catastrophe / bad values lock-in but where humans do survive?

This is a temporary shortform post so I can link people to Scott Alexander's book review. I'm putting it here because Substack is down, and I'll take it down / replace it with a Substack link once it's back up. (Also, it hasn't been archived by the Wayback Machine yet; I checked.)

The spice must flow.

Edit: It's back up, link: https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future

Thanks for the reply!

I hope to write a longer response later, but wanted to address what might be my main criticism: the lack of clarity about how big of a deal it is to break your pledge, or how "ironclad" the pledge is intended to be.

I think the biggest easy improvement would be amending the FAQ (or preferably something called "pledge details" or similar) to present the default norms for pledge withdrawal. People could still adopt different norms if they preferred, but this would make it clearer what people were agreeing to, and how strong the commitment was intended to be, without adding more text to the main pledge.

I'm a little surprised that I don't see more discussion of ways that higher-bandwidth brain-computer interfaces might help, e.g. Neuralink or equivalent. It sounds difficult, but do people feel really confident it won't work? Seems like if it could work, it might be achievable on much faster timescales than superbabies.

Oh cool. I was thinking about writing some things about private non-ironclad commitments but this covers most of what I wanted to write. :) 

I cannot recommend this approach on the grounds of either integrity or safety 😅

Yeah, I think it's somewhat boring without more. Solving the current problems seems very desirable to me, very good, and also really not complete / compelling / interesting. That's what I'm intending to try to get at in part II. I think it's the harder part.

This could mitigate financial risk to the company, but I don't think anyone will sell existential risk insurance, or that it would be effective if they did.
