Sam Iacono
Comments

Consider chilling out in 2028
Sam Iacono · 2mo

I admit, it’s pretty disheartening to hear that, even if we had until 2040 (which seems less and less likely to me anyway), you’d still think there’s not much we could do but grieve in advance.

Consider chilling out in 2028
Sam Iacono · 3mo

What’s the longest timeline that you could still consider a short timeline by your own metric, and therefore a world “we might just want to grieve over”? I ask because in your original comment you mentioned 2037 as a reasonably short timeline, and personally, if we had an extra decade, I’d be a lot less worried.

A Slow Guide to Confronting Doom
Sam Iacono · 5mo

Most of that comes from me sharing the same so-called pessimistic (I would say realistic) expectations as some LWers (e.g. Yudkowsky's AGI Ruin: A List of Lethalities) that the default outcome of AI progress is unaligned AGI -> unaligned ASI -> extinction, that we're fully on track for that scenario, and that it's very hard to imagine how we'd get off that track.


Ok, but I don’t see those LWers also saying >99%, so what do you know that they don’t that allows you to justifiably hold that kind of confidence?

That's a disbelief in superintelligence.

For what it’s worth, after rereading my own comment I can see how you might think that. With that said, I do think superintelligence is overwhelmingly likely to be a thing.

A Slow Guide to Confronting Doom
Sam Iacono · 5mo

I think your defense of the >99% thing is in your first comment, where you provided a list of things that cause doom to be “overdetermined,” meaning you believe that any one of those things is sufficient to ensure doom on its own (which seems nowhere near obviously true to me?).

Ruby says you make a good case, but considering what you’re trying to prove (i.e., that near-term “technological extinction” is our nigh-inescapable destiny), I don’t think it’s an especially sufficient case, nor is it treading any new ground. Like, yeah, the chances don’t look good, and it would be a good case (as Ruby says) if you were just arguing for a saner kind of pessimism, but to say it’s overdetermined to the point of a >99% chance that not even an extra 50 years could move just seems crazy to me, whether you feel like defending it or not.

As far as the policy thing goes, I don’t really know what the weakest thing I could see us doing that could avert an apocalypse would be. Something I’d like to see, though, would be some kind of coordination on setting standards for testing or minimum amounts of safety research, with compliance reviewed by a board and both legal and financial penalties administered in case of violations.

Probably underwhelming to you, but then, as far as concrete policy goes, it’s not something I think about a ton, and I think we’ve already established my views are less extreme than yours. And even absent any of my ideas being remotely feasible, that still wouldn’t get me up to >99%. Something that would get me there would be actually seeing the cloud of poison-spewing death drones (or whatever) flying towards me. Heck, even if I had a crystal ball right now and saw exactly that, I still wouldn’t see previously having held a >99% credence as justifiable.


Am I just misunderstanding you here?

A Slow Guide to Confronting Doom
Sam Iacono · 5mo

Do you really think p(everyone dies) is >99%?

If I have some money, whom should I donate it to in order to reduce expected P(doom) the most?
Sam Iacono · 1y
[This comment is no longer endorsed by its author]
If I have some money, whom should I donate it to in order to reduce expected P(doom) the most?
Sam Iacono · 1y
[This comment is no longer endorsed by its author]