Greg C

Comments

How We Might All Die in A Year
Greg C · 3mo · 10

> One problem is that this assumption of the ASI society being mostly structured as well-defined persistent individuals with long-term interests is questionable

Very questionable. Why would it be separate individuals in a society, and not be - or just very rapidly collapse into - a singleton? In fact, the dominant narrative here on LW has always featured a singleton ASI as the main (existential) threat. And my story here reflects that.

How We Might All Die in A Year
Greg C · 3mo · 10

> being able to discover new laws of nature and to exploit the consequences of that.

Ok, but I think that still basically leads to the same end result: all humans (and biological life) dead.

It seems odd to think that it's more likely such a discovery would lead to the AI disappearing into its own universe (like in Egan's Crystal Nights), than just obliterating our Solar System with its newfound powers. Nothing analogous has happened in the history of human science and tech development (we have only become more destructive of other species and their habitats).

How We Might All Die in A Year
Greg C · 4mo · 10

> then it would be better to use an example not directly aimed against “our atoms”

All the atoms are getting repurposed at once, no special focus on those in our bodies (but there is in the story, to get the reader to empathise). Maybe I could've included more description of non-alive things getting destroyed.

> mucking with quantum gravity too recklessly, or smth in that spirit

I'm trying to focus on plausible science/tech here.

> they need to do experiments in forming hybrid consciousness with humans to crack the mystery of human subjectivity, to experience that first-hand for themselves, and to decide whether that is of any value to them based on the first-hand empirical material (losing that option without looking is a huge loss)

Interesting. But even if they do find something valuable in doing that, there's not much to keep the vast majority of humans around. And as you say, they could just end up as "scans", with very few being run as oracles.

How We Might All Die in A Year
Greg C · 5mo · 10

Where does my writing suggest that it's a "power play" and "us vs them"? (That was not the intention at all! I've always seen indifference, and "collateral damage" as the biggest part of ASI x-risk.)

> as we know, compute is not everything, algorithmic improvement is even more important

It should go without saying that it would also be continually improving its algorithms. But maybe I should've made that explicit.

> the action the ASI is taking in the OP is very suboptimal and deprives it of all kinds of options

What are some examples of these options?

How We Might All Die in A Year
Greg C · 5mo · 20

They don't have a choice in the matter - it's forced by the government (nationalisation). This kind of thing has happened before in wartime (without the companies or people involved staging a rebellion).

How We Might All Die in A Year
Greg C · 5mo · 10

> On one hand, it's not clear if a system needs to be all that super-smart to design a devastating attack of this kind...

Good point, but -- and as per your second point too -- this isn't an "attack", it's "go[ing] straight for execution on its primary instrumental goal of maximally increasing its compute scaling" (i.e. humanity and biological life dying is just collateral damage).

> probably would not want to irreversibly destroy important information without good reasons

Maybe it doesn't consider the lives of individual organisms as "important information"? But if it did, it might do something like scan as it destroys, to retain the information content.

Why Were We Wrong About China and AI? A Case Study in Failed Rationality
Greg C · 5mo · 20

Are you saying they are suicidal?

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg C · 2y · 50

LessWrong:

A post about all the reasons AGI will kill us: No. 1 all time highest karma (827 on 467 votes; +1.77 karma/vote)
A post about containment strategy for AGI: 7th all time highest karma (609 on 308 votes; +1.98 karma/vote)
A post about us all basically being 100% dead from AGI: 52nd all time highest karma (334 on 343 votes; +0.97 karma/vote, a bit more controversial)

Also LessWrong:

A post about actually doing something about containing the threat from AGI and not dying [this one]: downvoted to oblivion (-5 karma within an hour; currently 13 karma on 24 votes; +0.54 karma/vote)

My read: y'all are so allergic to anything considered remotely political (even though this should really not be a matter of polarisation - it's about survival above all else!) that you'd rather just lie down and be paperclipped than actually do anything to prevent it happening. I'm done.
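(For anyone checking the figures: the karma-per-vote ratios above are just karma divided by vote count. A quick sketch to reproduce them, with post titles abbreviated and the numbers taken as stated in this comment:)

```python
# Reproduce the karma-per-vote ratios quoted above.
# (karma, votes) pairs are the figures as stated in the comment.
posts = {
    "reasons AGI will kill us": (827, 467),
    "containment strategy for AGI": (609, 308),
    "100% dead from AGI": (334, 343),
    "this post": (13, 24),
}

for title, (karma, votes) in posts.items():
    print(f"{title}: {karma / votes:+.2f} karma/vote")
```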

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg C · 2y · 30

From the Abstract:

> Rather than targeting state-of-the-art performance, our objective is to highlight GPT-4’s potential

They weren't aiming for SOTA! What happens when they do?

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg C · 2y · 80

The way I see the above post (and its accompaniment) is as knocking down all the soldiers that I've encountered talking to lots of people about this over the last few weeks. I would appreciate it if you could stand them back up (because I'm really trying not to be so doomy, and I'm not getting any satisfactory rebuttals).
