Warning: extremely bleak views.
Why is no one talking about s-risks in the event of non-aligned AGI?
Why does Eliezer only say "either we do this, or we die"?
I'm not that scared about paperclippers or grey goo. Death is a certainty of life. I'm way more scared about the electrode-produced smiley faces for eternity and the rest. That's way, way worse than dying.
Are s-risks just info hazards? Are they just too much to handle for most people, so we shouldn't talk about them? Is that why Eliezer, Bostrom and almost everyone else never mention them?
So I pose these questions:
-
Is it possible to "know what we know" and have any sanity? How?
-
Is it possible to "know what we know" and have any hope? How?
-
I'd like to share Paul Christiano's view that the probability of s-risk is around 1/100 and that AGI is 30 years off, but I find that extremely naive. What about you? What are your timelines and your s-risk probability? Could AGI arrive tomorrow from systems in their present state?
-
When Eliezer talks about "a miracle that would allow us to die with more dignity in the mainline", maybe he's hinting at these questions...? Maybe a "more dignified death" is the only way out, as bleak as it sounds? Because I'm with him on not finding any hope in alignment.
-
What can I do as a 30-year-old from Portugal with no STEM background? Start learning math and work on alignment from home?
In short, I need some help here... Thanks in advance.
This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn't wear off. Being turned into a wirehead arguably kills you, but it's a much better experience than death for the wirehead!
I think the kind of Bostromian scenario you're imagining is a slightly different line of AI concern than the type that Paul and the soft-takeoff crowd are worried about. The whole genie-in-the-lamp thing, to me at least, doesn't seem likely to create suffering. If this hypothetical AI values humans being alive and nothing more than that, it might split your brain in half so that it counts as two happy humans, for example. I think most scenarios involving a boundless optimiser superintelligence would lead to the creation of new minds that perfectly satisfy its utility function.