Could it be due to aliefs about the attainability of success becoming lower, and that leading to lower motivation? (Cf. the "motivation equation".) (It's less likely we'll be able to attain a flourishing post-human future if the world is deeply insane, mostly run by sociopaths, or similarly horrible.)
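(By "motivation equation" I mean something like the temporal-motivation-theory formula, roughly:

$$\text{Motivation} \approx \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}$$

so if aliefs about attainability drop, the Expectancy term drops, and motivation falls even when Value stays the same.)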
Or maybe: As one learns about horrors, the only thing that feels worth working on is mitigating the horrors; but that endeavour is difficult, has sparse (or zero) rewards, low probability of success, etc., and consequently does not feel very exciting?
(Also: IIUC, you keep updating towards "world is more horrible than I thought"? If so: why not update all the way, to the point that you can no longer predict which way you'll update in future?)
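(Spelling out the principle I'm gesturing at, namely conservation of expected evidence: a Bayesian's current credence should already equal the expectation of their future credence,

$$\mathbb{E}\big[P_{t+1}(H) \mid \text{evidence available at } t\big] = P_t(H),$$

so if you can predict that you'll keep updating toward "more horrible", that predictable component should arguably be folded into your current estimate now.)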
Suppose you succeed at doing impactful science in AI. What is your plan for ensuring that those impacts are net-positive? (And how would you define "positive" in this context?)
(CTRL+F'ing this post yielded zero safety-relevant matches for "safe", "beneficial", or "align".)
It's unclear whether there is a tipping point where [...]
Yes. Also unclear whether the 90% could coordinate to take any effective action, or whether any effective action would be available to them. (Might be hard to coordinate when AIs control/influence the information landscape; might be hard to rise up against e.g. robotic law enforcement or bioweapons.)
Don't use passive voice for this. [...]
Good point! I guess one way to frame that would be:
by what kind of process do the humans in law enforcement, military, and intelligence agencies get replaced by AIs? Who/what is in effective control of those systems (or their successors) at various points in time?
And yeah, that seems very difficult to predict or reliably control. OTOH, if someone were to gain control of the AIs (possibly even copies of a single model?) that are running all the systems, that might make centralized control easier? </wild, probably-useless speculation>
A potentially somewhat important thing which I haven't seen discussed:
(This looks like a "decisionmaker is not the beneficiary"-type of situation.)
Why does that matter?
It has implications for modeling decisionmakers, interpreting their words, and for how to interact with them.[1]
If we are in a gradual-takeoff world[2], then we should perhaps not be too surprised to see the wealthy and powerful push for AI-related policies that make them more wealthy and powerful, while a majority of humans become disempowered and starve to death (or live in destitution, or get put down with viruses or robotic armies, or whatever). (OTOH, I'm not sure if that possibility can be planned/prepared for, so maybe that's irrelevant, actually?)
For example: we maybe should not expect decisionmakers to take risks from AI seriously until they realize those risks include a high probability of "I, personally, will die". As another example: when people like JD Vance output rhetoric like "[AI] is not going to replace human beings. It will never replace human beings", we should perhaps not just infer that "Vance does not believe in AGI", but instead also assign some probability to hypotheses like "Vance thinks AGI will in fact replace lots of human beings, just not him personally; and he maybe does not believe in ASI, or imagines he will be able to control ASI". ↩︎
Here I'll define "gradual takeoff" very loosely as "a world in which there is a >1 year window during which it is possible to replace >90% of human labor, before the first ASI comes into existence". ↩︎
Thank you for (being one of the horrifyingly few people) doing sane reporting on these crucially important topics.
Typo: "And humanity needs all the help we it can get."
Out of (1)-(3), I think (3)[1] is clearly the most probable:
(Of course one could also come up with other possibilities besides (1)-(3).)[2]
or some combination of (1) and (3) ↩︎
E.g. maybe he plans to keep ASI to himself, but use it to implement all-of-humanity's CEV, or something. OTOH, I think the kind of person who would do that, would not exhibit so much lying, manipulation, exacerbating-arms-races, and gambling-with-everyone's-lives. Or maybe he doesn't believe ASI will be particularly impactful; but that seems even less plausible. ↩︎
Note that our light cone with zero value might also eclipse other light cones that might've had value if we didn't let our AGI go rogue to avoid s-risk.
That's a good thing to consider! However, taking Earth's situation as a prior for other "cradles of intelligence", I think that consideration brings us back to the question of "should we expect Earth's lightcone to be better or worse than zero-value (conditional on corrigibility)?"
To me, those odds each seem optimistic by a factor of about 1000, but ~reasonable relative to each other.
(I don't see any low-cost way to find out why we disagree so strongly, though. Moving on, I guess.)
But this isn't any worse to me than being killed [...]
Makes sense (given your low odds for bad outcomes).
Do you also care about minds that are not you, though? Do you expect most future minds/persons that are brought into existence to have nice lives, if (say) Donald "Grab Them By The Pussy" Trump became god-emperor (and was the one deciding what persons/minds get to exist)?
In the post and comments, you've said that you're reflectively stable, in the sense of endorsing your current values. In combination with the sadistic kinks/values described above, that raises some questions:
What exactly stops you from inflicting suffering on people, other than the prospect of social or legal repercussions? Do you have some values that countervail against the sadism? If yes, what are they, and how do you reconcile them with the sadism?[1]
Asking partly because: I occasionally run into sadistic parts in myself, but haven't found a way to reconcile them with my more empathetic parts, so I usually just suppress/avoid the sadistic parts. And I'd like to find a way to reconcile/integrate them instead. ↩︎