To me, those odds each seem optimistic by a factor of about 1000, but ~reasonable relative to each other.
(I don't see any low-cost way to find out why we disagree so strongly, though. Moving on, I guess.)
But this isn't any worse to me than being killed [...]
Makes sense (given your low odds for bad outcomes).
Do you also care about minds that are not you, though? Do you expect most future minds/persons that are brought into existence to have nice lives, if (say) Donald "Grab Them By The Pussy" Trump became god-emperor (and was the one deciding what persons/minds get to exist)?
IIUC, your model would (at least tentatively) predict that
If so, how do you reconcile that with e.g. non-sadistic serial killers, rapists, or child abusers? Or non-sadistic narcissists in whose ideal world everyone else would be their worshipful subject/slave?
That last point also raises the question: Would you prefer the existence of lots of (either happily or grudgingly) submissive slaves over oblivion?
To me it seems that terrible outcomes do not require sadism. Seems sufficient that P be low in empathy, and want from Q something Q does not want to provide (like admiration, submission, sex, violent sport, or even just attention).[1] I'm confused as to how/why you disagree.
Also, AFAICT, about 0.5% to 8% of humans are sadistic, and about 8% to 16% have very little or zero empathy. How did you arrive at "99% of humanity [...] are not so sadistic"? Did you account for the fact that most people with sadistic inclinations probably try to hide those inclinations? (Like, if only 0.5% of people appear sadistic, then I'd expect the actual prevalence of sadism to be more like ~4%.) ↩︎
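A minimal sketch of the adjustment I have in mind (the detection fraction here is my own illustrative assumption, not a figure from the cited statistics): if only a fraction $d$ of sadists are recognizable as such, then

$$\text{true prevalence} \approx \frac{\text{apparent prevalence}}{d}, \qquad \text{e.g.} \quad \frac{0.5\%}{1/8} = 4\%.$$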
It seems like you're assuming people won't build AGI if they don't have reliable ways to control it, or else that sovereign (uncontrolled) AGI would likely be friendly to humanity.
I'm assuming neither. I agree with you that both seem (very) unlikely.[1]
It seems like you're assuming that any humans succeeding in controlling AGI is (in expectation) preferable to extinction? If so, that seems like a crux: if I agreed with that, then I'd also agree with "publish all corrigibility results".
I expect that unaligned ASI would lead to extinction, and to our share of the lightcone being devoid of value or disvalue. I'm quite uncertain, though. ↩︎
It's more important to defuse the bomb than it is to prevent someone you dislike from holding it.
I think there is a key disanalogy to the situation with AGI: The analogy would be stronger if the bomb was likely to kill everyone, but also had some (perhaps very small) probability of conferring godlike power to whoever holds it. I.e., there is a tradeoff: decrease the probability of dying, at the expense of increasing the probability of S-risks from corrupt(ible) humans gaining godlike power.
If you agree that there exists that kind of tradeoff, I'm curious as to why you think it's better to trade in the direction of decreasing probability-of-death for increased probability-of-suffering.
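One minimal way to formalize that tradeoff (my framing and variable names, not necessarily yours): coarsely split outcomes into extinction (which I'd treat as roughly value zero), a good future (value $+V$), and an s-risk future (value $-W$). If making AGI controllable moves probability mass from extinction into the other two outcomes, then

$$\Delta\mathbb{E}[\text{value}] \;=\; \Delta p_{\text{good}} \cdot V \;-\; \Delta p_{\text{s-risk}} \cdot W,$$

which is negative whenever $\Delta p_{\text{s-risk}} / \Delta p_{\text{good}} > V/W$. So our disagreement might come down to how we'd each fill in those quantities.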
So, the question I'm most interested in is the one at the end of the post[1], viz.
What (crucial) considerations should one take into account, when deciding whether to publish---or with whom to privately share---various kinds of corrigibility-related results?
Didn't put it in the title, because I figured that would make the title too long. ↩︎
Taking a stab at answering my own question; an almost-certainly non-exhaustive list:
Would the results be applicable to deep-learning-based AGIs?[1] If I think not, how can I be confident they couldn't be made applicable?
Do the corrigibility results provide (indirect) insights into other aspects of engineering (rather than SGD'ing) AGIs?
How much weight one gives to avoiding x-risks vs s-risks.[2]
Who actually needs to know of the results? Would sharing the results with the whole Internet lead to better outcomes than (e.g.) sharing the results with a smaller number of safety-conscious researchers? (What does the cost-benefit analysis look like? Did I even do one?)
How optimistic (or pessimistic) one is about the common-good commitment (or corruptibility) of the people one thinks might end up wielding corrigible AGIs.
Something like the True Name of corrigibility might at first glance seem applicable only to AIs whose internals we have some meaningful understanding of, or control over. ↩︎
If corrigibility were easily feasible, then at first glance, that would seem to reduce the probability of extinction (via unaligned AI), but increase the probability of astronomical suffering (under god-emperor Altman/Ratcliffe/Xi/Putin/...). ↩︎
I think the main value of that operationalization is enabling more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].
which would lead to "losing the forest for the trees", make the abstraction too effortful to use in practice, and/or risk making it irrelevant as soon as something changes in the world of AI ↩︎
e.g. a higher-level abstraction like "AI that speeds up AI development by a factor of N" might at first glance seem more useful. But as you and ryan noted, speed-of-AI-development depends on many factors, so that operationalization would be mixing together many distinct things, hiding relevant causal structures of reality, and making it difficult/confusing to think about AI development. ↩︎
I think this approach to thinking about AI capabilities is quite pertinent. Could be worth including "Nx AI R&D labor AIs" in the list?
Cogent framing; thanks for writing it. I'd be very interested to read your framing for the problem of "how do we get to a good future for humanity, conditional on the first attractor state for AGI alignment?"[1]
Would you frame it as "the AGI lab leadership alignment problem"? Or a governance problem? Or something else? ↩︎
Thanks for the answer. It's nice to get data about how other people think about this subject.
the concern that the more sociopathic people wind up in positions of power is the big concern.
Agreed!
Do I understand correctly: You'd guess that
If so, then I'm curious -- and somewhat bewildered! -- as to how you arrived at those guesses/numbers.
I'm under the impression that narcissism and sadism have prevalences of very roughly 6% and 4%, respectively. See e.g. this post, or the studies cited therein. Additionally, probably something like 1% to 10% of people are psychopaths, depending on what criteria are used to define "psychopathy". Even assuming there's a lot of overlap, I think a reasonable guess would be that ~8% of humans have at least one of those traits. (Or 10%, if we include psychopathy.)
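As a rough sanity check on that combined figure (just the bounds implied by the prevalence estimates above; the degree of overlap is the uncertain part):

$$\max(6\%,\,4\%) \;\le\; P(\text{narcissistic} \lor \text{sadistic}) \;\le\; 6\% + 4\% = 10\%,$$

so ~8% sits near the middle of the possible range, and folding in psychopathy (somewhere in the 1% to 10% range, presumably with heavy overlap) pushes the combined estimate toward ~10%.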
I'm guessing you disagree with those statistics? If yes, what other evidence leads you to your different (much lower) estimates?
Do you believe that someone with (sub-)clinical narcissism, if given the keys to the universe, would bring about good outcomes for all (with probability >90%)? Why/how? What about psychopaths?
Do you completely disagree with the aphorism that "power corrupts, and absolute power corrupts absolutely"?
Do you think that having good intentions (and +0 to +3 SD intelligence) is probably enough for someone to produce good outcomes, if they're given ASI-grade power?
FWIW, my guesstimates are that
it would be so easy to benefit humanity, just by telling your slave AGI to go make it happen. A lot of people would enjoy being hailed as a benevolent hero
I note that if someone is using an AGI as a slave, and is motivated by wanting prestige status, then I do not expect that to end well for anyone else. (Someone with moderate power, e.g. a medieval king, with the drive to be hailed a benevolent hero, might indeed do great things for other people. But someone with more extreme power -- like ASI-grade power -- could just... rewire everyone's brains; or create worlds full of suffering wretches, for him to save and be hailed/adored by; or... you get the idea.)
Even relatively trivial things like social media or drugs mess lots of humans up; and things like "ability to make arbitrary modifications to your mind" or "ability to do anything you want, to anyone, with complete impunity" are even further OOD, and open up even more powerful superstimuli/reward-system hacks. Aside from tempting/corrupting humans to become selfish, I think that kind of situation has high potential to just lead to them going insane or breaking (e.g., starting to wirehead) in any number of ways.
And then there are other failure modes, like insufficient moral uncertainty and locking in some parochial choice of values, or a set of values that made sense in some baseline human context but which generalize to something horrible. ("Obviously we should fill the universe with Democracy/Christianity/Islam/Hedonism/whatever!", ... "Oops, turns out Yahweh is pretty horrible, actually!") ↩︎
That's a good thing to consider! However, taking Earth's situation as a prior for other "cradles of intelligence", I think that consideration brings us back to the question of "should we expect Earth's lightcone to be better or worse than zero value (conditional on corrigibility)?"