Comments

havequick

I'm curious though if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming "mainstream". Do you expect to see change, and to see their views challenged? My question is loaded, but it seems you are already invested in its answer.

I think there's a case to be made for AGI/ASI development and deployment as a "hostis humani generis" act, and others have made that case as well. I am confused (and, let's be honest, increasingly aghast) as to why AI doomers rarely try to press this angle in their debates/public-facing writings.

To me it feels like AI doomers have been asleep on sentry duty, and I'm not exactly sure why. My best guesses look somewhat like "some level of agreement with the possible benefits of AGI/ASI" or "a belief that AGI/ASI is overwhelmingly inevitable and so it's better not to show any sign of adversariality towards those developing it, so as to best influence them to mind safety", but this is quite speculative on my part. I think LW/EA stuff inculcates in many a grievous and pervasive fear of upsetting AGI accelerationists/researchers/labs (fear of retaliatory paperclipping? fear of losing mostly illusory leverage and influence? getting memed into the idea that AGI/ASI is inevitable and unstoppable?).

It seems to me like people whose primary tool of action/thinking/orienting is some sort of scientific/truth-finding rational system will inevitably lose against groups of doggedly motivated, strategically+technically competent, cunning unilateralists who gleefully use deceit/misdirection to prevent normies from catching on to what they're doing, and who are motivated by fundamentalist pseudo-religious impulses ("the prospect of immortality, of solving philosophy").

I feel like this foundational dissonance makes AI doomers come across as confused, fawning wordcels or hectoring cultists whenever they face AGI accelerationists / AI risk deniers (who, in contrast, tend to come across as open, frank, honest, aligned, assertive doers and people of action). This vibe is really not conducive to convincing people of the risks/consequences of AGI/ASI.

I do have hopes, but they feel kinda gated on "AI doomers" being many orders of magnitude more honest, unflinchingly open, and unflatteringly frank about the ideologies that motivate AGI/ASI researchers, and about the intended/likely consequences of their success -- even if "alignment/control" gets solved -- namely total technological unemployment and consequent social/economic human disempowerment, instead of continuing to treat AGI/ASI as some sort of neutral (if not outright necessary) but highly risky technology like rockets or nukes or recombinant DNA technology. Also gated on explicitly countering the contentions that AGI/ASI -- even if aligned -- is inevitable/necessary/good, that China is a viable contender in this omnicidal race, or that we need AGI/ASI to fight climate change or asteroids or pandemics, or all the other (sorry for being profane) bullshit that gets trotted out to justify AGI/ASI development. And gated on explicitly saying that AGI/ASI accelerationists are transhumanist fundamentalists who are willing to sacrifice the entire human species on the altar of their ideology.

I don't think AGI/ASI is inherently inevitable, but as long as AI doomers shy away from explaining that the AGI/ASI labs are specifically seeking to build (and will likely soon succeed in building) systems strong enough to turn the yet-unbroken -- from hunter-gatherer bands to July 2023 -- bedrock assumption of human society ("human labor is irreplaceably valuable") into fine sand, I think there's little hope of stopping AGI/ASI development.

havequick

The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of the productive power of humanity, isn't immaterial; in fact it looks like the default outcome.

Yes, this is why I've been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if "alignment" gets solved. Comparisons with industrialization and other technological developments are specious because none of them had the potential to do anything close to this.

havequick

Wouldn't an important invention such as the machine gun or, obviously, fission weapons fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could coordinate with the world powers at that time to agree to an "automatic weapon moratorium" it would result in a better world.

The problem is that Kaiser Wilhelm and other historical leaders are going to say "suuurrrreee", agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said "sureee" to such a deal on fission weapons, and we can assume he would immediately renege and test the devices in secret, only announcing their existence with a preemptive first strike on the enemies of the USSR.)

I might be misunderstanding your point but I wasn't trying to argue that it's easy (or even feasible) to make robust international agreements not to develop AGI.

The machine gun and nuclear weapons don't, AFAICT, fit my argument pattern. Powerful weapons like those certainly make humans easier to slaughter on industrial scales, but since humans are necessary to keep economies and industries and militaries running, military/political leaders have robust incentives to prevent large-scale slaughter of their own citizens and soldiers (and so do their adversaries for their own people). Granted, this prevention can get done by deterrence or arms-control agreements, but it has also spawned arms races, preemptive strikes, and wars hot and cold. Nevertheless, the bedrock of "human labor/intelligence is valuable/scarce" creates strong restoring forces towards "don't senselessly slaughter tons of people". It is possible to create robust-ish (pretty sure Russia's cheating with them Novichoks) international agreements against weapons that are better at senseless civilian slaughter than at achieving military objectives; chemical weapons are the notable case.

The salient threat to me isn't "AGI gives us better ways to kill people"; society has been coping remarkably well with better ways to kill people, up to and including a fleet of portable stars that can be dispatched to vaporize cities in the time it took me to write this comment. The salient threat to me (which seems inherent to the development of AGI/ASI) is "AGI renders the overwhelming majority of humanity economically/socially irrelevant, and therefore the overwhelming majority of humanity loses all agency, meaning, decision-making power, and bargaining power, and is vulnerable to inescapable and abyssal oppression, if not outright killing, because there are no longer any robust incentives to keep them alive/happy/productive".

havequick

I very much agree with you here and in your "AGI deployment as an act of aggression" post; the overwhelming majority of humans do not want AGI/ASI and its straightforward consequences (total human technological unemployment and concomitant abyssal social/economic disempowerment), regardless of what paradisaical promises are made to them (promises for which there is no recourse if they are not kept: economically useless humans can't go on strike, etc.).

The value (which is synonymous with "scarcity") of human intelligence and labor output has been a foundation of every human social and economic system, from hunter-gatherer groups to highly advanced technological societies. It is the bedrock on which humanity has built cooperation, benevolence, compassion, and care. The value of human intelligence and labor output gives humans agency, meaning, decision-making power, and bargaining power towards each other and over corporations/governments. Beneficence flows from this general assumption of human labor value/scarcity.

So far, technological development has left this bedrock intact, even if things have been bumpy (I was gonna say "rocky", but that's a mixed metaphor for sure) on the surface. The bedrock has still been there after the smoke cleared, time and time again. Comparing opponents of AGI/ASI with Luddites or the Unabomber, accusing them of being technophobes, or insinuating that they would have wanted to stop the industrial revolution is wildly specious: unlike every other invention or technological development, successful AGI/ASI development will convert this bedrock into sand. So far, technological development has been wildly beneficial for humanity; technological development that has no need for humans is not likely to keep up that record. The OpenAI mission is literally to create "highly autonomous systems that outperform humans at most economically valuable work", a flowery way to say "make human labor output worthless". Fruitful cooperation between AGI/ASI and humans is unlikely to endure, since at some point the transaction costs (humans don't have APIs, are slow, need to sleep/eat/rest, etc.) outweigh whatever benefits cooperation brings.

There's been a significant effort to avoid reckoning with or acknowledging these aspects of AGI/ASI (again, AGI/ASI is the explicit goal of AI labs like OpenAI, not autoregressive language models) and their likely (if not explicitly sought-out) consequences in the public-facing discourse between doomers and accelerationists. As much as it pains me to come to this conclusion, it really does feel like there's a pervasive gentleman's agreement to avoid saying "the goal is literally to make systems capable of bringing about total technological unemployment". This is not aligned with the goals/desires/lives of the overwhelming majority of humanity, and the deception deployed to avoid widespread public realization of this sickens me.

I wrote a handful of comments on the EA forum about this as well.

AGI is potentially far more useful and powerful than nuclear weapons ever were, and also provides a possible route to breaking the global stalemate with nuclear arms.

If this is true -- or perceived to be true among nuclear strategy planners and those with the authority to issue a lawful launch order -- it might create disturbingly (or delightfully, if you see this as a way to prevent the creation of AGI altogether) strong first-strike incentives for nuclear powers which don't have AGI, don't want to see their nuclear deterrent turned to dust, and don't want to be put under the sword of an adversary's AGI.

Re "they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement" I agree with that in the abstract; few people will say that a state of high physiological alertness/vigilance is Actually A Good Idea to cultivate for threats/risks not usefully countered by the effects of high physiological alertness.

Being able to reason about that in the abstract doesn't necessarily transfer to actually stopping doing it. Personally, I feel like being told something along the lines of "you're working yourself up into a counterproductive state of high physiological alertness about the risks of [risk] and counterproductively countering that with incredibly abstract thought disconnected from useful action" is not something I am very good at hearing from most people when I am in that sort of extraordinarily afraid state. It can really feel like someone wants to manipulate me into thinking that [risk] is not a big deal, or discourage me from doing anything about [risk], or that they're seeking to make me more vulnerable to [risk]. These days this is rarely the case, but the heuristic still sticks around. Maybe I should find its commanding officer so it can be told by someone it trusts that it's okay to stand down...

To continue the military analogy: it's like you've been asked to keep an eye out for a potential threat, and your commanding officer tells you on the radio to go to REDCON 1. Later on, you hear an unfamiliar voice on the radio which doesn't authenticate itself, and it keeps telling you that your heightened alertness is actually counterproductive and that you should stand down.

Would you stand down? No, you'd be incredibly suspicious! Interfering with the enemy's communications is fair game in war. Are there situations where you would indeed obey the order from the unfamiliar voice? Perhaps! Maybe your commanding officer's vehicle got destroyed, or, more prosaically, maybe his radio died. But it would have to be in a situation where you're confident the voice represents legitimate military authority. It would be a high bar to clear, since if you do stand down and it was an enemy ruse, you're in a very bad situation either way, whether you get captured by the enemy or court-martialed for disobeying orders. If it seems like standing down makes zero tactical/strategic sense, your threshold would be even higher! In the extreme, nothing short of your commanding officer showing up in person would be enough.

All of this is totally consistent with the quoted section in the OP that mentions "Goals and motivational weightings change", "Information-gathering programs are redirected", "Conceptual frames shift", etc. The high-physiological-alertness program has to be a bit sticky; otherwise a predator stalking you could turn it off by sitting down, and you'd be like "oh, I guess I'm not in danger anymore". If you've been successfully tricked by a predator into thinking it broke off the hunt when it was really finding a better position to attack you from, the program's gonna be a bit stickier, since its job is to keep you from becoming food.

To get away from the analogies, I really appreciate this piece and how it was written. I specifically appreciate it because it doesn't feel like it is an attempt to make me more vulnerable to something bad. Also I think it might have helped me get a bit of a felt sense shift.