I think that this may be the case, but I would be much more cautious about trying to regulate AI development. I'd start with baby steps that mostly won't cost too much or provoke backlash, like interpretability research.
My model of the situation is:
1. People are more or less rational; that is, we shouldn't expect deviations from rational-agent models.
2. People are mostly selfish, with altruism being essentially signalling, which has little value here.
3. AI has a large enough chance of bringing vastly positive changes, on par with a singularity, that this dominates other considerations.
In other words, even if there were only a 1% chance of a singularity, it would carry enough impact that even belief in high AI risk would be insufficient to get the population on your side.
In a nutshell, this is why I do not think the post is correct, and why I think the AI governance/digital democracy/privacy movements are greatly overestimating what costs can be imposed on AI companies (also known as alignment taxes).
I think AI governance could be surprisingly useful. But attempts to slow things down significantly are mostly unrealistic for the time being.
Good to read your thoughts.
I would agree that slowing down further developments in AI capability generalisation by more than half in the coming years is highly improbable. We've got to work with what we have.
My mental model of the situation is different.
1. People engage in positively reinforcing dynamics around social prestige and market profit, even when what they are doing is net bad, over the long run, for what they care about.
2. People are mostly egocentric, and have difficulty connecting and relating, particularly in the current individualistic social-signalling and "divide and conquer" market environment.
3. Scaling up the deployable capabilities of AI has enough of a chance of reaping extractive benefits for narcissistic/psychopathic tech-leader types that they will go ahead with it, while sowing the world with techno-optimistic visions that suit their strategy. That is, they will proceed even though general AI will (cannot not) lead to the wholesale destruction of everything we care about in the society and larger environment we're part of.
So we agree that people are selfish/egocentric, essentially.
My problem is that, from a selfish perspective, even a low chance of a technological singularity (say, surviving to experience what is, from your perspective, essentially a near-heaven) outweighs the high chance of harm to the self and others by multiple orders of magnitude. Arguably more than 10 orders of magnitude.
Even most non-narcissists/non-psychopaths would take this deal, and unless convenient plot-induced stupidity occurs, we should expect this again and again.
So I disagree with numbers 1 and 3, since, given their selfishness, people can distribute externalities to others.
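The expected-value arithmetic behind this argument can be sketched with deliberately made-up numbers. Every probability and utility below is an illustrative assumption chosen only to show the shape of the claim, not an estimate of real-world odds:

```python
# Illustrative expected-value sketch of the "low chance of near-heaven
# dominates" argument. All numbers are hypothetical placeholders.
p_heaven = 0.01    # assumed 1% chance of a singularity-level payoff
u_heaven = 1e12    # assumed utility of that payoff (arbitrary units)
p_harm = 0.99      # assumed high chance of severe harm instead
u_harm = -100.0    # assumed disutility of that harm (arbitrary units)

ev = p_heaven * u_heaven + p_harm * u_harm
upside = p_heaven * u_heaven
downside = abs(p_harm * u_harm)

print(ev)                 # positive: the tiny upside term dominates
print(upside / downside)  # ratio of the upside term to the downside term
```

With these placeholder numbers the upside term is roughly eight orders of magnitude larger than the downside term, so the expected value stays positive even though harm is 99% likely; the conclusion is driven entirely by how large one assumes the singularity payoff to be.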
I think there are a bunch of relevant but subtle differences in how we are thinking about this. My beliefs, after quite a lot of thinking, are:
A. Most people don’t care about a tech singularity. People are captured by the AI hype cycles, though, especially people who work under the tech elite. The general public is much more wary overall of current uses of AI, and is starting to notice the harms in daily life (e.g. addictive social media that reinforces ideology and distorted self-images, exploitative work gigs handed out by algorithms).
B. The tech singularity, as envisioned in the past, involved a lot of motivated and simplifying reasoning about directing the complex world into utopias using complicated tech, outcomes that cannot realistically be brought about by those methods. Tech elites like to co-opt these nerdy utopian visions for their own ends.
C. By your descriptions, I think you are essentialising humans as rational individuals who socially signal for self-benefit. I’m actually saying that, yes, people are egocentric right now, particularly in the neoliberal, consumption-oriented market and self-presentation-oriented culture we are exposed to. But humans are also social creatures who can relate and interact based on deeper shared needs. So I’m not essentialising people as fundamentally selfish. I’m saying that within this social environment, layered on top of our tribal and sex-and-survival-oriented psychological predispositions, people come out as particularly egocentric.
D. I don’t think baby steps are going to do it, given that we’re dealing with potential auto-scaling/catalysing technology that would mark the end of organic DNA-based life. The baby steps description reminds me of various scenes in the film “Don’t Look Up” where bystanders kept signalling to the main actors not to “overdo it”.
E. Interpretability techniques are used by tech elites to justify further capability developments. Interpretability techniques do not and cannot contribute to long-term AGI safety (https://www.lesswrong.com/posts/NeNRy8iQv4YtzpTfa/why-mechanistic-interpretability-does-not-and-cannot).
So 1 and 3 were my descriptions of what is actually happening and how it would continue, not conclusions about where it ends up. To disagree with them, I think you would need to clarify your observations/analysis of why something opposite or different is happening.
One key point to keep in mind is that my arguments aren't about refuting the idea of slowing down AI, instead it's about offering a reality check.
The reason I said baby steps is that (1) they might be enough, and (2) even if they aren't, one common failure mode in politics is to go fully maximalist with your agenda first. That is a route to failure. It is better to start with the least controversial/costly measures and, only if necessary, add more costly/controversial laws later. Even so, this is extremely delicate: a single case of bad publicity, or anything else that makes governing AI very controversial, may well doom the effort.
Another lesson from politics is that your opposition (AI companies) is probably rational, but has very different goals from the median LW/EA person. So we shouldn't expect unusually easy wins in this area, and progress will likely be slow, especially in lobbying.
It's still very useful for AI governance to pursue this: high risk does not mean there aren't high rewards, especially if you think AI alignment is possible. Governance can help alignment do its best work, as well as prevent s-risks. But I do think AI governance may be overestimating what costs the public and companies are willing to bear for regulations, especially if AI companies can push externalities onto others.
For example, the climate change agenda stalled until solar, wind and batteries became cheap enough in the 2010s that moving out of fossil fuels represented a very cheap way to decarbonize. And still there's some opposition here.
That’s clarifying. I agree that immediately trying to impose costly/controversial laws would be bad.
What I am personally thinking about first here is “actually trying to clarify the concerns and find consensus with other movements concerned about AI developments” (which by itself does not involve immediate radical law reforms).
We first need to have a basis of common understanding from which legislation can be drawn.
Let me also copy over Forrest’s (my collaborator) notes here:
> people who believe false premises tend to take bad actions.
Argument 3:
- 1; That AGI can very easily be hyped so that even smart people
can be made to falsely/incorrectly believe that there "might be"
_any_chance_at_all_ that AGI will "bring vastly positive changes".
- ie, strongly motivated marketing will always be stronger than truth,
especially when VC investors can be made to think (falsely)
that they could maybe get 10000X return on investment.
- that the nature of AGI, being unknown and largely artificial,
futuristic, saturated with modernism and tech optimism,
high geekery, and is also very highly funded, means that AGI
capabilities development has arbitrary intelligent marketing support.
- 2; People (and nearly all other animals) are mostly self-oriented.
- that altruism is usually essentially social signalling,
and is actually of very little value as benefit to anything
other than maybe some temporary social prestige building.
- ie, each possibly participating person will see the possibility
that maybe they could ride "up to riches" on the research bandwagon,
and/or on any major shift in the marketing dynamics;
that in any change there will be winners and losers,
and they want a chance to be "on the winning side", since
everyone has bio-builtin social/market game addiction tendencies
(biases) and they think that they can use their high intelligence
to gain some personal strategic advantage.
- 3; People are *selectively* rational.
- Ie, that we should not expect deviations from rational agent models,
because our selective notion of rationality will likely match
our *also* self-selected models of 'rational actors'.
- as such, we can expect that there will be all sorts of
seemingly rational "arguments" that suggest that individual
selfish and self supporting action (favoring tech development)
is maybe "mostly harmless", and that at least some of the risks
are maybe over emphasized, and that "therefore" we should
maybe shift our actions towards the more (manufactured) "consensus"
that the "robustly good" action is "keep doing AGI capability
development" and also "increase safety work" -- and to be assuming
that anything else is either impossible or maybe "robustly bad",
or that at the very least, that the things that seem obvious
are probably not at all obvious, for complicated "rational reasons"
that just happen to align with their motivated preferred view.
- 4; thus the false belief that there "might be" some non-zero
small chance that AGI can be "aligned" so as to bring about
whatever positive changes (hype the huge return on investment!)
is so strong/motivating that it dominates all other considerations.
- as that selective motivated reasoning in the possibility that
someone can be part of the winning team and make history is
so strong that even the suggestion that the very notion that
*any* AGI persistently existing is inherently contradictory
with the notion of the continuing survival of life on this planet
is completely rejected without any further examination.
I am honestly very confused about how Forrest is so confident that radical positive changes will not happen in our lifetime.
More importantly, he seems to be complaining that his opponents have different goals, and claims they're selectively rational. But note that rational behavior can only be assessed once goals are specified. To him, his goals are probably much less selfish than those of people who want AI progress to speed up, so from his vantage point it's not rational for AI capabilities to increase. I too do not think AI progress is beneficial, and believe it is probably harmful, so I'd slow down progress too.
This is critical, because Forrest is misidentifying why pro-progress people want AI to advance. They want it because their goals differ greatly from yours, not because of a rationality failure.
Another critical crux is that I am far more optimistic than Forrest or Remmelt about AGI alignment working out in the end. If my pessimism were comparable to theirs, I too would probably advocate far more for governance strategies.
This is for several reasons:
My general prior is that most problems are solvable. This doesn't always hold (see the undecidability of the halting problem, or the impossibility of a perpetual motion machine), but my prior is that if there isn't a theorem prohibiting something and it doesn't rely on violating the laws of physics, it's solvable. AGI alignment is in this spot.
I believe alignment is progressing, not enough, to be clear, but if AI alignment were as well resourced as AI capabilities research, I'd give it a fair shot at solving the problem.
Finally, time. In the more conservative story described here, AGI still takes 20-30 years. While an AGI built now would probably be incompatible with life due to instrumental convergence and inner alignment failures, unless you hold extremely pessimistic beliefs about progress in AI alignment, 20-30 years is the kind of time frame over which I'd place 60% probability on having a working solution to the AGI alignment problem.
Responding below:
See reasons to shift your prior: https://www.lesswrong.com/posts/Qp6oetspnGpSpRRs4/list-3-why-not-to-assume-on-prior-that-agi-alignment
Again, no reasons are given for the belief that AGI alignment is “progressing” or would have a “fair shot” at solving “the problem” if as well resourced as capabilities research. There is basically nothing to argue against, because you are not yet providing arguments.
No reasons given, again. This presents instrumental convergence and intrinsic optimisation misalignment failures as the (only) threat models for why artificial general intelligence would be incompatible with organic, DNA-based life. It overlooks substrate-needs convergence.
I'll concede here that I unfortunately do not have good arguments, and I'm updating towards pessimism regarding the alignment problem.
Appreciating your honesty, genuinely!
Always happy to chat further about the substantive arguments. I was initially skeptical of Forrest’s “AGI-alignment is impossible” claim. But after probing and digging into this question intensely over the last year, I could not find anything unsound (in terms of premises) or invalid (in terms of logic) about his core arguments.
A friend in technical AI Safety shared a list of cruxes for their next career step.
The first crux was that they did not believe that progress on AI can be expected to stop.
Copy-pasting a list that I compiled in response (with light edits):