LESSWRONG

geoffreymiller

Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.

Comments

80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!
geoffreymiller · 2mo

Excited to see this! 

Well done to the 'AI in Context' team. 

I'll share the video on X.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

boazbarak -- I don't understand your implication that my position is 'radical'.

I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does -- that it's a significant risk.

The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they're endangering all of our kids. 

In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

TsviBT - thanks for a thoughtful comment. 

I understand your point about labelling industries, actions, and goals as evil, but being cautious about labelling individuals as evil. 

But I don't think it's compelling. 

You wrote 'You're closing off lines of communication and gradual change. You're polarizing things.' 

Yes, I am. We've had open lines of communication between AI devs and AI safety experts for a decade. We've had pleas for gradual change. Mutual respect, and all that. Trying to use normal channels of moral persuasion. Well-intentioned EAs going to work inside the AI companies to try to nudge them in safer directions. 

None of that has worked. AI capabilities development is outstripping AI safety development at an ever-increasing rate. The financial temptations to stay working inside AI companies keep increasing, even as the X risks keep increasing. Timelines are getting shorter.

The right time to 'polarize things' is when we still have some moral and social leverage to stop reckless ASI development. The wrong time is after it's too late.

Altman, Amodei, Hassabis, and Wang are buying people's souls -- paying them hundreds of thousands or millions of dollars a year to work on ASI development, even though most of the workers they supervise know that they're likely to be increasing extinction risk.

This isn't just a case of 'collective evil' being done by otherwise good people. This is a case of paying people so much that they ignore their ethical qualms about what they're doing. That makes the evil very individual, and very specific. And I think that's worth pointing out.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Sure. But if an AI company grows an ASI that extinguishes humanity, who is left to sue them? Who is left to prosecute them? 

The threat of legal action for criminal negligence is not an effective deterrent if there is no criminal justice system left, because there is no human species left.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Drake -- this seems like special pleading from an AI industry insider.

You wrote 'I think working at an AI lab requires less failure of moral character than, say, working at a tobacco company, for all that the former can have much worse effects on the world.'

That doesn't make sense to me. Tobacco kills about 8 million people a year globally. ASI could kill about 8 billion. The main reason that AI lab workers think that their moral character is better than that of tobacco industry workers is that the tobacco industry has already been morally stigmatized over the last several decades -- whereas the AI industry has not yet been morally stigmatized in proportion to its likely harms. 

Of course, ordinary workers in any harm-imposing industry can always make the argument that they're good (or at least ethically mediocre) people, that they're just following orders, trying to feed their families, weren't aware of the harms, etc.

But that argument does not apply to smart people working in the AI industry -- who have mostly already been exposed to the many arguments that AGI/ASI is a uniquely dangerous technology. And their own CEOs have already acknowledged these risks. And yet people continue to work in this industry.

Maybe a few workers at a few AI companies might be having a net positive impact in reducing AI X-risk. Maybe you're one of the lucky few. Maybe.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Richard -- I think you're just factually wrong that 'people are split on whether AGI/ASI is an existential threat'.

Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc. 

There are a few exceptions, such as Yann LeCun. And there are a few AI CEOs, such as Sam Altman, who previously acknowledged the existential risks but now downplay them.

But if all the leading figures in the industry -- including Altman, Amodei, Hassabis, etc -- have publicly and repeatedly acknowledged the existential risks, why would you claim 'people are split'?

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Knight -- thanks again for the constructive engagement.

I take your point that if a group is a tiny and obscure minority, and they're calling the majority view 'evil', and trying to stigmatize their behavior, that can backfire.

However, the surveys and polls I've seen indicate that the majority of humans already have serious concerns about AI risks, and in some sense are already onboard with 'AI Notkilleveryoneism'. Many people are under-informed or misinformed in various ways about AI, but convincing the majority of humanity that the AI industry is acting recklessly seems like it's already pretty close to feasible -- if not already accomplished. 

I think the real problem here is raising public awareness about how many people are already on team 'AI Notkilleveryoneism' rather than team 'AI accelerationist'. This is a 'common knowledge' problem from game theory -- the majority needs to know that they're in the majority, in order to do successful moral stigmatization of the minority (in this case, the AI developers). 

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Ben -- your subtext here seems to be that only lower-class violent criminals are truly 'evil', whereas very few middle/upper-class white-collar people are truly evil (with a few notable exceptions such as SBF or Voldemort) -- with the implication that the majority of ASI devs can't possibly be evil in the ways I've argued.

I think that doesn't fit the psychological and criminological research on the substantial overlap between psychopathy and sociopathy, and between violent and non-violent crime.

It also doesn't fit the standard EA point that a lot of 'non-evil' people can get swept up in doing evil collective acts as parts of collectively evil industries, such as slave-trading, factory farming, Big Tobacco, the private prison system, etc. -- but that often, the best way to fight such industries is to use moral stigmatization.

A case for courage, when speaking of AI danger
geoffreymiller · 2mo

Hi Knight, thanks for the thoughtful reply.

I'm curious whether you read the longer piece about moral stigmatization that I linked to at EA Forum? It's here, and it addresses several of your points.

I have a much more positive view about the effectiveness of moral stigmatization, which I think has been at the heart of almost every successful moral progress movement in history. The anti-slavery movement stigmatized slavery. The anti-vivisection movement stigmatized torturing animals for 'experiments'. The women's rights movement stigmatized misogyny. The gay rights movement stigmatized homophobia. 

After the world wars, biological and chemical weapons were not just regulated, but morally stigmatized. The anti-landmine campaign stigmatized landmines. 

Even in the case of nuclear weapons, the anti-nukes peace movement stigmatized the use and spread of nukes, and was important in nuclear non-proliferation, and IMHO played a role in the heroic individual decisions by Arkhipov and others not to use nukes when they could have.

Regulation and treaties aimed at reducing the development, spread, and use of Bad Thing X, without moral stigmatization of Bad Thing X, don't usually work very well. Formal law and informal social norms must typically reinforce each other.

I see no prospect for effective, strongly enforced regulation of ASI development without moral stigmatization of ASI development. This is because, ultimately, 'regulation' relies on the coercive power of the state -- which relies on agents of the state (e.g. police, military, SWAT teams, special ops teams) being willing to enforce regulations even against people with very strong incentives not to comply. And these agents of the state simply won't be willing to use government force against ASI devs violating regulations unless these agents already believe that the regulations are righteous and morally compelling.

Posts

Biomimetic alignment: Alignment between animal genes and animal brains as a model for alignment between humans and AI systems (10 karma · 2y · 1 comment)
A moral backlash against AI will probably slow down AGI development (51 karma · 2y · 10 comments)
The heritability of human values: A behavior genetic critique of Shard Theory (82 karma · 3y · 63 comments)
Brain-over-body biases, and the embodied value problem in AI alignment (10 karma · 3y · 6 comments)
The heterogeneity of human value types: Implications for AI alignment (10 karma · 3y · 2 comments)
AI alignment with humans... but with which humans? (12 karma · 3y · 33 comments)