geoffreymiller

Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.

Comments

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 21m

Matrice -- maybe, if it were possible for people to boycott Google/DeepMind or Microsoft/OpenAI.

But as a practical matter, we can't expect hundreds of millions of people to suddenly switch from gmail to some email alternative, or to switch from Windows to Linux.

It's virtually impossible to organize a successful boycott of all the Big Tech companies that have oligarchic control of people's digital lives, and that are involved in AGI/ASI development.

I still think the key point of leverage is specific, personalized, grassroots social stigmatization of AGI/ASI developers and the people closely involved in what they're doing.

(But I could be convinced that Big Tech boycotts might be a useful auxiliary strategy).

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 3h

PS: the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 1d

Matrice -- for more on the stigmatization strategy, see my EA Forum post from a couple of years ago, here.

IMHO, a grassroots moral stigmatization campaign by everyone who knows AGI devs would be much more effective than just having a company's current users boycott that company.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

Oliver -- that's all very reasonable, and I largely agree. 

I've got no problem with people developing narrow, domain-specific AI such as self-driving cars, or smarter matchmaking apps, or suchlike. 

I wish there were better terms that could split the AI industry into 'those focused on safe, narrow, non-agentic AI' versus 'those trying to build a Sand God'. It's only the latter who need to be highly stigmatized.

Peace out :)

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

Seth -- thanks for sharing that link; I hadn't seen it, and I'll read it.

I agree that we should avoid making AI safety either liberal-coded or conservative-coded. 

But, we should not hesitate to use different messaging, emphasis, talking points, and verbal styles when addressing liberal or conservative audiences. That's just good persuasion strategy, and it can be done with epistemic and ethical integrity.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

Russell -- I take your point that in most alternative timelines, we would already have died decades ago in a nuclear war. I often make that point when discussing AI risk, to hammer home that humanity has no magical 'character armor' protecting us from extinction, and that nobody is coming to save us if we're dumb enough to develop AGI/ASI.

However, I disagree with the claim that 'our current situation is not one to preserve'. I know people in the military/intelligence communities who work full time on nuclear safety, nuclear non-proliferation, counter-terrorism, etc. There are tens of thousands of smart people across dozens of agencies across many countries who spend their entire lives reducing the risks of nuclear war. They're not just activists making noise from outside the centers of power. They're inside the government, with high security clearances, respected expertise, and real influence. I'm not saying the risk of nuclear war has gone to zero, but it is taken very seriously by all the major world governments.

By contrast, AI safety remains something of a fringe issue, with virtually no representation inside governments, corporations, media, academia, or any other power centers. That's the thing that needs to change.

We don't need a 'hail Mary' where we develop AGI/ASI and then hope that it can reduce nuclear risk more than it increases all other risks.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

I didn't say that all Rationalists are evil. I do consider myself a Rationalist in many ways, and I've been an active member of LessWrong and EA for years, and have taught several college courses on EA that include Rationalist readings.

What I did say, in relation to my claim that 'they've created a trendy millenarian cult that expects ASIs will fill all their material, social, and spiritual needs', is that 'This is the common denominator among millions of tech bros, AI devs, VCs, Rationalists, and effective accelerationists'.

The 'common denominator' language implies overlap, not total agreement. 

And I think there is substantial overlap among these communities -- socially, financially, ethically, geographically. 

Many Rationalists have been absolutely central to analyzing AI risks, advocating for AI safety, and fighting the good fight. But many others have gone to work for AI companies, often in 'AI safety' roles that do not actually slow down AI capabilities development. And many have become e/accs or transhumanists who see humanity as a disposable stepping-stone to something better.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

Yes. And way too much 'AI safety work' boils down to 'getting paid huge amounts by AI companies to do safety-washing & public relations, to kinda sorta help save humanity, but without upsetting my Bay Area roommates & friends & lovers who work on AI capabilities development'. 

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 2d

Ok, let's say we get most of the 8 billion people in the world to 'come to an accurate understanding of the risks associated with AI', such as the high likelihood that ASI would cause human extinction.

Then, what should those people actually do with that knowledge? 

Wait for the next election cycle to nudge their political representatives into supporting better AI safety regulations and treaties -- despite the massive lobbying and campaign contributions by AI companies? Sure, that would be nice, and it would eventually help a little bit.

But it won't actually stop AGI/ASI development fast enough or decisively enough to save humanity. 

To do that, we need moral stigmatization, right now, of everyone associated with AGI/ASI development. 

Note that I'm not calling for violence. Stigmatization isn't violence. It's leveraging human instincts for moral judgment and social ostracism to negate the status and prestige that would otherwise be awarded to such people.

If AI devs are making fortunes endangering humanity, and we can't negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs they're doing. We do that by calling them out as reckless and evil. This could work very quickly, without having to wait for national regulations or global treaties.

My talk on AI risks at the National Conservatism conference last week
geoffreymiller · 3d

Well, my toddler pronounces it 'pee doom'.

Posts

49 · My talk on AI risks at the National Conservatism conference last week · 3d · 33
10 · Biomimetic alignment: Alignment between animal genes and animal brains as a model for alignment between humans and AI systems · 2y · 1
51 · A moral backlash against AI will probably slow down AGI development · 2y · 10
82 · The heritability of human values: A behavior genetic critique of Shard Theory · 3y · 63
10 · Brain-over-body biases, and the embodied value problem in AI alignment · 3y · 6
10 · The heterogeneity of human value types: Implications for AI alignment · 3y · 2
12 · AI alignment with humans... but with which humans? · 3y · 33