AI | World Optimization | Frontpage

[ Question ]

What's the difference between GAI and a government?

by DirectedEvolution
21st Oct 2020
1 min read

3 Answers

crl826

Oct 22, 2020


Why Not Just: Think of AGI Like a Corporation?

DirectedEvolution

Thanks, I didn’t know this series existed and it looks like it covers a lot of my questions in an accessible way!


magfrump

Oct 22, 2020


I think this post is pointing to some strong analogies between them, though there are also some obvious disanalogies, such as the time it takes for a completely new agent to arise.


Viliam

Oct 23, 2020


Do the restraints that have so far prevented governments/corporations from paperclipping the world map onto any proposed strategies for AI alignment?

I think the main restraint here is time: the self-enhancement of governments and corporations is very slow and unreliable.

And this planet has already been partially "paperclipped": the environment is being destroyed, and corporations and governments oppress people in many places.

From a pessimistic perspective, the only reason democracy works is that satisfied and educated humans are economically more productive, so you can extract more resources from them if you keep them happy. With the invention of human-level AI, this restraint will be gone.

Dagon

From a pessimistic perspective, the only reason democracy works is that satisfied and educated humans are economically more productive, so you can extract more resources from them if you keep them happy.

I find this an optimistic perspective. If Moloch is aligned with satisfaction and education, the win is stable.

With the invention of human-level AI, this restraint will be gone.

Perhaps. A lot depends on exact values and whether it remains true that overall productivity depends on satisfied and educated humans. And also on whether human-level A...

I have zero technical AI alignment knowledge. But this question has kept recurring to me for like a year now so I thought I'd ask.

A lot of the arguments for the danger of GAI revolve around the notion that an agent smarter than a human is un-boxable, self-creating, self-enhancing, and not necessarily aligned with human interests.

That pattern-matches very well onto "governments," "corporations," and other forms of collective agencies. They have access to collective intelligence far beyond what's accessible to an individual. That intelligence brings them power that even the cleverest individual cannot evade in the long term. Their goals aren't necessarily aligned with human values. They use their intelligence and power to enhance their own intelligence and power. They're not always successful, but they are often able to learn from their mistakes. If one agency destroys itself, another takes its place.

How much bearing does this have on technical AI alignment work? Can AI alignment work translate into solutions for the problems we presently have in aligning these agencies to human values? Do the restraints that have so far prevented governments/corporations from paperclipping the world map onto any proposed strategies for AI alignment?