LESSWRONG
Ulf M. Pettersson

Interested in improving the human condition.

Comments

What Is The Alignment Problem?
Ulf M. Pettersson · 8mo · 21

> Instead of trying to directly align individual agents' objectives, we could focus on creating environmental conditions and incentive structures that naturally promote collaborative behavior.

I think you are really on to something here. To align AI systems and agents, we could model solutions on the existing institutions that already ensure alignment in human societies.

Look to the literature in economics and social science that explains how societies manage to align the interests of millions of intelligent human agents, even though each of those agents acts in their own self-interest.
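A minimal sketch of the idea (my illustration, not something from the economics literature itself): in a one-shot Prisoner's Dilemma, a self-interested agent defects; but change the incentive structure to repeated interaction with a reciprocating partner (tit-for-tat), and cooperation becomes the higher-payoff choice, without modifying either agent's objective. All payoff values and strategy names here are assumptions chosen for the toy example.

```python
# Toy incentive-structure demo: repeated play makes cooperation pay,
# even for purely self-interested agents. Payoffs satisfy T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker (assumed values)

def one_shot(my_move, their_move):
    """Row player's payoff for a single round ('C' cooperate, 'D' defect)."""
    if my_move == "C":
        return R if their_move == "C" else S
    return T if their_move == "C" else P

def repeated_payoff(strategy, rounds=10):
    """Total payoff for `strategy` against a tit-for-tat partner."""
    partner_move, total = "C", 0
    for _ in range(rounds):
        move = strategy(partner_move)
        total += one_shot(move, partner_move)
        partner_move = move  # tit-for-tat: partner mirrors our last move
    return total

always_defect = lambda partner_last: "D"
always_cooperate = lambda partner_last: "C"

print(repeated_payoff(always_defect))     # 5 + 9*1 = 14
print(repeated_payoff(always_cooperate))  # 10*3   = 30
```

The incentive structure (repeated interaction plus reciprocity), not any change to the agents' goals, is what makes the cooperative strategy dominant here.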
