AnticipatingTheFuture

Comments

I also wrote about this interview via a LinkedIn article: On AGI: Excerpts from Lex Fridman’s interview of Sam Altman with commentary. I appreciated reading your post, in part because you picked up on some topics I overlooked. My own assessment is that Altman’s outlook derives from a mixture of utopianism and the favorable position of OpenAI. Utopianism can be good if tethered to realism about existing conditions, but realism seemed lacking in many of Altman’s statements.

Altman’s vision would be more admirable if the likelihood of achieving it were higher. Present world conditions make it likely that very different AGIs will emerge from the western democracies and from China, with no agreement on a fundamental set of shared values. At worst, this could cause an unmanageable escalation of tensions. And in a world where the leading AI powers are in conflict over values and over political and economic supremacy, and where all recognize the pivotal significance of AI, it is hard to imagine the adoption of a verifiable and enforceable agreement to slow, manage, or coordinate AGI development.

For the western democracies, this is likely to mean competition that is both intensified and managed: intensified as awareness of the global stakes grows, and managed because competition will increasingly have to be coordinated with national security needs and with efforts to preserve social cohesion and economic openness. AGI could confer unassailable first-mover advantages, leading to extremely broad economic, if not social and political, domination, something the western democracies must prevent if they want to sustain their values.