Or would a higher level of global coordination and cooperation be required?

For example:

- Loosely Regulated Capitalism: Capitalist societies often prioritize profit and short-term gains, which might discourage thorough testing and careful deployment of AI systems. A race to market could lead to the deployment of AI systems that are not fully aligned with human values. Furthermore, capitalism often encourages competition, which could discourage the kind of global cooperation that may be necessary for safe AI alignment.

- Non-Aligned Countries Leading to an Arms Race: If AI development and regulation are not globally coordinated, countries or corporations may race to build the most powerful AI first, neglecting safety protocols and alignment principles along the way. The result could be AI that is not adequately controlled or understood by humans, threatening global stability.

- High Energy Cost: The high energy demands of advanced AI systems could strain existing resources, potentially leading to conflict over them and jeopardizing the alignment process.

- Data Bias and Privacy: Unregulated use of AI could lead to invasions of privacy and perpetuate bias in decision-making, from hiring to law enforcement.

- Climate Change: High-energy computational processes contribute to climate change, and as AI models grow larger, the energy required to train them increases sharply (see the back-of-envelope sketch after this list). This could exacerbate global warming and have profound societal impacts, which might in turn make AI alignment more challenging by adding to the complexity of the human values that AI systems need to understand and align with.

- Social Inequality: Where the benefits of AI are not equally distributed, social inequality can be exacerbated. A misaligned AI system could further concentrate wealth and power in the hands of a few, potentially causing societal unrest. Moreover, increased inequality could produce a larger, more desperate human population seeking employment and resources, which could in turn be exploited for the mass production of robots with human-like dexterity and skills. For more on this, see Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment.
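
To make the climate-change point concrete, here is a rough back-of-envelope sketch of training energy. It is a minimal illustration, not a precise accounting: the 6·N·D compute rule of thumb is standard for dense transformers, but the hardware-efficiency figure below is an assumption, and real training runs vary widely.

```python
# Back-of-envelope estimate of training energy for a dense transformer.
# Rule of thumb: training compute ~= 6 * N * D FLOPs
# (N = parameter count, D = training tokens).

def training_energy_mwh(params: float, tokens: float,
                        flops_per_joule: float = 1e11) -> float:
    """Estimated training energy in megawatt-hours.

    flops_per_joule is an assumed effective efficiency for modern
    accelerators after utilization losses; adjust to taste.
    """
    flops = 6 * params * tokens          # total training FLOPs
    joules = flops / flops_per_joule     # energy at the assumed efficiency
    return joules / 3.6e9                # 1 MWh = 3.6e9 J

# GPT-3 scale (~175B parameters, ~300B tokens): on the order of 10^3 MWh,
# in the same ballpark as published estimates for that run.
print(f"{training_energy_mwh(175e9, 300e9):.0f} MWh")

# A model 10x larger trained on 10x more data needs ~100x the energy.
print(f"{training_energy_mwh(1.75e12, 3e12):.0f} MWh")
```

Because parameter count and dataset size tend to grow together, energy use scales roughly with their product, which is why model scaling feeds directly into the climate concern above.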

1 comment

The most common assumption on Less Wrong has been that once AGI is created, it will swiftly become superintelligent, and humans will lose all power over it. Under this paradigm, for an aligned future to occur, it is necessary and sufficient that the first self-enhancing AGI is aligned, and retains that alignment as it becomes superintelligent. If it is aligned and remains aligned, it will build an aligned future, no matter what troubles humans might create. And similarly, if it is unaligned or becomes unaligned, no amount of human effort or ingenuity will be able to compensate for that. 

Under this paradigm, AGI alignment has a chance of being achieved in worlds that are arbitrarily misguided or dystopian, so long as the first self-enhancing AGI turns out to be properly aligned. 

This is the paradigm associated with Eliezer Yudkowsky. A contrasting paradigm, associated with Robin Hanson, is that social structures, such as politics, the economy, or culture, will always have more power than any individual intelligence. In that scenario, social arrangements continue to matter for alignment even after AGI is created, not just before.