Why Engaging with Global Majority AI Policy Matters
Over the past six to eight months, I have been involved in drafting AI policy recommendations and official statements directed at governments and institutions across the Global Majority: Chile, Lesotho, Malaysia, the African Commission on Human and Peoples' Rights (ACHPR), Israel, and others. At first glance, this may appear to be a less impactful use of time than influencing more powerful jurisdictions like the United States or the European Union. But I argue that engaging with the Global Majority is essential, neglected, and potentially pivotal in shaping a globally safe AI future. Below, I outline four core reasons.

1. National-Level Safeguards Are Essential in a Fracturing World

As global alignment becomes harder, we need decentralized, national-level safety nets. Some things to keep in mind:

* What if the EU AI Act is watered down tomorrow due to lobbying?
* The U.S. Biden Executive Order on AI has already been rolled back.

In such a world, country-level laws and guidance documents serve as a final line of retreat. Even modest improvements in national frameworks can meaningfully reduce the risk of AI misuse, particularly in high-leverage areas like biometric surveillance, automated welfare allocation, and predictive policing.

Moreover, in many Global Majority countries, the state remains the most powerful actor. When risks emerge, it is not always corporations but often ministries, police departments, or public-sector procurement decisions that determine outcomes. Consider the history of state-led atrocities enabled by surveillance or classification systems: Rwanda’s bureaucratic data categories, which were used to identify targets during the 1994 genocide, and Apartheid-era South Africa’s data collection, which enforced racial segregation. Engaging with governments, building governance capacity, and establishing public-sector-specific guardrails are therefore critical.

2. The Space Is Underserved and Entry Barriers Are Lower Tha