You can possibly put a lid on European AI research; the biggest sign of this is civilian administrative oversight of the internet, which is incredibly tight, combined with the European Union's non-authoritarian government model, in which individual human will counts for very little next to institutional decisions. The United States? You can decapitate AI there too, sure, but it is much less likely than in Europe (and a Europe-wide decapitation campaign is already something lawmakers are very unlikely to wrap their heads around). The US, with its current edge in development, has at least shown the most organic development of alignment research. This should worry you: other countries started from America's already developed technological doctrines without (usually) adopting the logical doctrines that led to that state of technology. China didn't need a Von Neumann and a Turing to get to the Baidu search engine.

What about Japan, Brazil, South Korea, India, Singapore, and Taiwan? Players that don't cross your mind much but that usually surpass parts of Western Europe in behind-the-scenes development, or at least in replication. Does MIRI have any reach in Singapore? These countries certainly seem to be well out of hearing range of Eliezer's pleas while developing parallel and much less safe models. Try to find a native alignment group in Asia; these nations weren't raised on a hundred years of science fiction and Ex Machinas (Japan and Brazil excepted). They will develop AGI-reminiscent models with less than half the care we expect from OpenAI, and most researchers do not expect corrigibility and alignment even from OpenAI. What should follow is several "buts" and a "we are increasing global outreach" (well, if the singularity is coming in 2029, that is a very tight schedule to go from tentative influence in the English-speaking world to the entire globe), so let's say I accept all the preliminary arguments for why the Asian tigers listen to you, or will start listening to you even without any alignment breakthroughs yet.

What about Russia? A country with a competitive defense AI program focused on nuclear and autonomous killing systems; that's a yuck, right? It's a black box you don't get insight into. Even the civilian wing of development usually puts effort into preventing you from interfering. The military wing, on the other hand, is aiming directly for an "Allied Mastercomputer" (such is the case with all governments, but particularly so for Russia) without having read I Have No Mouth, and I Must Scream. These models don't even need to aim for general intelligence in the fashion of AM, and don't need the intelligence capacity to actively plan on getting rid of humanity. They just need to be uncalibrated and unhinged when left to roam, and you get a direct loss state for humanity.

There are at least two other secretive players: Israel and China, building defensive (and in some cases offensive) systems with researchers clueless about alignment while actively obscuring their developments. That's your recipe for death right there. With China, there is a very high likelihood that you will not realize, even the day before the singularity happens, that China had been trying a novel new method of developing AGI. The last secretive, competent country where alignment holds near-zero weight that I want to mention is Iran, which actually surpasses Russian research capabilities (by selective metrics) and has high hopes for its future.
 

I am sorry to say, Eliezer, but the possibility of even a dignified death seems non-existent; success, as you say, is unthinkable. If obtaining effective regulation over machine learning models in the United States is a <5% likelihood, with even regulated research leading to alignment being perhaps <1%, then we are all walking in coffins if we expect global success.
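To make that arithmetic concrete, here is a back-of-the-envelope sketch of my own (the count of roughly ten relevant state and corporate actors, and the assumption that their outcomes are independent and each no more tractable than the United States, are my simplifications, not figures from the post I am replying to):

$$P(\text{global success}) \;\le\; \prod_{i=1}^{n} p_i \;\le\; (0.05)^{10} \approx 10^{-13}$$

Even if every per-actor estimate above is an order of magnitude too pessimistic, the product remains vanishingly small; that is the sense in which expecting global success means walking in coffins.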

This post teeters on the edge of universal nihilism, and I imagine it is something everyone would like to disagree with. Still, anyone who can hold contemporary national policies next to the current state of alignment research in their head has to be looking at an extremely grim portrait. Models of the AI future usually benefit from not trying to account for every single factor, but going country by country on AI safety seems to blindside most people; it certainly did my beta reader. Particularism works better here, since we know the number of AI-developing actors. On the other hand, I and others do tend to name names. OpenAI, Stable Diffusion, and Character AI get extreme interest even though they comprise a fraction of total effort; still, they are the most competitive ones in their field besides China. Military, medical, surveillance, moderation, and industrial-automation capabilities are by their nature less public and much more threatening as a route to a death scenario, without even bringing in AGI. These also tend to be the areas where non-Western powerhouses are on par with Western ones.

Where do we go from here? I will try not to formulate a preemptive answer; I just wanted to show that, if anything, safety research has even less reach than its pessimistic opponents frame it as having. I did not get into the particulars of how humanity could be destroyed before we reach the singularity; it requires little imagination where military and medical projects are concerned, and for the others I will not needlessly throw out my scenarios. Speculation about "what to do then?" is beyond the goal of this post. Thank you for reading, and please do say I am wrong and convince me otherwise.


 

6 comments

I definitely want to see more of this.

I think you have a pretty solid understanding of this, and what would have been really helpful is links or sources. Not because I doubt any of the claims here, but because there's a pretty long list of things here that I'd like to read more about, and there's nothing to click.

This is a big part of open-source intelligence, which is becoming more popular on LessWrong; even if someone gets half of the stuff wrong or doesn't know what they're talking about (which isn't what happened here, this is a solid understanding), there's so much going on in the international space that providing links gives any reader tons of opportunities to discover information they probably never would have collided with otherwise.

Can you go into more detail about China? I've heard that China is the biggest concern since there's a big AI gap between China and the rest, and I've also heard that there's a pretty big and potentially long-lasting gap between China's AI industry and the US due to the US successfully recruiting talent from China (of course, all these countries have intelligence agencies that can steal secrets from all sorts of places). But it's really hard to get good info about China because, as you said, it's a really non-transparent country.

[-][anonymous] · 1y

Oh, I fumbled the formatting, hence no links; thanks for pointing that out, though the post still isn't exactly source-rich. Regarding China, I have personally experienced how competent Chinese chatbots and image generation can be; they don't feel an inch behind their Western competitors at all. There is enough economic support, means, and will there to produce AGI better than anywhere else in the world, with none of the responsibility we try to hold the likes of Google to.

I think the best reason for hope here is the cosmopolitanism of technical communication. I mentioned a version of this a few years ago. One cannot control the politics and culture of all the different contenders in the AI race, but they all pay attention to (and many contribute to) various public technical literatures, and so one may hope that crucial ideas regarding what is risky and what is safe, if published, will travel beyond the particular AI "research culture" that originates them.

[-][anonymous] · 1y

I imagined exactly this reply, and it falls neatly onto the many programs of Asia (India, Singapore, Taiwan) where there isn't effective reach currently but there is high cultural compatibility and an open scientific culture. In the cases of China and Israel, though, the problem is that both their state and private initiatives are more "hostile" toward any outsider idea, coming from anywhere in the world, that dictates they should intentionally neuter their progress unless they have a domestic legal reason to do so. We can hope that alignment is such an important topic that it will get through somehow, but considering how little reach it already has in an "open" environment that has been primed for taking near-future issues seriously, I don't see any dignified-death scenario where at least all current major AI companies have agreed that their first priority is general human alignment.

Regarding AI research programs that are closed and are pushing capabilities ahead as fast as possible, I see two reasons for hope. 

First: if they are smart enough to genuinely have a chance of creating superhuman AI, hopefully they are also smart enough to understand that superhuman AI could have its own agenda, and has the potential to be a threat from within, rivaling any external threats that may be motivating the researchers. 

Second: as AI advances, AI itself can contribute to alignment theory. The technology itself therefore has some possibility of improving the strategic wisdom of any group trying to develop it.  

Probably the most relevant post I've seen on here in a while. I do think the politics around AI (especially global politics) are the most pressing issue at this point in time.