Up until the point where independent Artificial General Intelligence (AGI) exists, it is at least theoretically possible for humanity to prevent it from coming into existence, but that raises two questions: Should we prevent AGI? How can AGI be prevented? Similar questions can be asked about Artificial Super Intelligence (ASI): Should we prevent ASI? How can ASI be prevented? I think the "Should" questions are interesting,[1] but the rest of this post focuses on the "How can" questions.
PCs can pass Turing Tests
If you have not tried running a large language model on an ordinary personal computer in the past three months, I recommend doing so[2]. On almost any computer from the past decade with at least 4 GiB of RAM, it is possible to run an LLM that can pass a non-deception[3] Turing test. In most conversations, the local LLM will come across as at least as smart as a fifth grader, and in many cases as smart as or smarter than an average adult. Remember, this is running on your local computer (disconnect the network if you need to prove that to yourself). LLMs are not optimal for many tasks, so this is not the ceiling for what software on an ordinary PC can do.
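To make the experiment concrete, here is a minimal sketch of talking to a local model through Ollama's HTTP API (see footnote [2] for setup). It assumes Ollama is serving on its default port 11434 and that you have already pulled gemma3:1b; the function name and prompts are just examples of my own, not anything from Ollama itself.

```python
# Minimal sketch: query a model running locally via Ollama's /api/chat endpoint.
# Assumes `ollama pull gemma3:1b` has been run and Ollama is serving on localhost:11434.
import json
import urllib.request


def local_chat(system_prompt: str, user_prompt: str, model: str = "gemma3:1b") -> str:
    """Send one system+user exchange to the local Ollama server and return the reply."""
    payload = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,  # ask for a single JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


if __name__ == "__main__":
    # Everything stays on localhost: the model runs entirely on your own machine.
    question = "In a couple of sentences, why is the sky blue?"
    for system in ("You are a helpful assistant.",
                   "You are a chatty emotionally supportive AI."):
        print(f"--- system prompt: {system} ---")
        print(local_chat(system, question), "\n")
```

Running it with the two system prompts back to back is the comparison footnote [2] suggests.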
Too many computers can do AGI
As the experiment above with an LLM on your own computer may have convinced you, ordinary personal computers can act in ways that we consider intelligent, such as holding a conversation, playing chess, or doing arithmetic (or at least doing things we used to think required intelligence, until we got computers to do them). There are plenty of estimates that simulating a human brain is not possible on an ordinary personal computer, but that doesn't really tell us what is required for human-equivalent intelligence. As Steve Byrnes pointed out[4], performing AGI by simulating a human brain’s neurons is similar to multiplying numbers by “do[ing] a transistor-by-transistor simulation of a pocket calculator microcontroller chip, which in turn is multiplying the numbers.” Byrnes estimated that AGI could probably be done on an NVIDIA RTX 4090 and 16 GiB of RAM[4:1]. Eliezer Yudkowsky estimated that AGI could be done on a home computer from 1995[5]. As for myself, I think Yudkowsky and Byrnes are making reasonable claims, and you might have to restrict hardware to circa-1985 home computer levels to be sure that AGI can't run on it[6]. So if these estimates are correct, computers capable of running AGI are widely available, and people are running AI programs on computers much more powerful than needed to run AGI.
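To give a feel for how far apart those hardware thresholds are, here is a rough back-of-the-envelope comparison. The figures are my own order-of-magnitude ballparks, not numbers taken from the sources cited above; the point is the spread between the machines, not the exact digits.

```python
# Rough order-of-magnitude peak-compute ballparks (my own assumptions, not figures
# from the cited sources). The point is the spread, not the precise numbers.
machines_flops = {
    "circa-1985 home computer": 1e5,     # ~0.1 MFLOPS class machine (assumption)
    "circa-1995 home computer": 1e8,     # ~100 MFLOPS class early Pentium (assumption)
    "PC with an NVIDIA RTX 4090": 8e13,  # ~80 TFLOPS FP32 (approximate published spec)
}

baseline = machines_flops["PC with an NVIDIA RTX 4090"]
for name, flops in machines_flops.items():
    note = "(baseline)" if flops == baseline else f"(~{baseline / flops:.0e}x less peak compute)"
    print(f"{name:>28}: ~{flops:.0e} FLOP/s  {note}")
```

Even on these crude numbers, the gap between "a 1995 home computer suffices" and "restrict hardware to 1985 levels" spans many orders of magnitude, which is part of why the uncertainty discussed below matters.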
Preventing AGI requires restricting both Research and Computers
If the hardware capable of creating AGI is already widely available, the remaining alternative is to prevent the knowledge of AI techniques from being available and from being used. Unfortunately, restricting hardware is hard, and restricting information is harder.
Large numbers of unknowns
One final challenge is the unknowns. How dangerous is AGI? How close are we to creating the software? How much hardware is needed for AGI? How dangerous is ASI? How close are we to ASI? Does AGI go foom into ASI quickly? Acting under this uncertainty is vastly more challenging than it would be if we knew the answers to these questions.
Personally, I think stopping AGI looks nearly impossible, and stopping ASI[7] from happening in the near term looks like it would be the hardest thing humans have ever accomplished.
I am curious what others think, however. How do we make it through the next few years to the other side of having AGI, or how do we prevent creating AGI (or ASI) if that turns out to be necessary?
So far as the "should" is concerned, see https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem ↩︎
I recommend trying Ollama: https://github.com/ollama/ollama or llama.cpp: https://github.com/ggerganov/llama.cpp on your own computer with Gemma3:1b or gpt-oss-20b (gemma3 requires about 4 GiB, gpt-oss about 16 GiB). If you are using llama.cpp, the 4-bit quantized versions (such as gemma-3-1b-it-Q4_K_M.gguf from https://huggingface.co/unsloth/gemma-3-1b-it-GGUF ) usually work pretty well. Another fun thing to try is comparing the difference between system prompts like "You are a helpful assistant" versus "You are a chatty emotionally supportive AI". ↩︎
The standard form of the Turing test requires the computer to deceive a human into believing that the computer is human. A non-deception version just requires the computer to convince a human that the computer is at least as smart as a human. And as Turing himself said at an AT&T cafeteria: "No, I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company." (From Alan Turing: The Enigma, by Andrew Hodges, p. 316) ↩︎
https://www.alignmentforum.org/posts/LY7rovMiJ4FhHxmH5/thoughts-on-hardware-compute-requirements-for-agi ↩︎ ↩︎
https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/ ↩︎
https://www.researchgate.net/publication/388398902_Memory_and_FLOPS_Hardware_limits_to_Prevent_AGI ↩︎
The proposed treaty at ifanyonebuildsit is a useful look at what might be needed to prevent ASI. Remember, this needs to be a more or less worldwide treaty, and in almost all ways would be harder to do than global nuclear weapon disarmament: https://ifanyonebuildsit.com/treaty ↩︎