[ Question ]

Implication of Uncomputable Problems

by Nathan1123
30th Jan 2025

Some problems in mathematics, such as the Halting Problem and the Busy Beaver function, are uncomputable, meaning it is mathematically proven that no Turing-complete computer can solve them in general, no matter how sophisticated its hardware or software. Algorithms can settle special cases of these problems, but the general case is provably unsolvable. In the case of Busy Beavers, the small values were established by checking the possible machines one by one, but this approach cannot reach BB(6): deciding whether each machine halts is an instance of the Halting Problem itself, and BB(6) is already known to be vastly larger than the number of particles in the observable universe.

Wikipedia has a full list of undecidable problems.

To what extent does this suggest that an AGI will be limited in its capabilities, even in the most optimistic scenario of AI "takeoff"? Could this present an exploitable weakness that could be used to keep an uncooperative AI contained or shut down?
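To make the undecidability concrete: the classic diagonalization argument behind the Halting Problem can be written as a short sketch. The code below is illustrative only; `halts` is a hypothetical oracle that provably cannot be implemented, and the construction shows why.

```python
def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) eventually halts.

    This is the assumption the argument refutes; no correct general
    implementation can exist, so the stub just raises.
    """
    raise NotImplementedError


def diagonal(program):
    """Does the opposite of whatever the oracle predicts about self-application."""
    if halts(program, program):
        while True:      # oracle said "halts" -- so loop forever
            pass
    else:
        return           # oracle said "loops" -- so halt immediately


# Consider diagonal(diagonal). If halts(diagonal, diagonal) returns True,
# then diagonal(diagonal) loops forever; if it returns False, it halts.
# Either way the oracle is wrong on this input, so no such halts() can exist.
```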
3 comments, sorted by top scoring

[-] Viliam · 7mo · 2

The number of particles in the universe may keep an AI from figuring out BB(6), but it limits humans, too.

Whether we can shut down the AI by telling it "hey, you can't calculate BB(6), why don't you kill yourself?" probably depends on the specific architecture, but it seems to me that the AI probably just won't care.

[-] Nathan1123 · 7mo · 1

I didn't mean it to be so simplistic. I am just considering that if there is a known limitation of AI, one that holds no matter how powerful the AI is, it could be used as the basis of a system the AI could not circumvent. For example, a shutdown system where the only way to disable it requires solving the Halting Problem.
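One way to make this idea concrete is a toy gate keyed to a halting certificate. Everything below is a hypothetical illustration, not a real mechanism: "programs" are encoded as Python generator functions with one yield per computation step, and `ShutdownGate` and `halts_within` are invented names.

```python
def halts_within(program, max_steps):
    """Simulate a generator-based 'program' for at most max_steps steps."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)             # advance the program by one step
        except StopIteration:
            return True          # it halted within the budget
    return False                 # still running after max_steps


class ShutdownGate:
    """Stays armed unless shown a valid halting certificate for `program`.

    A certificate is a step count n such that the program halts within n
    steps. A true certificate is cheap to verify by simulation, but deciding
    whether any certificate exists for an arbitrary program is exactly the
    Halting Problem.
    """
    def __init__(self, program):
        self.program = program
        self.armed = True

    def disarm(self, claimed_steps: int) -> bool:
        if halts_within(self.program, claimed_steps):
            self.armed = False
        return not self.armed


def never_halts():
    while True:
        yield                    # one simulated step per iteration

gate = ShutdownGate(never_halts)
gate.disarm(10_000)              # no finite certificate can succeed here
assert gate.armed                # still armed
```

The asymmetry does the work in this sketch: verifying a halting claim is easy, but producing one for an arbitrary program, or knowing that none exists, is undecidable in general.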

[-] Viliam · 7mo · 2

If you knew how to build such a shutdown system, you could probably also build one that cannot be disabled at all (e.g. one that would require solving a literally impossible problem, like proving that 1 = 0).
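In the same toy terms as the sketch above (still purely hypothetical), a gate keyed to an impossible condition is behaviorally identical to one with no disarm mechanism at all:

```python
class ImpossibleGate:
    """Disarm requires a 'proof' of a false statement, e.g. 1 = 0.

    Since no valid proof can exist, this is observationally the same as
    hard-coding `return False` -- a gate that can never be disabled.
    """
    armed = True

    def disarm(self, proof) -> bool:
        return False             # no argument can ever satisfy this check
```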

