[ Question ]

The universality of computation and mind design space

by alanf · 1 min read · 12th Sep 2020 · 7 comments


Computer Science · AI · Frontpage

A Turing machine is a universal computer: it can compute anything that any other computer can compute. A human being can specify a Turing machine and the data it's acting on and carry out the steps that the machine would execute. Human beings have also constructed computers with the same repertoire as a Turing machine, such as the computer on which I am writing this question. There are articles on Less Wrong about mind design space, such as this one:
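To make the claim concrete: carrying out a Turing machine's steps by hand is exactly what the sketch below mechanizes. The `run_turing_machine` helper and the bit-flipping machine are illustrative inventions for this post, not a reference implementation.

```python
# Minimal sketch of a Turing machine simulator. A person could carry out
# these same steps with pencil and paper, which is the sense in which a
# human can "specify a Turing machine and carry out the steps".

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """Run a TM given as a table {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine (an assumption for illustration): scan right,
# inverting 0 <-> 1, and halt at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip, "0110"))  # -> 1001_
```

Each loop iteration is one table lookup, one write, and one head move, so nothing here is beyond pencil and paper; it is just slow, which is the point several answers below pick up on.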

https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general

in which the author writes:

The main reason you could find yourself thinking that you know what a fully generic mind will (won't) do, is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer.

But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.

So what is the Less Wrong position on whether we could understand AIs and how is that claim compatible with the universality of computation?


4 Answers

But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.

This is straightforwardly incorrect. Humans (in 2020) reasoning about what future AIs will do, do not have the source code or full details of those AIs, because they are hypothetical constructs. Therefore we can't simulate them. This is the same as why we can't predict what another human would do by simulating them; we don't have a full-fidelity scan of their brain, or a detailed-enough model of what to do with such a scan, or a computer fast enough to run it.

Being able to run or emulate a program doesn't imply that I understand it. If I have an executable, I need to decompile it or have its source provided to me, and even then I need to study it quite a bit. If I step through it with pen and paper, I am not guaranteed to gain any more insight than I would by running it on an external computer.
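The gap between executing and understanding can be shown directly. In the sketch below (the `mystery` function is a made-up example, not from the answer), we can both run a function and list its low-level instructions, yet neither activity by itself tells us what it computes.

```python
# Executing a program step by step is not the same as understanding it.
import dis

def mystery(n):
    # A deliberately unobvious implementation.
    return (n & (n - 1)) == 0 and n > 0

print(mystery(64))   # We can faithfully run it...
dis.dis(mystery)     # ...and inspect every bytecode instruction, yet
                     # recognizing that it tests "is n a power of two?"
                     # still takes actual analysis of the logic.
```

Stepping through the bytecode by hand is precisely the "pen and paper" emulation the answer describes: possible, but no shortcut to insight.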

There is also the distinction between a specific program and programs in general. For example, "programs will halt" is wrong, although the question of whether this or that particular program halts can be right or wrong. There are few properties you can deduce about a program simply from its being a program. "Programs have loops" can be a good inductive generalization about programs found in the wild, but it is a terrible description of a general program.
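The specific-versus-general distinction is the heart of the halting problem, and the classic diagonal argument can be sketched in a few lines. The `halts` oracle below is a hypothetical assumption introduced only to derive a contradiction; no such total function can exist.

```python
# Sketch of why no general "does this program halt?" decider exists.
# Suppose, hypothetically, halts(f) returned True iff calling f() halts.

def make_contrarian(halts):
    def contrarian():
        if halts(contrarian):   # If the oracle says we halt...
            while True:         # ...loop forever instead;
                pass
        # ...otherwise, halt immediately.
    return contrarian

# Whatever halts(contrarian) answers, contrarian does the opposite, so no
# correct halts() can exist. Specific programs, by contrast, are often
# trivial to classify:

def halts_trivially():
    return 42          # plainly halts

def loops_forever():
    while True:        # plainly doesn't
        pass
```

So "will this program halt?" is perfectly answerable for many individual programs while being undecidable for programs in general, which is exactly the asymmetry the answer is pointing at.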

This is mostly a quantitative issue.

If you define a UTM as having infinite capacity, then a human is not a UTM.

If you are talking about finite TMs, then a smaller finite TM cannot emulate a larger one. A larger finite TM might be able to emulate a smaller one, but cannot necessarily do so. A human cannot necessarily emulate a TM with less total processing power than a human brain, because a human cannot devote 100% of their cognitive resources to the emulation. Your brain is mostly devoted to keeping your body going.

This can easily be seen from the history and methodology of programming. Humans have a very limited ability to devote their cognitive resources to emulating low-level computation, so programmers found it necessary to invent high-level languages and tools to minimise their disadvantages and maximise their advantages in higher-level thought and pattern recognition.

Humans are so bad at emulating a processor executing billions of low level instructions per second that our chances of being able to predict an AI using that technique in real time are zero.
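A back-of-envelope calculation makes "zero" vivid. The figures below are order-of-magnitude assumptions (a CPU core retiring ~10^9 instructions per second; a person hand-simulating perhaps one machine step per second), not measurements.

```python
# Rough arithmetic on the human-vs-CPU emulation gap (assumed figures).
cpu_steps_per_sec = 1e9      # assumption: ~a billion instructions/second
human_steps_per_sec = 1.0    # assumption: one pencil-and-paper step/second

slowdown = cpu_steps_per_sec / human_steps_per_sec
years_per_cpu_second = slowdown / (60 * 60 * 24 * 365)

print(f"slowdown factor: {slowdown:.0e}")
print(f"hand-simulating one CPU-second takes ~{years_per_cpu_second:.0f} years")
```

Under these assumptions, emulating a single second of the AI's computation by hand would take on the order of thirty years, so real-time prediction by hand-simulation is not merely hard but out of the question.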

The "Less Wrong position"? Are we all supposed to have 1 position here? Or did you mean to ask what EY's position is?

I don't think I understand your statement/question (?) - In order to know what an AI would do, you just need to simulate it with an AI?

I think you're saying that you could simulate what an AGI would do via any computer. If you're simulating an AGI, are you not building an AGI?