Does building a computer count as explaining something to a rock?


(If we still had open threads, I would have posted this there. As it is, I figure this is better than not saying anything.)


Does killing you and building a computer out of your atoms count as explaining something to you?

Only if the computer then runs an instance of me that understands something the most recent instance didn't.

This question needs to be dissolved; otherwise it just leads to arguing over the definition of "you".

Or killing you and writing a message with your guts?

I'll ask the question so many people here hate to be asked: why is this being downvoted?


What does it mean to explain something, and what does it mean to understand something?

If you explain something to a human being, you actively alter the chemical/atomic configuration of that human. The sort of change caused by nurture becomes even more apparent if you consider how explanations can alter the course of a human life. If you explained the benefits of daily physical exercise to a teenager, it could dramatically change the bodily, including neurological, makeup of that human being.

Where do you draw the line? If you "explain" to an AI how to self-improve, the outcome will be an even more dramatic change than a rock being converted into a Turing machine.

What would happen to a human being trying to understand as much as possible over the next 10^50 years? The upper limit on the information that can be contained within a human body is about 2.5072178×10^38 megabytes, even if it were used as perfect data storage. Consequently, any human living for a really long time would eventually have to turn "rock" into memory in order to keep understanding more.
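(The megabyte figure isn't sourced above; I assume it comes from applying the Bekenstein bound to a human body. Here is a rough sketch of how a number of that order can be reproduced, where the 70 kg mass and 1 m bounding radius are my own illustrative assumptions:)

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits, with E = m * c^2,
# which simplifies to I <= 2*pi*c*m*R / (hbar * ln 2).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299792458.0          # speed of light, m/s

def bekenstein_bound_bits(mass_kg: float, radius_m: float) -> float:
    """Upper bound on the information content of a region of given mass and radius."""
    return 2 * math.pi * C * mass_kg * radius_m / (HBAR * math.log(2))

# Illustrative values for a human body (assumptions, not stated in the comment):
bits = bekenstein_bound_bits(mass_kg=70.0, radius_m=1.0)
print(f"{bits:.2e} bits  ~  {bits / 8 / 1e6:.2e} megabytes")
# Roughly 1.8e45 bits, i.e. on the order of 10^38 megabytes.
```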

Similarly, imagine I were able to use a brain-computer interface to integrate an external memory store into your long-term memory. If that external memory contained certain patterns that you could then use to recognize new objects, why wouldn't this count as an explanation of what constitutes those objects?


ETA

Before someone misinterprets what I mean: explaining friendliness to an AI obviously isn't enough to make it behave in a friendly way. But one can explain friendliness, even to a paperclip maximizer, by implementing it directly or by writing it down on paper and making the AI read it. Of course, that doesn't automatically make it part of the AI's utility function.

It is not a "sequence of justifications," so I wouldn't call it explaining.