[anonymous]

Minds are not chronologically commutative with respect to input data. Reading libertarian philosophy followed by Marxist philosophy will give you a different connectome than reading them in the reverse order. As a result, you will have distinct values in each scenario and act accordingly. Put another way, human values are extremely dependent on initial input parameters (your early social and educational history). Childhood brainwashing can give the resulting adult arbitrary values (as evinced by such quirks as suicide bombers and voluntary eunuchs). However, by producing such a malleable organism, evolution hit upon a very cute trick that allows for seemingly impossible computation (the development of mathematics, science, etc.).
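As a toy sketch of this order-dependence (the update rules and numbers below are entirely hypothetical stand-ins, not a model of real neural development), consider a belief state transformed by two operations that do not commute:

```python
# Toy illustration: when updates to a state do not commute,
# the order of inputs determines the final state.
# The specific update rules are arbitrary: each "reading"
# halves the prior belief and pulls it toward its own pole.

def read_libertarian(beliefs):
    # Dampen the prior, then shift toward +1.
    return {k: v * 0.5 + 1.0 for k, v in beliefs.items()}

def read_marxist(beliefs):
    # Dampen the prior, then shift toward -1.
    return {k: v * 0.5 - 1.0 for k, v in beliefs.items()}

initial = {"economic_view": 0.0}

lib_then_marx = read_marxist(read_libertarian(initial))
marx_then_lib = read_libertarian(read_marxist(initial))

print(lib_then_marx)  # {'economic_view': -0.5}
print(marx_then_lib)  # {'economic_view': 0.5}
# Same two inputs, different order, different final "connectome".
```

Whichever input arrives first gets dampened more by the time the second arrives, so the later input dominates; the point is only that order-sensitive update rules make the end state path-dependent.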

I assume that in the definition of GAI, it is implicit that the AI can do mathematics and science as well as or better than humans can, in order to achieve goals that require a physical restructuring of reality. Since the only example of a computational process capable of generating these things (humans) is so malleable in its values, what basis (mathematical or otherwise) does the SIAI have for assuming that Friendliness is achievable? Keep in mind that a GAI should be able to think and comprehend everything humans can and have thought (including the architectural problems in Friendliness), or at least something functionally isomorphic.

[This comment is no longer endorsed by its author]

Q&A #2 with Singularity Institute Executive Director

by lukeprog · 1 min read · 13th Dec 2011 · 48 comments



Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.


The Rules (same as before)

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk.

4) Please provide links to anything referenced by your question.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.


I might respond to certain questions in the comments thread; for example, when a question has a one-word answer.

You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.