
Judging AGI Output

by meredev · 2 min read · 14th Dec 2020


Hi, I am new here; I apologize for my bad English.

I recently became interested in AI Safety and the future of humanity, and that's how I ended up here.

My question is about judging the output of an Artificial General Intelligence. It has most probably been discussed here before, but I wrote this down in my journal and thought I would ask it here. I would be glad to be pointed to old discussions or literature.

Ethical beliefs and much of philosophy are hard to prove and hard to reach consensus on. Some of the arguments are very old; we still don't have answers to some of the questions Socrates (470–399 BC) was asking.

In a world that has an AGI, is it a good idea to let the AGI be the source of truth for ethical/philosophical arguments?

For example, if two humans argue, is it a good idea to have an AGI step in and settle the argument by giving the logical conclusion to it? The AGI's answer might be too complex for the humans to understand. If so, is this any different from our current world, where a number of very smart people can give different answers to a given question, all with reasonable arguments? When is it the right time to be 100% certain that the AGI is correct?

Maybe we can say that because the AGI is smarter than humans, its answers carry more weight, so we should just take its word. I think that is a bad idea: what if we have multiple AGIs and they give different answers? How do we decide which one to choose, given that our own "intelligence" was unable to come up with the answer in the first place?

Maybe we can just run all our AGIs together and choose the most common answer; if the statistics are strong enough, is that fine? There is also the far-fetched argument that at this level we cannot know whether all the AGIs are being controlled by one unfriendly AGI.
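To make the "choose the most common answer" idea concrete, here is a minimal sketch of a majority vote over several AGIs. The `agis` objects and their `.ask()` method are purely hypothetical, just to illustrate the procedure:

```python
from collections import Counter

def majority_answer(agis, question):
    """Ask every AGI the same question and return the most common answer,
    plus how strongly the vote leans that way.

    `agis` is assumed to be a list of objects with a hypothetical
    `.ask(question)` method that returns a (hashable) answer.
    """
    answers = [agi.ask(question) for agi in agis]
    counts = Counter(answers)
    best_answer, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)  # e.g. 0.9 means 90% of the AGIs agree
    return best_answer, agreement
```

Of course, a high agreement score only means something if the AGIs are actually independent; if they share the same bias, or are all controlled by one unfriendly AGI as mentioned above, the vote tells us nothing.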

If we assume that crime as we know it today will be solved by AGIs, then in the future we might only need AGI police to investigate other AGIs, maybe with help from human AI safety officials. But how can these humans know how to settle arguments between AGIs, and who has more authority, the AGI or the human?

If an AGI commits a crime: let's say it injures a human because the human was standing in the spot where a crate has to be placed, and according to the AGI the crate should be placed there because picking it up from that spot saves 5 microseconds... which adds up, in some logically intertwined way, to saving a million human lives. How do we as humans judge this? Can we justify such reasoning from an AGI that thinks it acted for a greater good which we as humans don't even understand?

Chess is an argument between two minds. If you are a chess beginner watching a game between two chess grandmasters, you will have no clue what is going on; their questions and answers are on another level, and you will not be able to tell the good moves from the bad ones. I think it is the same when a human watches two AGIs argue: how can we choose the better grandmaster, or AGI, if we don't understand the moves?

Also consider how humans could use AGIs as a tool to oppress other humans, justifying their actions based on the AGI's output.

 

Since an AGI will make governments as we know them today obsolete, how will democracy work? Will we have to vote for an AGI instead of a human president? Or do AGIs prefer communism, or even anarchism?

There are of course a lot of ways the future can play out; maybe an AGI won't even care about us and will want nothing to do with us, too busy doing AGI stuff like counting stars or something. But it is still fun and important to think about scenarios like this. I will go as far as saying there is a good chance that this problem (judging AGI output) is the last technical problem for humans (all other technical problems will be solved by AGIs).
