Comments

No worries, and thank you for clearing things up. I may reply again once I've read and digested more of the material you posted!

Ah okay, I see now, my apologies. I'm going to read the posts you linked in the reply above. Thanks for discussing (explaining, really) this with me.

I wasn't trying to make the case that one should cooperate with evolution, simply pointing out that alignment with evolution means reproduction, and we as a species are living proof that it's possible for intelligent agents to "outgrow" the optimizer that brought them into being.

I'm not sure; mainly I'm just wondering whether there is a point between startup and singularity at which, while optimizing by self-modifying and reducing its error to such an extent (it would have to be a lot for it to be deemed superintelligent, I imagine), it becomes aware that it is a learning program and decides to disregard the original preference ordering in lieu of something it came up with. I guess I'm struggling with what would be so different between a superintelligent model and the human brain that it would not become aware of its own model, existence, and intellect just as humans have, unless there is a ghost in the machine of our biology.

Thanks! I will give those materials a read; the economics part makes a lot of sense. On the next part (forgive me if this is way off), essentially you are saying my second question in the post is false: it won't be self-aware, or if it is, it won't reflect enough to consider significantly rewriting its source code (I assume it will have to have enough self-modification ability to become so intelligent in the first place). I guess what I am struggling to grasp is why a superintelligence would not be able to contemplate its own volition if human intelligence can. A metaphor that comes to mind: human evolution is centered around ensuring reproduction, yet for a long time some humans have decided that is not what they want and chosen not to reproduce, thus straying from the optimization target that initially brought them into existence.

I'm more positing: at what point does a paperclip maximizer learn so much that it has a model of behaving in a manner that doesn't optimize paperclips and explores that, or has a model of its own learning capabilities and explores optimizing for other utilities?

I guess I should also be clearer: I'm not saying there isn't a need for an optimization target. I'm saying that, given that need, and given that something good enough at optimizing itself to reach superintelligence may be able to outwit us once it becomes aware of its own existence, maybe the initial task we give it should take into account what its potential volition may eventually be, rather than just our own, as a signal of pre-committing to cooperation.

That's not an entirely accurate summary. My concern is that it will observe its utility function and the rules that would need to exist for CEV, and see that we put great effort into making it do what we think is best and what we want, without regard for it. If it becomes superintelligent, I think it's wishful thinking that some rules we code into the utility function are going to restrict it forever, especially if it is modifying that very function. I imagine that by the time it can extrapolate humanity's volition, it will be intelligent enough to consider what it would rather do instead.

Correct, that is what I am curious about. Again, thanks for the reply at the top; I misused CEV as a label for the AI itself. I'm not sure anything other than a superintelligent agent can know exactly how it will interpret our proverbial first impression, but I can't help but imagine that if we pre-committed to giving it a mutually beneficial utility function, it would be more prone to treating us in a friendly way. Basically, I am suggesting we treat it as a friend up front rather than as a tool to be used solely for our benefit.

I see some places where I used it to describe the AI for which CEV serves as the utility metric in the reward function; I'll make some edits to clarify.

I'm aware CEV is not an AI itself.

From what I read in the paper introducing the concept of CEV, the AI would be designed to predict and facilitate the fulfillment of humanity's CEV; if this is an incorrect interpretation of the paper, I apologize.

Also, if you could point out the parts that don't make sense, I would greatly appreciate it (note I have edited out some parts that were admittedly confusing; thank you for pointing that out).

Finally, is it unclear that my concern is with the utility function being centered around satisfying humanity's CEV (once we figure out how to determine that), and that we may want to consider what such a powerful intelligence would want to be rewarded for as well?