Comments

an · 16y · 10

Ian C. is implying an AI design where the AI would look at its programmers and determine what they really had wanted to program the AI to do - what they would have programmed if they were perfect programmers - and then do that.

But that very function would have to be programmed into the AI. My head, for one, spins in self-referential confusion over this idea.
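
A toy way to see the circularity (a minimal Python sketch; every name in it is hypothetical, not from any actual design):

```python
# Hypothetical sketch of the regress, not a real proposal.

def extrapolate_intent(programmers):
    """Return what the programmers really wanted to program,
    i.e. what they would have written as perfect programmers."""
    # This function is itself written by those same imperfect
    # programmers. If they got THIS function wrong, it cannot
    # notice, because the standard it checks against is itself.
    raise NotImplementedError

def ai_goal(world_state, programmers):
    # Every guarantee about the AI's goal bottoms out in the
    # correctness of extrapolate_intent, which nothing underwrites.
    ideal_program = extrapolate_intent(programmers)
    return ideal_program(world_state)
```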

an · 16y · 30

I didn't know that EY's purpose with this blog was to recruit future AI researchers, but since it is, I for one am someone with whom he has succeeded.

an · 16y · 20

One very funny consequence of defining "fair" as "that which everyone agrees to be fair" is that if you could indeed convince everyone of the correctness of that definition, nobody could ever know what IS fair: they would look at their definition of "fair", which is "that which everyone agrees to be fair"; then they would look at what everyone does agree to be fair, and conclude that "that which everyone agrees to be fair" is "that which everyone agrees to be fair", and so on!
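
A toy illustration of that regress (a minimal Python sketch; the names other than Zaire are made up): if everyone's test for "fair" simply defers to everyone's test for "fair", evaluation never reaches a base case.

```python
# Toy model: "fair" = "that which everyone agrees to be fair".

def is_fair(outcome, people, depth=0):
    if depth > 5:  # guard so the demo halts instead of recursing forever
        raise RecursionError("the definition never bottoms out")
    # Each person's opinion of "fair" just defers to the consensus again...
    return all(is_fair(outcome, people, depth + 1) for _ in people)

try:
    is_fair("split the pie three ways", people=["Zaire", "Alice", "Bob"])
except RecursionError as err:
    print(err)  # circular definition: there is no base case to ground it
```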

However, I think that in this post you are spilling too much ink over a trivial thing - you are too attached to the word "fair". One of my favourite rationalist techniques is to not be attached to particular symbols at all, but only to referents. You could answer Zaire simply by saying, "Alright, I accept that your way is 'fair'; however, I propose a better way that we can call 'riaf'", and then explain your referent of the symbol "fair" and why it is better than Zaire's way.

an · 16y · 00

I stopped to answer your definitional questions while reading. I defined "arbitrary" as "some variable in a system of justifications where the variable could be anything and be equally justified regardless of what it is". I defined "justification" as "the belief that the action being justified will directly or indirectly further the cause of the utility function in whose terms it is defined, and does so more effectively than any other action; for beliefs, the belief that the belief being justified will reflect the territory in the most accurate way possible" (I hope I'm not passing the buck here).

an · 16y · 30

When you dream about an apple, though, can you be said to recognize anything? No external stimulus triggers the apple-recognition program; it just happens to be triggered by unpredictable, tired firings of the brain, and your starting to dream about an apple is the result of its being triggered in the first place, not the other way around.

an · 16y · 00

Confusing - now the central question of rationality is no longer "why do you believe what you believe?"

an · 16y · 10

The point is: even in a moralless, meaningless, nihilistic universe, it all adds up to normality.

an · 16y · 10

Pablo Stafforini: 'A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would willingly torture others to avoid a minor discomfort to themselves, I am reminded of Kieran Healy's remarks about Mensa, "the organization for highly intelligent people who are nevertheless not quite intelligent enough not to belong to it." If you are so smart that you can see through the illusion that is morality, don't be so stupid to take for granted the validity of practical rationality. Others may not matter, but if so you probably don't either.'

Morality is a tool for self-interest. Acting cooperatively was good for you in the ancestral environment, so people who had strong moral feelings did better. People who are under the illusion that action "should" have a rational basis construct rationalizations for morality, because they want to act morally for reasons that have nothing to do with rationality.

Self-interest is no more rational than moral behaviour. People also pursue self-interest because that's just how their genes have wired their monkey brains to work.

A being of pure rationality and no desires would do nothing. Many people apparently think that it could come to a conclusion about what to do by discovering some universal "should" through rational deliberation, but that's wrong.

This is existentialism 101, I know, but it's also true.
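
To make the "no desires, no action" point concrete, here is a minimal Python sketch (the action names are only illustrations): an expected-utility maximizer whose utility function is flat is indifferent between all options, so its "choice" comes from an arbitrary tie-break rather than from rationality.

```python
# Minimal sketch: an agent with no desires has a constant utility function.

def choose(actions, utility):
    # Rationality only ranks options. If every option scores the same,
    # max() returns whichever tied element happens to come first:
    # a tie-break, not a reason.
    return max(actions, key=utility)

actions = ["set all motors to 0", "set all motors to 1", "act at random"]
flat_utility = lambda a: 0  # no desires: every outcome is worth the same

print(choose(actions, flat_utility))  # prints the first option, by position only
```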

On the other hand, I can't imagine what would make me skeptical about practical rationality. The point of it is that it works in predicting my experience, and I seem to desire to know about that which determines my experience. Showing that practical rationality is wrong would be an empirical matter of showing that it doesn't work.

an · 16y · 00

'You can't rationally choose your utility function.' - I'm actually expecting Eliezer to write a post on this; it's a core issue when thinking about morality etc.

an · 16y · 00

James Andrix 'Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.'

Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.
