Sure. But would you still feel the need to replace it if you lived in a world where it wasn't taught in school in the first place? Would you yearn for something like it?
But you do live in a universe that is partly random! The universe of perceptions of a non-omniscient being.
By "independent" I don't mean bearing no relationship to each other whatsoever, but simply that pairs of instants that are closer to each other are no more correlated than pairs that are more distant. "But what does closer mean?" For you to entertain the hypothesis that life is an iid stream of sense data, you have to take the basic sense that "things are perceived by you one after another" at face value. "But a fundamental part of our experience of time is the higher correlation of closer instants. If this turned out to be an illusion, then shouldn't we ...
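To make the statistical claim concrete, here is a minimal sketch (assuming Python with numpy; nothing here is from the thread, it's purely illustrative). In an iid stream the sample correlation between instants is near zero at every lag, while in an ordinary autocorrelated process it decays with distance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

iid = rng.normal(size=n)            # BLEAK: every instant drawn independently
noise = rng.normal(size=n)
ar1 = np.empty(n)                   # ordinary time: each instant leans on the last
ar1[0] = noise[0]
for t in range(1, n):
    ar1[t] = 0.9 * ar1[t - 1] + noise[t]

def autocorr(x, lag):
    """Sample correlation between x[t] and x[t + lag]."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for lag in (1, 10, 50):
    print(f"lag {lag:>2}: iid {autocorr(iid, lag):+.3f}   ar1 {autocorr(ar1, lag):+.3f}")
```

The iid column hovers around zero at every lag; the AR(1) column decays like 0.9^lag. That "closer means more correlated" structure is exactly what BLEAK throws away.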
I mean, yeah, it depends, but I guess I worded my question poorly. You might notice I start by talking about the rationality of suicide. Likewise, I'm not really interested in what the AI will actually do, but in what it should rationally do given the reward structure of a simple RL environment like CartPole. And now you might say, "well, it's ambiguous what the right way is to generalize from the rewards of the simple game to the expected reward of actually being shut down in the real world," and that's my point. This is what I find so confusing. Because th...
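For concreteness, here's a minimal sketch of what I mean by CartPole's reward structure (assuming the gymnasium package; this is just the standard environment, not anything from the thread). Reward is +1 per surviving step, and being "shut down" is simply the end of reward accumulation, which is exactly why the right generalization outside the game is underdetermined:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total = 0.0
while True:
    action = env.action_space.sample()            # random policy, for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward                               # +1 for every step survived
    if terminated or truncated:                   # pole fell, or time limit hit
        break

print(f"episode return: {total}")                 # longer survival -> higher return
env.close()
```

Inside the game, shutdown is just a zero-reward absorbing state; the environment is silent about what shutdown is worth anywhere else.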
"If the survival of the AGI is part of the utility function"
If. By default, it isn't: https://www.lesswrong.com/posts/Z9K3enK5qPoteNBFz/confused-thoughts-on-ai-afterlife-seriously
"What if we start designing very powerful boxes?"
A very powerful box would be very useless. Either you leave enough of an opening for a human to be taught valuable information that only the AI knows, or you don't, and then it's useless. But if the AI can teach the human something useful, it can also persuade him to do something bad.
"human pain aversion to the point of preferring death is not rational" A straightforward denial of the orthogonality thesis? "Your question is tangled up between 'rational' and 'want/feel's framings" Rationality is a tool to get what you want.
I see the Nash equilibrium as rationally justified in a limit-like sort of way: it's what you get as you come arbitrarily close to perfect rationality. Having a good enough model of another's preferences is something you can actually achieve, or almost achieve, but you can't really have a good enough grasp of your opponent's source code to acausally coerce him into cooperating with you unless you have God-like knowledge (or maybe if you are in a very particular situation, such as something involving AI and literal source code). In proportion as...
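A small illustration of why preferences alone are an achievable basis for equilibrium reasoning: finding the pure-strategy Nash equilibria of a game needs only the payoff matrix, nothing about the opponent's source code. This is a sketch with the standard prisoner's dilemma payoffs, which are my choice for illustration, not anything from the thread:

```python
from itertools import product

C, D = 0, 1
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    (C, C): (3, 3), (C, D): (0, 5),
    (D, C): (5, 0), (D, D): (1, 1),
}

def is_nash(a, b):
    """Neither player gains by unilaterally deviating."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in (C, D))
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in (C, D))
    return row_ok and col_ok

print([ab for ab in product((C, D), repeat=2) if is_nash(*ab)])  # [(1, 1)], i.e. (D, D)
```

Mutual defection falls out of the payoffs alone; getting to cooperate/cooperate is what would need the source-code-level, acausal grip that I'm saying we almost never have.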
Thanks. I now see my mistake. I shouldn't have subtracted the expected utility of the current state from the expected utility of the next.
Shooting while the opponent blocks should yield u(0,0), right?
Well, I could make a table for the state where no one has any bullets, but it would just have one cell: both players reload and they go back to having one bullet each. In fact, the game actually starts with no one having any bullets, but I omitted this step.
Also, in both suggestions, you are telling me that the action that leads to state x should yield the expected utility of state x, which is correct, but my function u(x,y) yields the expected utility of the resulting state assuming that you'...
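To make the corrected rule concrete (an action's value is the expected utility of the resulting state, not that minus the value of the current state), here's a rough Python sketch. The thread doesn't fully pin down the rules, e.g. whether shooting into a block sends (1,1) to (0,1) or (0,0), so the transition function below is my assumption (at most one bullet each; shooting a reloading opponent wins; a blocked shot is wasted). Both players are modeled as acting uniformly at random just to keep the sketch short, with a discount to handle the block/block self-loop:

```python
from itertools import product

GAMMA = 0.9  # discount, so the block/block self-loop doesn't blow up

def actions(bullets):
    # with a bullet you can shoot or block; without one, reload or block
    return ("shoot", "block") if bullets else ("reload", "block")

def step(state, a, b):
    """Return (payoff to player 1, next state); next state is None if terminal."""
    x, y = state
    if a == "shoot" and b == "shoot":
        return 0.0, None                      # trade: both hit, a wash
    if a == "shoot":
        if b == "reload":
            return 1.0, None                  # caught reloading: player 1 wins
        return 0.0, (0, y)                    # blocked: the shot is wasted (assumed)
    if b == "shoot":
        if a == "reload":
            return -1.0, None
        return 0.0, (x, 0)
    nx = min(1, x + (a == "reload"))          # at most one bullet each (assumed)
    ny = min(1, y + (b == "reload"))
    return 0.0, (nx, ny)

# value iteration: V(s) under uniform-random play by both sides
V = {s: 0.0 for s in product((0, 1), repeat=2)}
for _ in range(200):
    for s in V:
        pairs = [(a, b) for a in actions(s[0]) for b in actions(s[1])]
        V[s] = sum(
            r + (GAMMA * V[nxt] if nxt else 0.0)
            for r, nxt in (step(s, a, b) for a, b in pairs)
        ) / len(pairs)

# u at state (1, 1): each cell is a terminal payoff or the value of the
# resulting state, with nothing subtracted for the current state
for a, b in product(actions(1), actions(1)):
    r, nxt = step((1, 1), a, b)
    print(f"{a:>5} vs {b:<5}: {r + (GAMMA * V[nxt] if nxt else 0.0):+.3f}")
```

By symmetry the value of (1,1) comes out near zero, and the table shows, for example, that shoot-vs-block is just the discounted value of landing in (0,1).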
you almost certainly won't exist in the next instant anyway
Maybe I won't exist as Epirito, the guy who is writing this right now, who was born in Lisbon and so on. Or rather I should say, maybe I won't exist as the guy who remembers having been born in Lisbon, since Lisbon and any concept that refers to the external world is illegitimate in BLEAK.
But if the external world is illegitimate, why do you say that "I probably won't exist in the next instant anyway"? When I say that each instant is independent (BLEAK), do you imagine that each instant all the mat...
"Do you really want to live in a world without Coca Cola?"
I don't really care about sports, but I imagine better athletes must be more entertaining to watch for people who do care. Even if you were to work on an important problem, you wouldn't do it alone; you would probably be one more person among many contributing to it. So you can also look at each celebrity as one more person working on the problem of creating entertainment. Imagine if all music were wiped out of the world by magic. Wouldn't that suck?