If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how ...
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.
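To make that concrete, here is a toy calculation in Python (the 0.1 chance of being knocked out and the continuation value of 5 are invented numbers for illustration, not anything read off the curves):

```python
# Toy sketch: two gambles whose *stated* expected utilities are equal,
# where option B also carries a chance of being knocked out of the game.
# All numbers here are invented for illustration.

p_ruin = 0.1        # assumed chance that option B ends my ability to play
continuation = 5.0  # assumed utility of staying in the game for future rounds

# Stated payoffs (in utility units): both options have expectation 10.
ev_a = 10.0                                                 # option A: a certain 10
ev_b = p_ruin * 0.0 + (1 - p_ruin) * (10.0 / (1 - p_ruin))  # option B: 0 or ~11.1

# If losing also forfeits future play, B's effective expectation drops:
ev_b_effective = ev_b - p_ruin * continuation

print(ev_a, ev_b, ev_b_effective)  # 10.0, ~10.0, ~9.5
```

If the inferred utility functions had already priced in the continuation term, the two expectations would not have come out equal, which is exactly the calibration worry.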
The problem feels related to Pascal's wager - how to deal with the low-probability disaster.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, b...
I think that international relations is a simple extension of social-contract-like considerations.
If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.
You seem to be suggesting that [government] enables [cooperation]
I guess you mean that I'm saying cooperation is impossible without government. ...
Values start to have costs only when they are realized or implemented.
How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?
Costlessly increasing the welfare of strangers doesn't sound like altruism to me.
OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)
But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.
If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.
You don't need it to have media of exchange, nor cooperation between individuals, nor specialization
Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.
Yes, these things can exist to a smal...
Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact; I can't think of a way to make it clearer.
Maybe ponder this:
How could my quality of life be affected by something with no causal influence on me?
Why does it seem false?
If welfare of strangers is something you value, then it is not a net cost.
Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).
Under that...
The question is not one of your goals being 50% fulfilled
If I'm talking about a goal actually being 50% fulfilled, then it is.
"Risk avoidance" and "value" are not synonyms.
Really?
I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
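To make the question concrete, here is a toy willingness-to-pay calculation (the wealth figures and the square-root utility function are assumptions for illustration only):

```python
import math

# Toy sketch: how much a risk-averse agent would pay to remove a risk,
# under an assumed concave (square-root) utility of wealth.
# All figures are invented for illustration.

wealth = 100.0
loss = 64.0   # size of the possible loss
p = 0.5       # probability of the loss

u = math.sqrt  # assumed risk-averse utility function

# Expected utility if the risk is kept:
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)  # 0.5*6 + 0.5*10 = 8.0

# Certainty equivalent: the sure wealth giving the same expected utility.
certainty_equivalent = eu_uninsured ** 2  # 64.0

# Maximum premium the agent would pay for full insurance:
max_premium = wealth - certainty_equivalent  # 36.0
expected_loss = p * loss                     # 32.0

print(max_premium, expected_loss)  # 36.0 > 32.0
```

The gap between the two (4.0 here) is the risk premium, and it is why people will pay for such a service even at prices above the expected loss.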
If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from bei...
Apologies if my point wasn't clear.
If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we were using similar enough definitions of altruism to understand each other.
We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.
Furthermore, being nice in a way that exposes me to undue risk i...
Point 1:
my goals may be fulfilled to some degree
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e. 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.
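To put that in symbols (treating degree of fulfillment as a number $f \in [0, 1]$, which is an assumption about how fulfillment is measured):

$$f_2 = 0.51 > 0.50 = f_1 \implies u(f_2) > u(f_1) \text{ for any strictly increasing } u,$$

and since both options here are certain, no attitude to risk can reverse that ranking.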
The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
But risk is i...
I did mean after controlling for an ability to have impact
Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?
Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, i.e. equality of all people in the collective eyes of all people, which has a (different) sound basis.
I would call it a bias because it is irrational.
It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).
Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
Rationality is about implementing your goals
That's what I meant.
An interesting claim :-) Want to unroll it?
Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending how you define it) are not orthogonal.
Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. it furthers one's goals, i.e. it is rational. I kno...
Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.
empathy is not [one level removed from terminal values]
Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise "do exactly as you feel immediately inclined, at all times," would be all we needed to know about morality.
I see Sniffnoy also raised the same point.
I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but a higher expectation. In which case, I would call it a bias.
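Whether that tendency is a bias depends on what the payoff numbers measure. Here is a toy sketch (the coin-flip numbers and the square-root utility are assumptions for illustration): if the payoffs are money and utility is concave in money, the "risk-averse" choice can be the one that maximizes expected utility.

```python
import math

# Toy sketch: a 'risk-averse' choice that nonetheless maximizes expected
# utility. Payoffs are in money; utility is an assumed concave function.

u = math.sqrt

certain = 50.0                        # option 1: 50 for sure
gamble = [(0.5, 0.0), (0.5, 110.0)]   # option 2: coin flip, expected money 55

ev_money_gamble = sum(p * x for p, x in gamble)  # 55.0 > 50.0
eu_certain = u(certain)                          # ~7.07
eu_gamble = sum(p * u(x) for p, x in gamble)     # ~5.24

print(ev_money_gamble, eu_certain, eu_gamble)
# The gamble wins on expected money but loses on expected utility.
```

If the stated payoffs are already in utility units, though, then consistently preferring the lower-expectation option really is the bias I described.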
A couple of points:
(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example, you say
[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"
Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb...
I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.
The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards...,' under which an ace is likely. The interesting question is, why?
In order to see the question as interesting, though, I first have to see the effect as real.