Appeals to legitimacy of your own judgment don't really help, because what is your own judgment? It's like AIs that are just statistics, but what is statistics[1]? You still need to understand the whole thing to designate something as your own judgment, to place the boundaries of legitimacy on justifications. Causally there are no boundaries to speak of, a mind is full of details that clearly come from outside any reasonable boundaries and should have no legitimacy as moral judgments.
Thus the right thing to do is not what is pleasurable, and not what humans prefer. Not even what you yourself endorse, because considering the question should often shift what you yourself endorse, including based on things with no clear legitimacy as moral judgments in their own right. What is pleasurable, and what humans prefer, and even what you yourself endorse seem exactly like this kind of relevant data, with no fundamentally significant legitimacy, but that is often useful to take into account, even if it's as lessons learned about the world and not as direct votes.
Ilya Sutskever (at 7:32 on the Dwarkesh Podcast):
If you think about it, what does it mean to predict the next token well enough? What does it mean actually? It's a deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token.
It's not statistics, like, it is statistics, but what is statistics? In order to understand those statistics, to compress them, you need to understand what is it about the world that creates those statistics.
At one level, "what I prefer" is information - it's a sample of one, but still the most detailed insight into a human mind I'll ever have. In that sense, my preferences feed, together with other inputs, into an algorithm that outputs predictions and recommended actions.
But at a higher level, "what I prefer" is also the stuff that the algorithm itself is made of. Because ultimately everything always comes from me. Even if it's something that I'm trying my very best to gather as empirical evidence from the world around me, it's filtered by me. If I am King Solomon and must do the good thing when two women claim to be the mother of the same baby, I still need to have some way to judge and be convinced of which woman is lying and which is telling the truth. And whatever my process is, it may be informed by my past experience, but it's still filtered through my own judgement, et cetera. Just like with scientific and empirical matters - I can try to update my beliefs to best approximate some ideal truth, but I can never state with 100% certainty that I have reached it.
On the other hand, it's possible that objective morality exists but is not empirically obtainable knowledge in nature. If that was the case, the only other way I can imagine for that knowledge to work is by some kind of enlightenment or grace - an inherent inner knowledge that we all possess, or can possess if we achieve the right state of mind, not from observation of the outside world, but by introspection.
Mathematical knowledge is not empirical. By your reasoning, does mathematical knowledge therefore “work by some kind of enlightenment or grace”?
I guess my take might be somewhere between warm and downright scalding hot, but I believe mathematical knowledge to be empirical, either in the sense that we acquire it from direct observation, or in the sense that it has been preprogrammed into our brains by evolution (which I treat as a different case from truly transcendent knowledge, though I suppose you could call it non-empirical - at that point it's just a matter of labelling, and I agree that's how math in our brains works).
I think it may help your argument to use the term "Good" instead of something about "God's will." "God" in the common sense has a bunch of other associations which could be confusing, particularly to non-Judeo-Christians.
You've essentially rediscovered selfish egoism. 'Egoism' because you decide what is right and wrong, and 'selfish', because in the end it comes down to your self-interests. It's been written about a few times on LessWrong (1, 2, 3), but very rarely upvoted and often massively downvoted for offending the utilitarian cult sensibilities. As Wesley says in the third link,
The reason why I would prefer that more people explicitly acknowledge the egoist foundations of their moral theory is that I believe moral judgment of others does great harm to our society. Utilitarianism dresses itself up as objective, and therefore leaves room to decide that other people have moral obligations, and that we are free (or even obligated) to judge and/or punish them for their moral failings.
...
If, instead, we acknowledge that our moral beliefs are merely preferences for how we would like the world to work, we will inflict less useless suffering. If we acknowledge that attempting to force our morality on someone else is inherently coercive, we will use it only in circumstances where we feel that coercion is justified.
...
I have a preference for less suffering in the world. If you share that preference, consider adopting an explicitly egoist morality and encouraging others with similar preferences to do the same.
I appreciate him doing the work to appeal to the people who need a pathological argument, because I find myself more often turning around and muttering "...idiots" under my breath when people respond viscerally rather than rationally.
I very explicitly said it's not about self-interest, but rather about our epistemological relation to the world. That even if we have an idealised notion of what constitutes "good", we can still only judge that good (and its eventual outcomes) from our own limited perspective. Even if our principle was just "listen to others and do for them what they ask", we would still have to do the parsing and interpretation of their words, the modelling of how our actions will impact their utility, etc.
Usually, I use the term "explicitly said" to mean, "I can quote myself saying verbatim..." But sure, you did say something pretty similar a couple times:
Am I selfish for doing good just because it fits my own view of what is good? Does it mean I'm using others just to satisfy my own sense of being a good person? I think that kind of thing exists, but what I'm pointing at here is a far deeper level. You could do good in utterly sincere abnegation of your own self and you would still be doing it on your own judgement.
What do you mean by that last sentence? Do you want to do good in utterly sincere abnegation of your own self? If so, how is that an utterly sincere abnegation? Can you constructively define what you are pointing at, or do I need to have faith that it exists?
I suppose, abnegation of anything that can be construed as actually benefitting my utility function in ways other than the most abstract level of "I wanted X to happen and made it happen". And I agree it can't be any less than that.
Consider the extreme case of someone who sacrifices their life to save another. Even though they may derive serenity or satisfaction from that in their very last moments, it's hard to construe that as "selfish" in any but the broadest sense, given they don't even get to experience that for long. You can't escape that broadest sense, I agree, but it's so broad as to render the qualification essentially meaningless, especially compared to the usual understood meaning of "selfish".
Why does it matter how long they get to experience the self-satisfaction after the action was performed? I can see five scenarios where people would self-destruct in this manner:
They prefer the world to look a certain way, more than they prefer their continued existence in the world. Think of all the people who fall into a depression after a loved one dies and say, "I wish it were me, not them."
They care a lot about their self-perception, so they precommitted to sacrifice if this scenario ever showed up. When it actually does, they wish they hadn't made that precommitment, but they wouldn't have gained that self-satisfaction for all those years if they knew they were the kind of person who could easily renege on their commitments.
They were brainwashed by larger society, so they don't even consider the costs or the benefits, they just take an action. Consider how military basic training breaks down people's egos and builds them up into unquestioning tools for their superiors to use.
They believe they will be better off, but reality does not conform to their beliefs. Maybe they were promised a mansion in the afterlife, or (if they happen to survive) a medal of honor, money, and respect, but the promises never materialize.
They just failed at analyzing what they want, and whether this helps them achieve that goal. Maybe they were short on time and didn't think ahead, or maybe they're just a useful idiot.
In the first two scenarios, people are being selfish. Not everyone has the same wants and desires, so an action that you wouldn't prefer may be preferred by someone else. The last three scenarios are due to either anti-epistemology or irrationality. If you don't want something to happen, then you should never be intentionally making it happen. If you do, you're just serving your own self-interests. As every rational agent does.
You can't escape that broadest sense, I agree, but it's so broad as to render the qualification essentially meaningless, especially compared to the usual understood meaning of "selfish".
Sure. The usual connotation around 'selfish' is serving your self-interests at the expense of others. In my original comment, I just used the denotation
'selfish', because in the end it comes down to your self-interests
not the connotation.
Why does it matter how long they get to experience the self-satisfaction after the action was performed?
Generally speaking, I'd say utility is somewhat weighted by duration. I'd be suspicious of a utility function that says that one year of atrocious pain is as bad as one minute, for example.
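To sketch what I mean (a toy formulation of my own, not a claim about anyone's actual utility function), something like:

$$U \approx -\int_0^{T} i(t)\,dt$$

where $i(t)$ is the momentary intensity of the suffering and $T$ its duration - so a year of atrocious pain at a given intensity comes out vastly worse than a minute of it, rather than the two being equivalent.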
Other than that, sure, I think it's fair to say it's selfish in that very broad sense. I guess my point is that it's something that is ill-captured by intuitive terms like "self-interest". To me, "interest" implies some kind of objective, direct benefit to my utility, as opposed to a more general goal/want that only implies my aim for something, regardless of the reason for it. I'm not sure what a good term for this sort of want-for-want's-sake would be, to distinguish it from the more straightforward wanting something because it brings me pleasure, enjoyment, safety etc.
Epistemic status: I'm sure most of my arguments have already been made and argued to death by philosophers that I'm not knowledgeable enough to name. I'm only writing this to outline what I genuinely, spontaneously have come to think on the matter, regardless of how many wheels I may be reinventing. I'm of course interested in knowing about any related debates or arguments I may have missed, so feel free to bring them up in comments if something comes to mind.
Context: This is written as a sort of long-form response to an argument I've been having on a rationalist chat. The argument stemmed from discussions of post-rationalism vs rationalism, and whether there could be technically false things that make you more moral if believed, specifically re: the existence of some objective criterion for morality outside of ourselves. The text will make it clear enough which side I stand on.
I do rape all I want. And the amount I want is zero. And I do murder all I want, and the amount I want is zero.
Penn Jillette
Imagine being locked inside a closed trolley. The trolley has no windows, only a bunch of monitors reporting warnings from various proximity sensors, radar, sonar and so on, giving you a faint impression of what's outside. The trolley is running at full speed on a railway, and sometimes you get a red light or a warning suggesting that a fuzzy, non-zero number of people may be on one or both of the tracks past the fork just ahead of you, and you get a chance to push a Big Red Button that will make your trolley go one way instead of another. With limited time, limited knowledge, and no guarantee of what the button will even actually do, you have to make your call, over and over again.
Sounds insane? You best start believing in insane moral philosophy thought experiments, Ms. Turner. You're in one.
A few days ago, in a discussion about rationalism and ethics and false but useful beliefs, I had to answer the question of why I thought I should do what I consider good.
"Because I like it so," I said.
"For personal pleasure, then?" was the follow-up question[1].
"Well, not quite. Pleasure feels like a baser kind of thing, you know? I'd say this is more of a rational-mind level 'like'. It's what I think I should do, even if on some simpler level it does not produce an actual dopaminergic response within my brain."
I then argued my position, but what I came to realise and argue throughout the discussion was how strongly I believe that this is not just the way it is for me - it is the way it is for everyone, and, even more importantly, the way it has to be, without escape, in any possible world that even faintly resembles our own. All we can do is either accept it and find a way to make the best of it, or delude ourselves on the matter and be much more prone to critical mistakes. I'll go on to explain why.
I'll make my case starting with a classic mathematician's sleight of hand: assume the contrary, follow the logical consequences, and reach a contradiction.
Suppose there is a God. Suppose we concede that in some deep, ontological sense, God's will is one and the same with the definition of Moral Good. Not just because God is a big powerful guy who will impose His law by force (because that's just "might is right"), but because He's so deeply and fundamentally woven into the fabric of reality that to do as God pleases is good, and to do good pleases God. This, as far as I can tell, is roughly the vision of Christian theology, at the very least.
Is my problem solved? Well, not quite. Now I need to know what God's will is, precisely. This hardly seems easy! How could one go about it?
On one hand, there is the empirical road. If morality was both objective and empirically verifiable, then we ought to be able to measure the signs. Holy texts like the Ten Commandments would be an obvious example of empirical evidence, as direct interventions of God in the world literally spelling out His will. Even more direct evidence would be if for example some form of karma was enacted on people; you could then run large statistical studies to determine to some degree of confidence which sorts of people are more likely to be struck down by divine misfortune than others, and thus trace the way to moral goodness with science.
But if this world resembles ours in any way, that is not going to be easy. A Holy Text is, on paper, clearly no different from the words of any random charlatan. You do not generally feel any immediate wafting of grace at mere proximity to the Correct Holy Text (or we would have identified it already and simplified our lives quite a lot) - let alone being able to spot the copying errors, intentional distortions or translation inaccuracies that may warp its meaning. And large statistical studies? Please. We all know full well how messy they can get. We already have trouble determining whether red wine causes cancer, let alone whether the sin of pride does.
Ultimately, all you'll have is yourself, sitting in front of a mountain of potential evidence, some true, some spurious, and no way to make sense of which is which other than your own judgement.
Put these foolish ambitions to rest.
Margit, the Fell Omen
On the other hand, it's possible[2] that objective morality exists but is not empirically obtainable knowledge in nature. If that was the case, the only other way I can imagine for that knowledge to work is by some kind of enlightenment or grace - an inherent inner knowledge that we all possess, or can possess if we achieve the right state of mind, not from observation of the outside world, but by introspection. The kind of thing you would just "feel" was correct all along, without need for any further evidence.
And to be sure, we do have moral instincts. But this affair is tremendously complicated by the fact that empirical feedback loops still interfere with them. After all, those instincts can be, and in fact probably have been, shaped by evolution, via positive selection of human groups that followed good game-theoretical principles enabling cooperation over less successful groups that simply couldn't be as functional. Not killing others generally has actual practical benefits, so it's rewarded. But it's also clear that "just follow what your heart tells you" isn't great ethical advice, because it turns out a lot of people's hearts also tell them to murder the spouse who cheated on them, or to go to war with that other tribe, who are all inferior barbarians anyway. So obviously the soul-searching process is more complex than that.
So what does that look like? You'll have to sift through your own feelings. You'll have to sort out which of them stem from one of these genuine innate insights about morality and which are spurious - either the result of evolutionary feedback loops or your own biases creeping in. And you're the only one in the chamber of your mind who can make the ultimate call. In the end, you'll pick some as the true and correct Voice Of God That Whispers To You, discard the others, and go with that.
In other words, you'll do what you like.
I must be without remorse or regrets as I am without excuse; for from the instant of my upsurge into being, I carry the weight of the world by myself alone without help, engaged in a world for which I bear the whole responsibility without being able, whatever I do, to tear myself away from this responsibility for an instant.
Jean-Paul Sartre
Do what thou wilt shall be the whole of the Law.
Aleister Crowley
What I am driving at is this: the necessity for finding the origin of morality in the self - for having everything start out of a conscious choice of "I want it to be so" - is not ethical, it's epistemological.
If there is no objective morality out there, you must choose your own way. If there is objective morality, you must still choose your own way, because just because objective truth exists doesn't mean it's handed to you on a silver platter. Do you want to follow God? Which God do you think is the right one? Why is he even worth following? Does he send signs? How do you know which are the true signs?
Am I selfish for doing good just because it fits my own view of what is good? Does it mean I'm using others just to satisfy my own sense of being a good person? I think that kind of thing exists, but what I'm pointing at here is a far deeper level. You could do good in utterly sincere abnegation of your own self and you would still be doing it on your own judgement. Suppose the most extreme case - suppose I have a definition of "good" that relies heavily on some utility function that accounts for everyone else in the world, but not me[3]. It would still be up to me to assess whether that utility was fulfilled, and up to my perceptions to decide who those others are and how they were impacted. Sometimes it'll seem pretty obvious, other times it won't. I see people-shaped things squirming on the tracks, through the instruments of the trolley. I decide they are most likely people like me. That's the sane choice - elsewhere lies solipsism. But it's a choice, built on top of an inference, plagued by the same fundamental flaws that plague all our knowledge.
The only world in which you could truly and completely abdicate your responsibility for your own moral choices - ground them in anything other than your own will and judgement - is one in which this knowledge was given to us unequivocally and clearly, and if there's one thing we can be sure of, it's that we don't live in such a world.
This, to be clear, is not an invitation to be a selfish jerk.
Level 1 selfishness is "I just do what pleases me, directly." Level 2 selfishness is "I do what is good because it makes me feel more important, gives me social status, or other benefits".
Level 3 selfishness is what I advocate - "I genuinely try to do good, even at the expense of myself if necessary, and acknowledge that others have their own inner lives, wants, needs, and goals, which I must respect to some extent, but also accept that ultimately the final decision on any of this will always be mine, done by my judgement, based on my perspective, and that can never be not so; I am at peace with that and do not desperately seek to ground it instead in any outside validation that would just be a fun-house mirror in which I see myself reflected anyway".
I do agree there is something somewhat fucked up about this - because the inevitable consequence is that even entirely well-meaning people can still do bad stuff to each other. But that's not news; it's been a feature of humanity since forever. I still think in practical terms you can minimise it pretty well, and I don't think holding a belief in the existence of a shared objective moral system does much to improve the situation. It applies social pressure, but that's always present one way or another anyway. Meanwhile, it also encourages hand-waving away complexity that instead requires critical thinking (a classic consequence of this is the just-world fallacy: believing there must be a moral order to the world makes you more likely to believe that those who are hit by misfortune deserve it). Well known as it is in these parts, it's still worth closing this post with the Litany of Gendlin, because I think it is most appropriate to the circumstances:
What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
Eugene Gendlin
I paraphrase, but that was the spirit of the objection. I'll still limit any dialogic components from now on because I only want to present my view, not accidentally misrepresent the other person's.
As in, theoretically we need to entertain the possibility - I can't say this actually holds a lot of weight in my epistemology.
I was about to say that this would probably end with such a person instantly committing suicide to donate their organs anyway, and then remembered that this is the exact plot of the Will Smith movie "Seven Pounds". Which I just spoiled for you, but that's all right, it's god-awful. And part of that is exactly that it tries to portray as noble and admirable the actions of a main character who explicitly seeks out seven "worthy" people to whom to donate his various organs so that he can do the most good with his own death - which ends up looking manipulative as hell, because in trying to be so selfless he comes off as more selfish, as his own personal judgement overrides virtually anyone else's.