Those are mostly "analytical" reasons. I'd say sometimes people just have a psychology that is drawn to monogamy as an ideal (for reasons deeper than just that they'd otherwise struggle with jealousy), which makes them poorly suited for polyamory.
It's said that love has three components: intimacy/romance, passion/lust/attraction, and commitment. I would say that the people to whom monogamy feels like the obviously right choice have a psychology that's adapted towards various facets of valuing commitment. For them, commitment isn't something they enter only after analytically weighing the pros and cons and deciding it's net beneficial. Instead, it's something they actively long for, something that gives purpose to their existence. Yes, it comes with tradeoffs, but those contribute to its meaning, and they regard committedness as a highly desirable state.
If someone('s psychology) values commitment in that way, it's an unnatural thought to want to commit to more than one person. Commitment is about turning the relationship into a joint life goal -- but then it's not in line with your current life goals to add more goals/commitments that distract from it.
I don't mean to say that polyamorous couples cannot also regard commitment as a desirable state (say, if they're particularly committed to their primary relationship). If anyone poly is reading this and valuing commitment is ~their primary motivation in life, I'd be curious to learn how this manifests. To me, it feels in tension with having romantically meaningful relationships with multiple people, because it sounds like sharing your resources instead of devoting them all towards the one most important thing. But I haven't talked to polyamorous people about this topic and I might be missing something. (For instance, in my case I also happen to be somewhat off-the-charts introverted, which means I see various social things differently from others.)
I'm not well-positioned to think about your prioritization; for all I know, you're prioritizing well! I didn't mean to suggest otherwise.
And I guess you're making the general point that I shouldn't put too much stock in "my sequence hasn't gotten much in terms of concrete pushback," because it could well be that there are people who would have concrete pushback but don't think it's worth commenting, since it's not clear whether many people other than myself would be interested. That's fair!
(But then, probably more people than just me would be interested in a post or sequence on why moral realism is true, for reasons other than deferring, so those object-level arguments had better be put online somewhere!)
In the future we'll have so much better tools for exploring reasoning including various assumptions. [...] Why lock in these assumptions now instead of later, after we can answer such questions (and maybe much better ones people think up in the future) and let the answers inform our choices?
>What if once we become superintelligent (and don't mess up our philosophical abilities), philosophy in general will become much clearer, similar to how a IQ 100 person is way worse at philosophy compared to you? Would you advise the IQ 100 person to "locking in some basic assumptions about how to do reasoning" based on his current intuitions?
If it is as you describe, and if the area of philosophy in question didn’t already seem clear to me, then that would indeed convince me in favor of waiting/deferring.
However, my disagreement is that there are a bunch of areas of philosophy (by no means all of them -- I have a really poor grasp of decision theory and anthropics and infinite ethics!) that already seem clear to me and it's hard to conceive of how things could get muddled again (or become clear in a different form).
Also, I can turn the question around on you:
What if the IQ 200 version of yourself sees more considerations, but overall their picture is still disorienting? What if their subjective sense of “oh no, so many considerations, what if I pick the wrong way of going about it?” never goes away? What about the opportunity costs, then?
You might say, "So what, it was worth waiting in case it gave me more clarity; nothing of much value was lost in the meantime, since, in the meantime, we can just focus on robustly beneficial interventions under uncertainty."
I have sympathies for that, but I think it's easier said than done. Sure, we should all move towards a world where EA focuses on outlining a joint package of interventions that captures low-hanging fruit from all plausible value angles, including suffering-focused ethics (here's my take). We'd discourage too much emphasis on personal values and instead encourage a framing of "all these things are important according to plausible value systems, so let's allocate talent according to individual comparative advantages."
If you think I'm against that sort of approach, you're wrong. I've written this introduction to multiverse-wide cooperation and I have myself worked on things like arguing for better compute governance and temporarily pausing AI, even though I think those things are kind of unimportant from an s-risk perspective. (It's a bit more confounded now because I got married and feel more invested in my own life for that reason, but still...)
The reason I still talk a lot about my own values is because:
(1) I'm reacting to what I perceive to be other people unfairly dismissing my values (based on, e.g., metaethical assumptions I don't share);
(2) I'm reacting to other people spreading metaethical takes that indirectly undermine my values (e.g., saying things like we should update to the values of smart EAs, who happen to have founder effects around certain values, or making the "option value argument" in a very misleading way);
(3) I think that unless some people are viscerally invested in doing the thing that is best according to some specific values, it's quite likely that the "portfolio-based," cooperative approach of "let's together make sure that all the time-sensitive and important EA stuff gets done" will predictably miss specific interventions, particularly ones that don't turn out to be highly regarded in the community or are otherwise difficult or taxing to think about. (I've written here about ways in which people interested in multiverse-wide cooperation might fail to do it well for reasons related to this point.)
Overall, I don't think I (and others) would have come up with nearly the same number of important considerations if I had been much more uncertain about my values and ways of thinking, and I think EA would be epistemically worse off if it had been otherwise.
The way I see it, this shows that the counterfactual where I had just stayed uncertain would have come with real opportunity costs.
Maybe that's a human limitation; maybe there are beings who can be arbitrarily intellectually productive and focused on what matters most given their exact uncertainty distribution over plausible values, beings who get up in the morning and do ambitious and targeted things (and make sacrifices) even without knowing exactly what their motivation ultimately points to. In my case at least, I would have been less motivated to make use of whatever potential I had if I hadn't known that the reason I got up in the morning was to reduce suffering.
One thing I've been wondering:
When you discuss these things with me, are you taking care to imagine yourself as someone who has strong object-level intuitions and attachments about what to value? I feel like, if you don't do that, then you'll continue to find my position puzzling in the same way John Wentworth found other people's attitudes about romantic relationships puzzling, before he figured out he lacked a functioning oxytocin receptor. Maybe we just have different psychologies? I'm not trying to get you to adopt a specific theory of population ethics if you don't already feel like doing that. But you're trying to get me to go back to a state of uncertainty, even though, to me, that feels wrong. Are you putting yourself in my shoes enough when you give me that advice?
One of the most important points I make in the last post in my sequence is that forming convictions may not feel like a careful choice, but rather more like a discovery about who we are.
I'll now add a bunch of quotes from various points in my sequence to illustrate what I mean by "more like a discovery about who we are":
For moral reflection to move from an abstract hobby to something that guides us, we have to move beyond contemplating how strangers should behave in thought experiments. At some point, we also have to envision ourselves adopting an identity of “wanting to do good.”
[...]
[...] “forming convictions” is not an entirely voluntary process – sometimes, we can’t help but feel confident about something after learning the details of a particular debate.
[...]
Arguably, we are closer (in the sense of our intuitions being more accustomed and therefore, arguably, more reliable) to many of the fundamental issues in moral philosophy than to matters like “carefully setting up a sequence of virtual reality thought experiments to aid an open-minded process of moral reflection.” Therefore, it seems reasonable/defensible to think of oneself as better positioned to form convictions about object-level morality (in places where we deem it safe enough).
In my sequence's last post, I have a whole list of "Pitfalls of reflection procedures" about things that can go badly wrong, and a list on how "Reflection strategies require judgment calls." By "judgment call" I don't mean that making the "wrong" decision would necessarily be catastrophic, but rather just that the outcome of our reflection might very well be heavily influenced by unavoidable early decisions that seem kind of arbitrary. If that is the case, and if we realize that we feel more confident about some first-order moral intuition than about which way to lean regarding the judgment calls of setting up moral reflection procedures ("how to get to IQ 200" in your example), then it actually starts to seem risky and imprudent to defer to the reflection.
A few more quotes:
As Carlsmith describes it, one has to – at some point – “actively create oneself.”
On why there's not always a wager for naturalist moral realism (the wager applies only to people like you who start the process without object-level moral convictions).
Whether a person’s moral convictions describe the “true moral reality” [...] or “one well-specified morality out of several defensible options” [...] comes down to other people’s thinking. As far as that single person is concerned, the “stuff” moral convictions are made from remains the same. That “stuff,” the common currency, consists of features in the moral option space that the person considers to be the most appealing systematization of “altruism/doing good,” so much so that they deem them worthy of orienting their lives around. If everyone else has that attitude about the same exact features, then [naturalist] moral realism is true. Otherwise, moral anti-realism is true. The common currency – the stuff moral convictions are made from – matters in both cases.
[...] Anticipating objections (dialogue):
Critic: Why would moral anti-realists bother to form well-specified moral views? If they know that their motivation to act morally points in an arbitrary direction, shouldn’t they remain indifferent about the more contested aspects of morality? It seems that it’s part of the meaning of “morality” that this sort of arbitrariness shouldn’t happen.
[...] Critic: I understand being indifferent in the light of indefinability. If the true morality is under-defined, so be it. That part seems clear. What I don’t understand is favoring one of the options. Can you explain to me the thinking of someone who self-identifies as a moral anti-realist yet has moral convictions in domains where they think that other philosophically sophisticated reasoners won’t come to share them?
Me: I suspect that your beliefs about morality are too primed by moral realist ways of thinking. If you internalized moral anti-realism more, your intuitions about how morality needs to function could change.
Consider the concept of “athletic fitness.” Suppose many people grew up with a deep-seated need to study it to become ideally athletically fit. At some point in their studies, they discover that there are multiple options to cash out athletic fitness, e.g., the difference between marathon running vs. 100m-sprints. They may feel drawn to one of those options, or they may be indifferent.
Likewise, imagine that you became interested in moral philosophy after reading some moral arguments, such as Singer’s drowning child argument in Famine, Affluence and Morality. You developed the motivation to act morally as it became clear to you that, e.g., spending money on poverty reduction ranks “morally better” (in a sense that you care about) than spending money on a luxury watch. You continue to study morality. You become interested in contested subdomains of morality, like theories of well-being or population ethics. You experience some inner pressure to form opinions in those areas because when you think about various options and their implications, your mind goes, “Wow, these considerations matter.” As you learn more about metaethics and the option space for how to reason about morality, you begin to think that moral anti-realism is most likely true. In other words, you come to believe that there are likely different systematizations of “altruism/doing good impartially” that individual philosophically sophisticated reasoners will deem defensible. At this point, there are two options for how you might feel: either you’ll be undecided between theories, or you find that a specific moral view deeply appeals to you.
In the story I just described, your motivation to act morally comes from things that are very “emotionally and epistemically close” to you, such as the features of Peter Singer’s drowning child argument. Your moral motivation doesn’t come from conceptual analysis about “morality” as an irreducibly normative concept. (Some people do think that way, but this isn’t the story here!) It also doesn’t come from wanting other philosophical reasoners to necessarily share your motivation. Because we’re discussing a naturalist picture of morality, morality tangibly connects to your motivations. You want to act morally not “because it’s moral,” but because it relates to concrete things like helping people, etc. Once you find yourself with a moral conviction about something tangible, you don’t care whether others would form it as well.
I mean, you would care if you thought others not sharing your particular conviction was evidence that you’re making a mistake. If moral realism was true, it would be evidence of that. However, if anti-realism is indeed correct, then it wouldn’t have to weaken your conviction.
Critic: Why do some people form convictions and not others?
Me: It no longer feels like a choice when you see the option space clearly. You either find yourself having strong opinions on what to value (or how to morally reason), or you don’t.
--
I don't think you should trust your object-level intuitions, because we don't have a good enough metaethical or metaphilosophical theory that says that's a good idea. If you think you do have one, aren't you worried that it doesn't convince a majority of philosophers, or the majority of some other set of people you trust and respect? Or worried that human brains are so limited and we've explored a tiny fraction of all possible arguments and ideas?
So far, I genuinely have not gotten much object-level pushback on the most load-bearing points of my sequence, so I'm not that worried. I do respect your reasoning a great deal, but I find it hard to know what to do with your advice, since you're not proposing a concrete alternative and aren't putting your finger on some concrete part of my framework that you think is clearly wrong -- you just say I should be less certain about everything, but I wouldn't know how to do that and it feels like I don't want to do it.
FWIW, I would consider it relevant if people I intellectually respect were to disagree strongly and concretely with my thoughts on how to go about moral reasoning. (I agree it's more relevant if they push back against my reasoning about values rather than just disagreeing with my specific values.)
Thanks! That makes sense, and I should have said earlier that I already suspected I understood your point and that you expressed yourself well – it’s just that (1) I’m always hesitant to put words in people’s mouths, so I didn’t want to say I was confident I could paraphrase your position, and (2) whenever you make posts about metaethics, I wonder, “oh no, does this apply to me, am I one of the people doing the thing he says one shouldn’t do?”, so I was interested in prompting you to be more concrete about how detailed someone’s confident opinion in that area would have to be before you think they reveal themselves as overconfident.
By "metaethics" I mean "the nature of values/morality", which I think is how it's used in academic philosophy.
Yeah, makes sense. I think the academic use is basically that, with some added baggage that mostly adds confusion. If I were to sum up how the term is used in academic philosophy, I would say "the nature of values/morality, at a very abstract level and looked at through the lens of analyzing language." For some reason, academic philosophy is oddly focused on the nature of moral language rather than on morality/values directly. (I find it a confusing/unhelpful tradition of "Language comes first, then comes the territory.") As a result, classical metaethical positions at best say pretty abstract things about what values are. They might say things like "Values are irreducible (nonnaturalism)" or "Values can be reduced to nonmoral terminology like desires/goals, conscious states, etc. (naturalism)," but without actually telling us the specifics of that connection/reduction. If we were to ask, "Well, how can we know what the right values are?", it's not the case that most metaethicists would consider themselves obviously responsible for answering that question! Sure, they might have a personal take, but they may write about it in a way that doesn't connect their answer to why they endorse a high-level metaethical theory like nonnaturalist moral realism.
Basically, there are (at least) two ways to do metaethics: metaethics via analysis of moral language, and metaethics via observation of how people do normative ethics in applied contexts like EA/rationality/longtermism. Academic philosophy does the former while LW does the latter. And so, if academic philosophers read a comment like the one Jan Kulveit left here about metaethics, my guess is that they would think he's confusing metaethics with something else entirely (maybe "applied ethics, but done in a circumspect way, with awareness of the contested and possibly under-defined nature of what we're even trying to do").
Your model sounds right to me, but I think there are benefits of oxytocin receptivity that accrue not just to the individual's satisfaction when they're in a (not-awful) relationship, but also to their surroundings. For instance, I would guess that teams/organizations where many people value interpersonal relationships for their own sake can be stronger and more stable in a way that enables them to -- ironically -- be more ambitious and fight corruptive influences a lot better. (E.g., having high internal trust enables orgs to have flatter hierarchies and fewer levels of secrecy, which is good for team culture and epistemics. Also, if people value their social connections a lot, you have to worry less about important people leaving the org in an uncoordinated fashion if things get difficult in some way, which can be good or bad for impact depending on the specifics.) Relatedly, I think dialing up ambitiousness can go poorly, especially if you remove the safeguarding effects of certain prosocial drives or emotions.
I want to caveat the above by adding that cognitive diversity is almost certainly good for teams, and I could well imagine that the perfect mix at any org includes several people for whom "work is everything" (even if that's driven by "ambitiousness" rather than autistic hyperfixation on their special interest). Also, orgs fill different niches, and there are different equilibria for stable org cultures that talented people like to work in, etc.
So, my point is really just "I'm pretty sure there are upsides you haven't yet listed," rather than, "humans with oxytocin receptors are for sure the better building blocks for forming impactful teams."
Lastly, while I agree that, directionally, oxytocin sensitivity makes people less ambitious, I want to flag that I know many relationship-oriented but still "hardcore" effective altruists who have found partners who are similarly ambitious or at least support their ambition (rather than just supporting them as a partner without properly respecting their ambition), or EAs who had a strong desire to find a partner but deliberately didn't invest much into finding someone because they saw many examples of relationships going badly and didn't want to jeopardize their productivity. Which is to underscore that things like oxytocin sensitivity are still only one component of the overall orientation of one's personality (even though I agree with you that it might be the biggest individual factor).
By "metaethics," do you mean something like "a theory of how humans should think about their values"?
I feel like I've seen that kind of usage on LW a bunch, but it's atypical. In philosophy, "metaethics" has a thinner, less ambitious interpretation of answering something like, "What even are values, are they stance-independent, yes/no?"
And yeah, there is often a bit more nuance than that as you dive deeper into what philosophers in the various camps are exactly saying, but my point is that it's not that common, and certainly not necessary, that "having confident metaethical views," on the academic philosophy reading of "metaethics," means something like "having strong and detailed opinions on how AI should go about figuring out human values."
(And maybe you'd count this against academia, which would be somewhat fair, to be honest, because parts of "metaethics" in philosophy are even further removed from practicality, as they concern the analysis of the language behind moral claims. To use an analogy with claims about the Biblical God and miracles, that would be like focusing way too much on whether the people who wrote the Bible thought they were describing real things or just metaphors, without directly trying to answer burning questions like "Does God exist?" or "Did Jesus live and perform miracles?")
Anyway, I'm asking about this because I found the following paragraph hard to understand:
Behind a veil of ignorance, wouldn't you want everyone to be less confident in their own ideas? Or think "This isn't likely to be a subjective question like morality/values might be, and what are the chances that I'm right and they're all wrong? If I'm truly right why can't I convince most others of this? Is there a reason or evidence that I'm much more rational or philosophically competent than they are?"
My best guess of what you might mean (low confidence) is the following:
You're conceding that morality/values might be (to some degree) subjective, but you're cautioning people from having strong views about "metaethics," which you take to be the question of not just what morality/values even are, but also a bit more ambitiously: how to best reason about them and how to (e.g.) have AI help us think about what we'd want for ourselves and others.
Is that roughly correct?
Because if one goes with the "thin" interpretation of metaethics, then "having one's own metaethics" could be as simple as believing some flavor of "morality/values are subjective," and in the part I quoted, you don't sound too strongly opposed to that stance in itself.
(Sorry for the late reply.)
If you don't think moral realism is worth getting hung up on for AI, and you're interested in implications for AI development and AIs steering the future, then I'd recommend the last two posts in my sequence (8 and 9 in the summary I linked above). And maybe the 7th post as well (it's quite short), since it serves as a recap of some things that will help you better follow the reasoning in post 9.
What you write in that "Moral realism" section of your other post seems reasonable! I was surprised to read that you don't necessarily expect moral realism to apply, because I thought the framing at the start of your post here ("Morality is unsolved") suggested a moral realist outlook (it read to me as though written by someone who expects there to be a solution). I would have maybe added something like "... and we don't even seem to agree on solution criteria," after the message "Morality is unsolved," to highlight the additional uncertainty about metaethics.
That said, after re-reading, I see now that, later on, you say "We also don't understand metaethics well".
This illustrates the phenomenon I talked about in my draft, where people in AI safety would confidently state "I am X" or "As an X" where X is some controversial meta-ethical position that they shouldn't be very confident in, whereas they're more likely to avoid overconfidence in other areas of philosophy like normative ethics.
We’ve had a couple of back-and-forth comments about this elsewhere, so I won’t go into it much, but briefly: I don’t agree with the “that they shouldn’t be very confident in” part. Metaphilosophy is about how to reason philosophically, and we have to do something when we reason, so there is no obviously "neutral" stance, and it's not at all obviously the safe option to remain open-endedly uncertain about how to reason. If you're widely uncertain about what counts as solid reasoning in metaethics, your conclusions might remain under-defined -- up until the point where you decide to allow yourself to pick some fundamental commitments about how you think concepts work. Things like whether it's possible for there to be nonnatural "facts" of an elusive nature that we cannot pin down in non-question-begging terminology. (One of my sequence's posts that I'm most proud of is this one, which argues that realists and anti-realists are operating within different frameworks: they have adopted different ways of reasoning about philosophy at a very fundamental level, and that basically drives all their deeply-rooted disagreements.) Without locking in some basic assumptions about how to do reasoning, your reasoning won't terminate, so you'll eventually have to decide to become certain about something. It feels arbitrary how and when you're going to do that, so it seems legitimate if someone (like me) already trusts their object-level intuitions more than they trust the safety of some AI-aided reflection procedure (and you of all people are probably sympathetic to that part at least, since you've written much about how reflection can go astray).
I'm pretty worried that if their assumed solution is wrong, they're likely to contribute to making the problem worse instead of better.
Yeah, I mean that does seem like fair criticism if we think about things like "moral subjectivists (which on some counts is a form of anti-realism) simply assuming that CEV is a well-defined thing for everyone, not having a backup plan for when it turns out it isn't, and generally just not thinking much about or seemingly not caring about the possibility that it isn't." But that's not my view. I find it hard to see how we can do better than what I tried to get started on in my sequence (specifically the last two posts, on the life-goals framework and on moral reflection): figure out how people actually make up their minds about their goals in ways that they will reflectively endorse and that we'd come to regard as wise/prudent also from the outside, and then create good conditions for that while avoiding failure modes. And figure out satisfying and fair ways of addressing cases where someone's reflection doesn't get off the ground or keeps getting pulled in weird directions because they lack some of the commitments to reasoning frameworks that LessWrongers take for granted. If I were in charge of orchestrating moral reflection, I would understand it if moral realists, or people like you with wide uncertainty bars, were unhappy about it, because our differences do seem large. But at the same time, I think my approach to reflection would leave enough room for people to do their own thing. And apart from maybe you, and maybe Christiano in the context of his thinking about HCH and related things, I might by now be the person who has thought the most about how philosophical reflection could go wrong and why it may or may not terminate or converge (my post on moral reflection took forever to write and I had a bunch of new insights in the process). I think that should upshift the probability that I'm good at this type of philosophy, since it led me to a bunch of new gears-level takes on philosophical reflection that would be relevant when it comes to designing actual reflection procedures.
BTW, are you actually a full-on anti-realist, or actually take one of the intermediate positions between realism and anti-realism? (See my old post Six Plausible Meta-Ethical Alternatives for a quick intro/explanation.)
I've come up with the slogan "morality is real but under-defined" to describe my position -- this is to distinguish it from forms of anti-realism that are more like "anything goes; we're never making mistakes about anything; not even the 14yo who adopts Ayn Rand's philosophy after reading just that one book is ever making a philosophical mistake."
I see things that I like about many of the numbered positions in your classification, but I think it's all gradual, because there's a sense in which even 6. has a good point. But it would be going too far if someone were to say "6. is 100% right and everything else is 0% right, and therefore we shouldn't bother to have any object-level discussions about metaethics, morality, or rationality at all."
To say a bit more:
I confidently rule out the existence of "nonnatural" moral facts -- the elusive ones that some philosophers talk about -- because they just are not part of my reasoning toolbox. I don't understand how they're supposed to work and they seem to violate some pillars of my analytical, reductionist mindset about how concepts get their meaning. (This already puts me at odds with some types of moral realists.)
However, not all talk of "moral facts" is nonnaturalist in nature, so there are some types of "moral facts" that I'm more open to at least conceptually. But those facts have to be tied to concrete identifying criteria like "intelligent evolved minds will be receptive to discovering these facts." Meaning, once we identify those facts, we should be able to gain confidence that we've identified the right ones, as opposed to remaining forever uncertain due to the Open Question Argument.
I think moral motivation is a separate issue and I'm actually happy to call something a "moral fact" even if it is not motivating. As long as intelligent reasoners would agree that it's "the other-regarding/altruistic answer," that would count for me, even if not all of the intelligent reasoners will be interested in doing the thing that it recommends. (But note that I'm already baking in a strong assumption here, namely that my interest in morality makes it synonymous with wanting to do "the most altruistic thing" -- other people may think about morality more like a social contract of self-oriented agents, who, for example, don't have much interest in including powerless nonhuman animals in that contract, so they would be after a different version of "morality" than I'm after. And FWIW, I think the social contract facet of "morality" is real in the same under-defined way as the maximal altruism facet is real, and the reason I'm less interested in it is just because I think the maximum altruism one, for me, is one step further rather than an entirely different thing. That's why I very much don't view it as the "maximally altruistic thing" to just follow negative utilitarianism, because that would go against the social contract facet of morality, and at least insofar as there are people who have explicit life goals that they care about more than their own suffering, who am I to override their agency on that.)
Anyway, so, are there these naturalist moral facts that intelligent reasoners will converge on as being "the right way to do the most altruistic thing"?
My view is that there's no single correct answer, but some answers are plausible (and individually persuasive in some cases, while in other cases someone like you might remain undecided between many different plausible answers), while other answers are clearly wrong. So there's "something there," meaning there's a sense in which morality is "real."
In the same way, I think there's "something there" about many of the proposed facts in your ladder of meta-ethical alternatives, even though, like I said, there's a sense in which even position 6. kind of has a point.
Trust the last person because the thing they're doing isn't the best thing anyone could do in their opinion?
Many of these can complement a romantic relationship (people are often attracted to someone's having passions/ambitions, and having a job provides stability). By contrast, dating multiple people is competing over largely similar resources, as you say. For example, you can only sleep in one person's bed at night, can only put yourself in danger for the sake of others so many times before you might die, etc.
Just knowing that you're splitting resources at all will be somewhat unsatisfying for some psychologies, if people emotionally value the security of commitment. I guess that's in a similar category to jealousy, and the poly stance here is probably that you can train yourself to feel emotionally secure if trust is genuinely justified. But can one disentangle romance/intimacy from wanting to commit to the person you're romantically into? In myself, I feel like those feelings are very intertwined. "Commitment," then, is just the conscious decision to let yourself do what your romantic feelings already want you to do.
That said, maybe people vary in how much these things can be decoupled. Like, some people have a significant link between having sex and pair bonding, whereas others don't. Maybe poly people can disentangle "wanting commitment" from romantic love in a way that I can't? When I read the OP, I was thrown off by this part: "You + your partner are capable of allowing cuddling with friends and friendship with exes without needing to make everything allowed." To me, cuddling is very much something that falls under romantic love, and there's a distinct ickiness to imagining cuddling with anyone who isn't in that category. Probably relatedly, as a kid I didn't want to be touched by anyone, didn't ever want to hug relatives, etc. I'm pretty sure that part is idiosyncratic, because there's no logical reason why cuddling has to be linked to romantic love and commitment, as opposed to functioning more like sex does in people for whom sex is not particularly linked to pair bonding. But what about the thing where the feelings of romantic love also evoke a desire to join your life together with the other person? Do other people not have that? Clearly romantic love is about being drawn to someone, wanting to be physically and emotionally close to them. I find that this naturally extends to the rest of "wanting commitment," but maybe other people are more content just enjoying being drawn to someone without then wanting to plan their future together?
Anyway, the tl;dr of my main point is that psychologies differ and some people appear to be better psychologically adapted for monogamy than you might think if you just read the OP. (Edit: deleted a sentence here.) Actually point 10 in Elizabeth's list is similar to what I've been saying, but I feel like it can be said in a stronger way.