Maybe you're just jokingly pointing out that there's an apparent tension in the sentiment, which is fine.
But someone strong-downvoted my above comment, which suggests that at least one person thinks I have said something that is bad or shouldn't be said?
Is it the inclusion of animal rights (btw, I should have said rights for sentient AIs too), or would people react the same way if I pointed out that an interpretation of a democratic process where every person alive at the Singularity gets one planet to themselves (if they want it) wouldn't be ideal if it means that some sadists could choose to create new sentient minds so they can torture them? I'm just saying, "can we please prevent that?" (Or, maybe, if that were this sadistic person's genuine greatest wish, could we at least compromise around it somehow, so that the minds only appear to be sentient but aren't, and maybe, if it's absolutely necessary, once every year, on the sadist's birthday, a handful of the minds actually become sentient for a few hours, but only for levels of torment that are like a strong headache, and not anything anywhere close to mind-breaking torture?)
Liberty is not the only moral dimension that matters with a global scope; there's also care/harm prevention at the very least. So we shouldn't be surprised if we get a weird result when we try to optimize "do the most liberty thing" without paying any attention at all to care/harm prevention.
That said, if someone insisted on seeing it that way, I certainly wouldn't object to people who actually save the lightcone (not that I'm one of them, and not that I think we are currently on track to get much control over outcomes anyway -- unfortunately I'm not encouraged by Dario Amodei repeatedly strawmanning opposing arguments) getting some kind of benefit or extra perk out of it if they really want that. If someone brings about a utopia-worthy future with a well-crafted process in a democratic spirit, that's awesome, and for all I care, if they want to add some idiosyncratic thing like that we should use the color green a lot in the future or whatever, they should get it, because it's nice of them not to have gone (more) control mode on everyone else when they had the chance. (Of course, in reality I object that "let's respect animal rights" is at all like imposing extra bits of the color green on people. In our current world, not harming animals is quite hard because of the way things are set up and where we get food from, but in a future world, people may not even need food anymore, and if they do still need it, one could create it artificially. But more importantly, it's not in the spirit of "liberty" if you use it to impose on someone else's freedom.)
Taking a step back, I wonder if people really care about the moral object level here (like they would actually pay a lot of their precious resources for the difference between my democratic proposal with added safeguards and their own 100% democratic proposal?), or whether this is more about just taking down people who seem to have strong moral commitments, maybe because of an inner impulse to take down virtue signallers? Maybe I just don't empathize enough with people whose moral foundations are very different from mine, but to me, it's strange to be very invested in the maximum democraticness of a process, but then not care much about the prospect of torture of innocents. Why have moral motivation and involvement for one but not the other?
Sure, maybe you could ask: why do you (Lukas) care only about liberty and harm prevention, but not about, say, authority or purity (other moral foundations according to Haidt)? Well, I genuinely think that authority and purity are more "narrow-scope" and more "personal" moral concerns that people can have for themselves and their smaller communities. In a utopia I would want anyone who cares about these things to get them in their local surroundings, but it would be too imposing to put them on everyone and everything. By contrast, the logic of harm prevention works the other way because it's a concern that every moral patient benefits from.
You're right that you could vote on whether to have any safeguards (and their contents, if yes) instead of installing them top-down. But then who is it that frames the matter in that way (the question of safeguards getting voted on first before everyone gets some resources/influence allocated, versus just starting with the second part without the safeguards)? Who sets up the voting mechanism (e.g., if there's disagreement, is it just majority wins, or should there be some Archipelago-style split in case a significant minority wants things some other way)?
My point is that terms like "democratic" (or "libertarian," for the Archipelago vision) are under-defined. To specify processes that capture the spirit behind these terms as ideals, we have to make some judgment calls. You might think that having democratic ideals also means everyone voting democratically on all these judgment calls, but I don't think that changes the dynamic, because there's an infinite regress: you need certain judgment calls for that, too.
And at this point I feel like asking: if we have to lock in some decisions anyway to get any democratic process off the ground, we may as well pick a setup top-down where the most terrible outcomes (involuntary torture) are less likely to happen for "accidental" reasons that weren't even necessarily "the will of the people." Sure, maybe you could have a phase where you gather inputs and objections to the initial setup, and vote on changes if there's a concrete counterproposal that gains enough traction via legitimate channels. Still, I'd very much want to start by setting a well-thought-out default top-down rather than leaving everything up to chance.
It's not "more democratic" to leave the process underspecified. If you just put 8 billion people in a chat forum without too many rules hooked up to the AGI sovereign that controls the future, it'll get really messy and the result, whatever it is, may not reflect "the will of the people" any better than if we had started out with something already more guided and structured.
Giving everyone a say could lead to some terrible things because there are a lot of messed up people and messed up ideologies. At a minimum, there should be some safeguards imposed from top down. For instance, "give everyone a say but only if their say complies with human and animal rights." Someone has to make sure those safeguards are in there, so the vision cannot be 100% spread out to everyone.
I think if they sponsored Cotra's work and cited it, this reflects badly on them.
I find that position weirdly harsh. Sure, if you're just answering anaguma's question as a binary ("does it reflect well or poorly, regardless of magnitude?"), that could make sense. (Note to readers: This would mean that the quote I started this comment with should be regarded as taken out of context!) But seeing it as reflecting badly at a high magnitude is the judgment I'd consider weirdly harsh.
I'm saying that as someone who has very little epistemic respect for people who think AI ruin is only about 10% likely -- I consider people who think that biased beyond hope.
But back to the timelines point:
It's not like Bioanchors was claiming high confidence in its modelling assumptions or resultant timelines. At the time, a bunch of people in the broader EA ecosystem had even longer timelines, and Bioanchors IIRC took a somewhat strong stance against assigning significant probability mass to timelines beyond 2100, which some EAs at least considered non-obvious. Seen in that context, it contributed to people updating in the right direction. The report also contained footnotes pointing out that advisors held in high regard by Ajeya had shorter timelines based on specific thoughts on horizon lengths or whatever, so the report was hedging towards shorter timelines. Factoring that in, it aged less poorly than it would have if we weren't counting those footnotes. Ajeya also posted an update 2 years later where she shortened her timelines a bunch. If it takes orgs only 2 years to update significantly in the right direction, are they really hopelessly broken?
FWIW, I'm leaning towards you having been right about the critique (credit for sticking your neck out). But why is sponsoring or citing work like that such a bad sign? Sure, if they cited it as particularly authoritative, that would be different. But I don't feel like Open Phil did that. (This seems like a crux based on your questions in the OP and your comments here; my sense from reading other people's replies, and also from my less informed impressions from interacting with some Open Phil staff on a few brief occasions, is that you were overestimating the degree to which Open Phil was attached to specific views.)
For comparison, I think Carlsmith's report on power-seeking was a lot worse in terms of where its predictions landed, so I'd have more sympathy if you pointed to it as an example of what reflects poorly on Open Phil (just want to flag that Carlsmith is my favorite philosophy writer in all of EA). However, there too, I doubt the report was particularly influential within Open Phil, and I don't remember it being promoted as such. Also, I would guess that the pushback it received from many sides would have changed their evaluation of the report after it was written, if they had initially been more inclined to update on it. I mean, that's part of the point of writing/publishing reports like that.
Sure, maybe Open Phil was doing a bunch of work directed more towards convincing outside skeptics that what they're doing is legitimate/okay rather than doing the work "for themselves"? If so, that's a strategic choice... I can see it leading to biased epistemics, but in a world where things had gone better, maybe it would have gotten more billionaires on board with their mission of giving? And it's not like doing the insular MIRI thing that you all had been doing before the recent change to get into public comms was risk-free for internal epistemics either. There are risks on both ends of the spectrum: looking outward, deferring to many experts or at least "caring whether you can convince them," and looking inward, with a small/shrinking circle of people whose research opinions you respect.
On whether some orgs are/were hopelessly broken: it's possible. I feel sad about many things having aged poorly, and I feel like the EA movement has done disappointingly poorly. I also feel like I've heard Open Phil staff say disappointingly dismissive things about MIRI once or twice (even though many of the research directions there didn't age well either).
I don't have a strong view on Open Phil anymore -- it used to be that I had one (and it was positive), so I have become more skeptical. Maybe you're picking up on a real thing about Open Phil's x-risk-focused teams having been irredeemably biased or clouded in their approaches. But insofar as you are, I feel like you've started with unfortunate examples that, at least to me, don't ring super true. (I felt prompted to comment because I feel like I should be well-positioned to sympathize with your takes, given how disappointed I am at the people who still think there's only a 10% AI ruin chance.)
Ah, right. It's not been that long yet, IMO, but if this continues for (say) 2 years, with no one changing their mind but also ~no one engaging with the arguments directly and substantively, that would be disappointing.
In your case, the arguments seem more radical, unlike with arguing for anti-realism where one commonly available reason for not engaging much would be people thinking "I probably have similar enough views already."
For me, epistemology was never my special interest, so I'm not that well-positioned to dive into the topic and try writing a critique or commentary, but I hope that someone else ends up doing it.
If we consider the views of Oxford's EA philosophers as also having had some founding influence on LW and the broader adjacent communities, then it becomes a bit less clear how strongly the founder effect points in the direction of anti-realism.
In any case, I should flag that I no longer think "no object level pushback, therefore I'm not that worried" is a good way of putting it. Instead, I would now put it as follows: "No object level pushback, therefore the burden of proof is no longer on me, and anyone who claims I'm being too confident in my views is on no firmer ground with their position than I am."
(On whether LW is too stuck within founder effects in general: Your example would be pretty telling and damning if we assume that you're correct, but my guess is that most readers here will assume you're wrong about it. Someone in your position could still be right, of course; I'm just saying that this wouldn't yet be apparent to readers.)
Many of these can complement a romantic relationship (people are often attracted to someone's having passions/ambitions, and having a job provides stability). By contrast, dating multiple people is competing over largely similar resources, as you say. For example, you can only sleep in one person's bed at night, can only put yourself in danger for the sake of others so many times before you might die, etc.
Just knowing that you're splitting resources at all will be somewhat unsatisfying for some psychologies, if people emotionally value the security of commitment. I guess that's a similar category to jealousy, and the poly stance here is probably that you can train yourself to feel emotionally secure if trust is genuinely justified. But can one disentangle romance/intimacy from wanting to commit to the person you're romantically into? In myself, I feel like those feelings are very intertwined. "Commitment," then, is just the conscious decision to let yourself do what your romantic feelings already want you to do.
That said, maybe people vary in how much these things can be decoupled. Like, some people have a significant link between having sex and pair bonding, whereas others don't. Maybe poly people can disentangle "wanting commitment" from romantic love in a way that I can't? When I read the OP I was thrown off by this part: "You + your partner are capable of allowing cuddling with friends and friendship with exes without needing to make everything allowed." To me, cuddling is very much something that falls under romantic love, and there's a distinct ickiness to imagining cuddling with anyone who isn't in that category. Probably relatedly, as a kid I didn't want to be touched by anyone, never hugged relatives, etc. I'm pretty sure that part is idiosyncratic because there's no logical reason why cuddling has to be linked to romantic love and commitment, as opposed to it functioning more like sex does in people for whom sex is not particularly linked to pair bonding. But what about the thing where the feelings of romantic love also evoke a desire to join your life together with the other person? Do other people not have that? Clearly romantic love is about being drawn to someone, wanting to be physically and emotionally close to them. I find that this naturally extends to the rest of "wanting commitment," but maybe other people are more content with just enjoying the part of being drawn to someone without then wanting to plan their future together?
Anyway, the tl;dr of my main point is that psychologies differ, and some people appear to be better psychologically adapted for monogamy than you might think if you just read the OP. (Edit: deleted a sentence here.) Actually, point 10 in Elizabeth's list is similar to what I've been saying, but I feel like it can be said in a stronger way.
Those are mostly "analytical" reasons. I'd say sometimes people just have a psychology that is drawn to monogamy as an ideal (for reasons deeper than just struggling with jealousy otherwise), which makes them poorly suited for polyamory.
It's said that love has three components: intimacy/romance, passion/lust/attraction, and commitment. I would say that the people to whom monogamy feels like the obviously right choice have a psychology that's adapted towards various facets of valuing commitment. So commitment is not something they enter into because they've analytically gone through the pros and cons and decided that it's net beneficial for them. Instead, it's something they actively long for that gives purpose to their existence. Yes, it comes with tradeoffs, but that contributes to the meaning of it, and they regard committedness as a highly desirable state.
If someone('s psychology) values commitment in that way, it's an unnatural thought to want to commit to more than one person. Commitment is about turning the relationship into a joint life goal -- but then it's not in line with your current life goals to add more goals/commitments that distract from it.
I don't mean to say that polyamorous couples cannot also regard commitment as a desirable state (say, if they're particularly committed to their primary relationship). If anyone poly is reading this and valuing commitment is ~their primary motivation in life, I'd be curious to learn how this manifests. To me, it feels in tension with having romantically meaningful relationships with multiple people, because it sounds like sharing your resources instead of devoting them all towards the one most important thing. But I haven't talked to polyamorous people about this topic and I might be missing something. (For instance, in my case I also happen to be somewhat off-the-charts introverted, which means I see various social things differently from others.)
I'm not well-positioned to think about your prioritization; for all I know, you're probably prioritizing well! I didn't mean to suggest otherwise.
And I guess you're making the general point that I shouldn't put too much stake into "my sequence hasn't gotten much in terms of concrete pushback," because it could well be that there are people who would have concrete pushback but don't think it's worth commenting since it's not clear if many people other than myself would be interested. That's fair!
(But then, probably more people than just me would be interested in a post or sequence on why moral realism is true, for reasons other than deferring, so those object-level arguments had better be put online somewhere!)
That makes sense. I was never assuming a context where having to bargain for anything is the default. So the coalition doesn't have to be fair to everyone, since it's not a "coalition" at all; rather, most people would be given stuff for free because the group that builds aligned AI has democracy as one of its values.
Sure, it's not 100% for free because there are certain expectations, and the public can put pressure on companies that appear to be planning things that are unilateral and selfish. Legally, I would hope companies are at least bound to the values in their country's constitution. More importantly, morally, it would be quite bad to not share what you have and try to make things nice for everyone (worldwide), with constraints/safeguards. Still, as I've said, I think it would be really strange and irresponsible if someone thought that a group or coalition that brought about a Singularity that actually goes well somehow owes a share of influence to every person on the planet without any vetting or safeguards.