That gains in power for another group that wants to destroy you are necessarily worse for you.

Yes. In many real-life scenarios, this is true. In small games where the rules are blatant, it's easier to tell if someone is breaking an agreement or trying to subvert you, so model games aren't necessarily indicative of real-world conditions. For a real-life example, look at the US's decision to fund religious groups to fight communists in the Middle East. If someone wants to destroy you, they'll work secretly to subvert you during the alliance, and after the alliance is over, they'll use whatever new powers they've gained to try to destroy you.

People make compromises that sacrifice things intrinsic to their stated beliefs when they believe losing is inevitable; by making the "best bet," they reveal that they weren't trying to win, that they'd utterly given up on winning. The point of anarchy is that there is no king. For an anarchist to be a kingmaker is for an anarchist to give up on anarchy.

And from a moral standpoint, what about the situation where someone is asked to work with a rapist, pedophile, or serial killer? We're talking about heinous beliefs and actions here, things that would make someone a monster, not mundane "this person uses Ruby and I use Python" disagreements. What if working with a {rapist,pedo,serial killer} means they live to injure and kill another day? If that's the outcome, then by working with them you're enabling it.

I think you're confusing rationality with plain self-interest. If you have something to protect, then it may be reasonable to sacrifice personal comfort or even your life to protect it.

Also, your comment implies that the only reason you'd fight for something other than yourself is out of "liking the idea of yourself as some sort of social justice warrior," as opposed to caring about something and believing you can win by applying some strategy. And saying you'd "ruin your life" implies a set of values by which a life would count as "ruined."

true zero-sum games are ridiculously rare in the real world. There's always some way to achieve mutual gains - even if it's just "avoid mutual losses"

I disagree.

I think you're underestimating how deep value differences can be, and how those values play into everything a person does. Countries with nuclear weapons and opposing interests are actively trying to destroy each other without destroying themselves in the process, and if you're curious about the failures of MAD, I'd suggest reading The Doomsday Machine, by Daniel Ellsberg. If that book is to be taken as mostly true, and the many-worlds interpretation (MWI) is to be taken as true, then I suspect that many, many worlds were destroyed by nuclear missiles.

When I found this unintuitive, I spent a day thinking about quantum suicide to build that intuition: most instances of all of us are dead because we relied on MAD. We're having this experience now, where I'm writing this comment and you're reading it, because everything that can happen will happen in some branch of the multiverse. That means our existence is only weak evidence for the efficacy of MAD, and all of those very close calls are stronger evidence for our destruction in other branches. This doesn't mean we're in the magic branch where MAD works; it means we've gotten lucky so far. Our futures are infinite split branches of parallel mes and yous, and in most of those where we rely on strategies like MAD, we die.
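
To make the compounding explicit, here's a toy calculation (the numbers are mine, purely illustrative, and not from Ellsberg): if each close call destroys even a modest fraction of branches, the surviving measure shrinks geometrically with the number of close calls.

```latex
% Toy model: p = fraction of branches surviving each close call (assumed),
% n = number of close calls. The surviving measure after n calls is
\[
  \Pr[\text{surviving branches}] = p^{n},
  \qquad \text{e.g. } 0.9^{20} \approx 0.12 .
\]
% Even 90% per-event survival leaves only ~12% of branches after 20 calls.
```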

...

Scissor statements reveal pre-existing differences in values; they don't create them. There really are people out there whose values result in them doing terrible things. Furthermore, beliefs and values aren't just clothes we wear: we act on them and live by them. So it's reasonable to assume that someone who holds particularly heinous beliefs and values acts on those beliefs and values.

In the SSC short story, scissor statements are used to tear apart Mozambique, and in real life, we see propagandists using scissor statements to split up activist coalitions. It's not hypothetical; divide and conquer is a useful strategy that has probably been used since the dawn of time. But not all divides are created equal.

In the 1300s in rural France, peasants revolted against the enclosure of the commons, and since many of these revolts were led by women, nascent state officials focused their efforts on driving a (false) wedge between men and women, accusing those women of being witches and followers of Satan. Scissor statements (from what I can tell) are similar in that they're a tactic used to split up a coalition, but different in that they're not inventing conflict. Once people have sorted themselves into opposing groups, it doesn't seem to make much difference to the outcome (conflict), but equating the two is a mistake. You lose something real if you ally yourself with someone you're not value-aligned with; you don't lose something real if you ally yourself with someone you are value-aligned with but mistakenly think is your enemy. In the first case, the power held by people who share your values weakens, because another group that wants to destroy you now has more power.

If two groups form a coalition, and group_A values "biscuits for all" while group_B values "cookies for all," and someone tries to start a fight between them based on this language difference, it would be tragic for them to fight, because it should be obvious that what they want is the same thing; they're just using different language to talk about it. And if they team up, group_A won't be tempted to deny group_B cookies, because deep down they value cookies for all, including group_B. It's baked into their decision-making process.

(And if they decide that what they want to spend all their time doing is arguing over whether they should call their baked food product "cookies" or "biscuits," then what they actually value is arguing about pedantry, not "cookies for all.")

But in a counterexample, if group_A values "biscuits for all" and group_B values "all biscuits for group_B," then group_B will find strategies that result in biscuits for group_B and not group_A very available and easy to think of. If someone is having trouble imagining this, that may be because it's difficult to imagine someone wanting the biscuits only for themselves, so they assume the other group wouldn't defect: "biscuits for all? What's so controversial about that?" Except group_B fundamentally doesn't want group_A getting their biscuits, so any attempt at cooperation is going to be a mess: group_A has to keep double-checking to make sure group_B is really cooperating, because defecting is so intuitive to group_B that they'll have trouble avoiding it. And so giving group_B power is like giving someone power when you know they're later going to use it to hurt you and take your biscuits.

And group_B will, because they value group_B having all the biscuits, and have a hard time imagining that anyone would actually want everyone to have biscuits, unless they're lying or virtue signalling or something. And they'll push and push, because it'll seem like you're just faking.
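
As a toy illustration of why this structure is unstable (my own sketch, with invented payoffs; group_A and group_B as above):

```python
# Toy payoff model (my construction, not from the thread) of the
# "biscuits for all" vs. "all biscuits for group_B" scenario.

def payoff_A(split):
    # group_A values biscuits for all: it counts everyone's biscuits.
    return split["A"] + split["B"]

def payoff_B(split):
    # group_B values biscuits for group_B: it counts only its own.
    return split["B"]

cooperate = {"A": 1, "B": 1}  # split the batch evenly
defect    = {"A": 0, "B": 2}  # group_B takes everything

# From group_B's perspective, defection strictly dominates:
assert payoff_B(defect) > payoff_B(cooperate)

# From group_A's value system, the two outcomes score the same in
# total biscuits, which is part of why group_A struggles to imagine
# anyone preferring defection at all.
assert payoff_A(defect) == payoff_A(cooperate)
```

The asymmetry is the point: group_A's own values make defection look pointless, so it systematically underestimates group_B's incentive to defect.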

...

I find the way people respond to scissor statements ("don't bring that up, it's a scissor statement/divisive!") benefits only the status quo. And if the status quo benefits some group of people, then of course that group is going to eschew divisiveness. 

...

To bring it back to the Spanish Civil War: the communists were willing to ally themselves with big businesses, businesses that were also funding the fascists. They may have told themselves it was a means to an end, and for all I know (my knowledge of the Spanish Civil War is limited to a couple of books), the communists may have been planning to betray those big business interests in the end. But in the meantime, they advanced the causes of those business interests and undermined the people who stood against everything the fascists fought for. It's difficult to say what would've happened if the anarchists had tried a gambit to force the hand of big business to pick a side (communist or fascist), or had simply ignored the communists' demands. But big business interests were more supportive of Franco winning (because he was good for business), and their demands of the communists in exchange for money weakened the communists' position. And because the communists twisted the arms of the anarchists, and the anarchists went along with it, this weakened their position too. In the end, the only groups that benefited from that sacrifice were big business interests and Franco's fascists.

...

whether the deal can plausibly be kept secret.

That's a crapshoot, especially in the modern day. Creating situations where groups need to keep secrets in order to function is the kind of strategy Julian Assange used to cripple government efficiency. The correct tactic is to keep as few secrets from your allies as you can, because if you're actually allies, then you'll benefit from the shared information. 

This is about a disagreement so heinous that you can't countenance others living according to a different belief than your own.


Beliefs and values aren't just clothes we wear: we act on them and live by them. (And don't mistake me as talking about what people say their values are rather than what they act on. Someone can say they value "liberation for all," for example, but in practice behave in accordance with the value "might makes right." Even if someone feels bad about it, if that's what they're acting out, over and over again, then that's their revealed preference. In my model, what people do in practice, and their intent, are what's worth tracking.) So it's reasonable to assume that someone who holds particularly heinous beliefs and values acts on those beliefs and values.

I read JMH as arguing for a humility that sometimes looks like deferring to the social norm

Why should that particular humility be privileged? In choosing to privilege deference to a social norm, or humility, over opposing $heinous_thing, one is saying that the {sense of humility|social norm} is more important than preventing $heinous_thing, and that is itself a value judgment.

I suspect they'd still want to live their life according to their best (flawed) judgment... just with an ever-present awareness that they are almost certainly wrong about some of it, and possibly wrong in monstrous ways.

If you think your judgment is wrong, you always have the option to learn more and improve it. Being so afraid of being wrong that you refuse to act is a kind of trap, and I don't think people act that way in the rest of their lives. If you're wiring an electrical system for your house, and you have an ever-present awareness that you're almost certainly wrong about some of it, you're not going to keep doing what you're doing. You'll crack open a textbook, because dying of electrocution or setting your house on fire is an especially bad outcome, and one you sincerely care about avoiding. Likewise, if you care about some moral value, if it feels real to you, then you'll act on it.

I think it can be both, but I don't have the sense that something being a scissor statement means one should automatically ignore the statement itself and strike solely at the person making it. Scissor statement or not, if a law is grossly unjust, then resist it.

Scissor statements reveal pre-existing differences in values; they don't create them. There really are people out there whose values result in them doing terrible things.

That's an artificially narrow example. You can have...

a good person with good methodology
a good person with bad methodology
a bad person with good methodology
a bad person with bad methodology

A question to ask is, when someone aligns an AGI with some approximation of "good values," whose approximation are we using? 

If you follow a law that is grossly unjust because it's a law, or follow a social convention that is grossly unjust because it's a social convention, you are actively choosing to contribute to that injustice. Sticking your neck out and going against the grain based on your best judgment is (I thought) a kind of rationalist virtue.

A while ago, someone encouraged me to read Homage to Catalonia, citing it as a book that'd dissuade people from revolutionary justice, and in particular, from the notion that they should carefully guard who they work with, like a blue working with a green.

In fact, I found the book had the opposite effect. It describes what amounts to a three-way war between anarchists, communists, and fascists during the Spanish Civil War. During that war, foreign communists and capitalists both benefited from continuing top-down, company-owned business models in certain countries, and so strongly discouraged a Spanish workers' revolution, an agenda the Spanish Stalinists cooperated with in order to continue receiving funding. The anarchists wanted that revolution but were willing to team up with the Stalinist bloc against the fascists, it seems, because they couldn't fight both, and they saw the fascists as the greater threat. The Stalinists (who did not want revolution) took advantage of the anarchists' comparatively weaker position to neuter them, rolling back worker-controlled factories and locally run governments, which were a threat to foreign interests.

The Stalinist bloc would frame "winning the war" as a means to get the anarchists to surrender all their hard-won progress, saying, "well, we can fight over worker-owned factories, or we can fight together against the fascists," essentially holding the country hostage, using the fascists as a threat to get what they wanted. And in the end, they both lost to Franco.

This example illustrates a primary reason not to work with people who aren't value-aligned: they'll undermine your position using the excuse of "unity against the enemy." Once you give ground on local worker-led patrols instead of police, the non-value-aligned group will start pressing for a return to centralized government, imperially-owned factories, and worker exploitation. Give them an inch, and they'll take a mile.

Moloch says, "throw what you love into the fire and I will grant you victory," but any such bargain is made under false pretenses. In making the deal, you've already lost. 

My model is that a blue and a green working together would constantly undermine each other's cause, and when that cause is life and death, this is tantamount to working with someone towards your own end. Some things matter enough that you shouldn't capitulate, where capitulation is the admission that you don't really hold the values you claim to hold. It would be like saying you believe in gravity while stepping off a cliff with the expectation that you'll float.

From what I understand, some of their funders were convinced MIRI would never pay out, and were quite upset to learn they did. For example, one of the people quoted in that open letter was Paul Crowley, a long time supporter who has donated almost $50k. Several donors were so upset they staged a protest.

I still think all of this, even if it's true (to any significant extent), isn't an overwhelming reason not to support MIRI (at all), given that they do seem to be doing good technical work.

I disagree. I've written a bit about why here.

I know a few people involved, and I trust that they're not lying, especially given that some of my own experiences overlap. I lived in the Bay for a couple years, and saw how people acted, so I'm fairly confident that the main claims in the open letter are true.

I've written a bit myself about why the payout was so bad here, which the author of the open letter appears to reference.

MIRI wrote this paper: https://arxiv.org/abs/1710.05060. The paper is pretty clear that paying out to extortion is bad decision theory. I agree with the paper's reasoning, and independently came to a similar conclusion myself. MIRI paying out means MIRI isn't willing to put their money where their mouth is, and the ability to actually follow through on what you believe is necessary when doing high-stakes work.
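
Here's a toy expected-value sketch of that argument, with invented numbers, just to show the shape of the reasoning: an agent known to pay gets targeted far more often than an agent credibly known to refuse.

```python
# Toy numbers (mine, purely illustrative) comparing a policy of paying
# extortionists against a credible policy of never paying.
P_TARGETED_PAYER   = 0.5    # assumed rate at which known payers are targeted
P_TARGETED_REFUSER = 0.05   # assumed rate for credible refusers
COST_OF_PAYING     = 100    # cost of the ransom when targeted
COST_OF_REFUSING   = 300    # damage taken when a threat is carried out

ev_payer   = P_TARGETED_PAYER   * COST_OF_PAYING    # 50.0
ev_refuser = P_TARGETED_REFUSER * COST_OF_REFUSING  # 15.0

# Refusing costs more per incident but invites far fewer incidents,
# so the refuser's policy wins in expectation.
assert ev_refuser < ev_payer
```

The whole argument rides on evaluating the policy rather than the single decision, which is the FDT-style move the paper defends.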

Like, a lot of MIRI's research is built around this claim about decision theory. It's fundamental to MIRI's approach. If one buys that FDT is correct, then MIRI's failure to consistently implement it here undermines one's trust in them as an institution. They folded like wet cardboard. If one doesn't buy FDT, or generally thinks paying out to extortionists isn't a big deal, then it won't seem like a big deal that they did. But a big part of the draw of rationalist spaces and MIRI is that they claim to take ideas seriously. This behaviour indicates (to me) that they don't, not where it counts.

As for Ziz, from what I understand she's been the victim of a rather vicious defamation campaign, chiefly organized by a determined stalker who is angry with her for not sleeping with him. If you reach out to some rationalist Discord mods, you should be able to get hold of sufficient evidence to back the claims in that post.
