I think you're confusing rationality with plain self-interest. If you have something to protect, then it may be reasonable to sacrifice personal comfort, or even your life, to protect it.
Also, your comment implies that the only reason you'd fight for something other than yourself is out of "liking the idea of yourself as some sort of social justice warrior," as opposed to caring about something and believing you can win by applying some strategy. And saying you'd "ruin your life" implies a set of values by which a life would count as "ruined."
True zero-sum games are ridiculously rare in the real world. There's always some way to achieve mutual gains, even if it's just "avoid mutual losses."
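To make the distinction concrete, here's a toy sketch (all payoffs hypothetical): a game is zero-sum only if the players' payoffs always sum to a constant, and most real conflicts fail that test because some outcomes are worse for everyone.

```python
# Toy illustration: a game is constant-sum (zero-sum up to rescaling) only if
# payoffs always sum to the same value. All numbers here are hypothetical.

def is_constant_sum(payoffs):
    """payoffs: dict mapping (row_action, col_action) -> (row_payoff, col_payoff)."""
    sums = {a + b for a, b in payoffs.values()}
    return len(sums) == 1

# Matching pennies: strictly zero-sum -- one player's gain is the other's loss.
matching_pennies = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}

# A conflict with a "both lose" outcome: NOT constant-sum, so mutual gains
# (here, mutually avoiding escalation) are possible even between enemies.
brinkmanship = {
    ("escalate", "escalate"): (-10, -10),
    ("escalate", "back_off"): (2, -2),
    ("back_off", "escalate"): (-2, 2),
    ("back_off", "back_off"): (0, 0),
}

print(is_constant_sum(matching_pennies))  # True
print(is_constant_sum(brinkmanship))      # False
```

In the second matrix both players strictly prefer (back_off, back_off) to (escalate, escalate), which is exactly the "avoid mutual losses" kind of gain a pure zero-sum game can't contain.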
I disagree.
I think you're underestimating how deep value differences can be, and how those values play into everything a person does. Countries with nuclear weapons that have opposing interests are actively trying to destroy each other without destroying themselves in the process, and if you're curious about the failures of MAD, I'd suggest reading The Doomsday Machine, by Daniel Ellsberg. If that book is to b...
This is about a disagreement so heinous that you can't countenance others living according to a different belief than your own.
Beliefs and values aren't just clothes we wear -- we act on them, and live by them. (And don't confuse me for talking about what people say their values are, vs what they act on. Someone can say they value "liberation for all," for example, but in practice they behave in accordance with the value "might makes right." Even if someone feels bad about it, if that's what they're acting out, over and over again, then that's their ...
I think it can be both, but I don't have the sense that something being a scissor statement means one should automatically ignore the statement and strike solely at the person making it. Scissor statement or not, if a law is grossly unjust, then resist it.
Scissor statements reveal pre-existing differences in values, they don't create them. There really are people out there who have values that result in them doing terrible things.
That's an artificially narrow example. You can have...
a good person with good methodology
a good person with bad methodology
a bad person with good methodology
a bad person with bad methodology
A question to ask is, when someone aligns an AGI with some approximation of "good values," whose approximation are we using?
If you follow a law that is grossly unjust because it's a law, or follow a social convention that is grossly unjust because it is a social convention, you would be actively choosing to contribute to that injustice. Sticking out your neck & going against the grain based on your best judgment is (I thought) a kind of rationalist virtue.
A while ago, someone encouraged me to read Homage to Catalonia, citing it as a book that'd dissuade people from revolutionary justice. And in particular, dissuade people from the notion that they should carefully guard who they work with, like a blue working with a green.
In fact, I found the book had the opposite effect. It describes what amounts to a three-way war between anarchists, communists, and fascists during the Spanish Civil War. During that war, foreign communists and capitalists both benefited from continuing the top-down, company-owned business model...
From what I understand, some of their funders were convinced MIRI would never pay out, and were quite upset to learn they did. For example, one of the people quoted in that open letter was Paul Crowley, a long-time supporter who has donated almost $50k. Several donors were so upset they staged a protest.
I still think all of this, even if it's true (to any significant extent), isn't an overwhelming reason not to support MIRI (at all), given that they do seem to be doing good technical work.
I disagree. I've written a bit about why here.
I know a few people involved, and I trust that they're not lying, especially given that some of my own experiences overlap. I lived in the Bay for a couple years, and saw how people acted, so I'm fairly confident that the main claims in the open letter are true.
I've written myself a bit about why the payout was so bad here, which the author of the open letter appears to reference.
MIRI wrote this paper: https://arxiv.org/abs/1710.05060 The paper is pretty clear that it's bad decision theory to pay out to extortionists. I agree with the paper's reasoning, ...
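The core decision-theoretic point can be sketched with a toy model (the numbers and the simple expected-value framing are mine, not the paper's): an extortionist only issues a threat if, given the target's policy, threatening beats staying quiet.

```python
# Toy model (hypothetical numbers): why a policy of paying invites extortion.
# The extortionist compares the expected profit of threatening against doing
# nothing (profit 0), given how likely the target is to pay.

def extortionist_threatens(p_target_pays, demand, threat_cost):
    # Expected profit: collect the demand if the target pays, eat the cost of
    # carrying out (or being caught issuing) the threat if they don't.
    expected_profit = p_target_pays * demand - (1 - p_target_pays) * threat_cost
    return expected_profit > 0

# A target known to always pay gets threatened...
print(extortionist_threatens(p_target_pays=1.0, demand=100, threat_cost=20))  # True
# ...while a target credibly committed to never paying does not.
print(extortionist_threatens(p_target_pays=0.0, demand=100, threat_cost=20))  # False
```

The point of a "never pay" commitment is to change the extortionist's calculation upstream, so the threat is never made in the first place; paying out, even once, moves you toward the first case.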
You're assuming that success will be apparent enough to the market that it will notice & respond to an AGI research group that is about to succeed. You may want to clarify for yourself why you believe you're in the world where a signal would be apparent (in general), able to be received, and visible to the market at all.
edit to add: my own assumption is that money would have ?? value after a successful FAI, so I don't think this is worth optimizing for, with regards to AGI stocks or whatever.
I've purposefully worked with smart-but-low-conscientious people on a few projects because I've found they'll come up with "lazy" but workable solutions for problems I would've approached with, "apply more grit." They also benefited, because I will sometimes course-correct when their "lazy" solution is simply not workable. The result was greater efficiency for both of us. This is merely anecdata, though, and I wonder if this holds up in other cases.
Yes. In many real-life scenarios, this is true. In small games where the rules are blatant, it's easier to tell if someone is breaking an agreement or trying to subvert you, so model games aren't necessarily indicative of real-world conditions. For a real-life example, look at the US's decision to fund religious groups to fight communists in the Middle East. If someone wants to destroy you, during the alliance they'll work secretly to subvert you, and after the all...