Logic and reason indicate the robustness of a claim: a robust claim is one that contradicts neither itself nor the other claims it associates with. But you can have plenty of robust, mutually contradictory claims. The other half is how well a claim resonates with people. Resonance indicates how attractive a claim is, whether through authority, consensus, scarcity, poetry, or whatever else.
Survive and spread through robustness and resonance: that's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but it's so common for two people who both say that to hold contradictory claims that their real metric must be much weaker than truth. I'll posit that the real metric in such scenarios is robustness.
Not all disagreements separate cleanly into true/false categorizations. Gödel proved that one.
That was a fascinating post about the relationship with Berkeley. I wonder how the situation has changed in the two years since people became more cognizant of the problem. Note that some of the comments there refute your idea that the community never had enough people for multiple hubs. NYC and Melbourne in particular seemed to have plenty of people, but those communities dissipated after core members were repeatedly recruited to Berkeley.
It seems like Berkeley was overtly trying to eat other communities, whereas EA did it just by being better at the thing many Rationalists hoped the Rationality Community would be. The "competition" with EA seems healthy, so perhaps that one should be encouraged more explicitly.
I'll note that for all the criticisms leveled at Berkeley in that post, I get the same impression of LW that Evan_Gaensbauer had of Berkeley. The sensible posts here (per my arrogant perspective) are much more life- and community-oriented. Jan_Kulveit in your link gave a tidy explanation of why that is, and I think it's close to spot-on. Your observations about practical plans for secondary hubs are exactly what I'd expect.
Your understanding is correct. Your Petrov Day strategy is the only thing in your post that I believe causes harm.
I'll see if I can figure out what exactly was frustrating about the post, but I can't make promises about my ability to introspect at that level, or to remember the origins of my feelings from last night.
These are the things I can say with high certainty:
This is a best guess as to why the post feels frustrating:
This is a weak best guess, which I could probably improve on if I spent an hour or so thinking about it:
I gave the -2. It wasn't punishment, and definitely not for saying "social penalty". I think social penalties are a perfectly fine approach to some problems, particularly ones where fuzzy coordination yields more value than the complexity it entails.
I do feel frustration, but definitely not anger. The frustration is over the tenuous connection, which in my mind leads to a false sense of understanding.
I feel relatively new to LW so I'm still trying to figure out when I give a -1 and when I give a -2. I felt that the tenuous connection in combination with the net-negative advice warranted a -2.
EDIT: I undid my -2 in light of this comment thread.
Do you think it makes more sense for you to punish the perpetrator after you're dead or after they're dead?
Replication is a decent strategy until secrets get involved, and this world runs on a lot of secrets that people will not back up. Even when it comes to publicly accessible things, there's a very thick and very ambiguous line between private data and public data. See, for example, the EU's right to be forgotten. This is a minor issue post-nuke, but it means gathering support for a backup effort will be difficult.
Access control is a decent strategy once you manage to set it up and figure out how to appropriately distribute trust. Trusting "your friends" is not a good strategy for exactly the reason evident today: even if they're benign, they can be compromised.
Punishing attackers just flat-out doesn't work. That random person in China doesn't care if the US government says hacking is bad. Hackers don't care if you say selling credit card data is bad. Not even academic researchers care that reverse engineering is illegal. You're not going to convince the world that your punishments are good, and everyone unconvinced will let it slide. All you'll do is alienate the people most capable of identifying flaws in your strategy. There are a lot of very intelligent people out there who care more about their freedom to explore and act than about net utility. They will build out the plans and infrastructure necessary for the real baddies to do their work. Please don't alienate them by telling them their moral sensibilities are bad.
Those are some lessons from a decade in software security.
I like your backup strategies for LessWrong. The connection to nukes is tenuous. I think your Petrov Day strategy does more harm than good.
In light of some of the comments on the supposed impossibility of relocating a hub, I figured I'd suggest a strategy. This post says nothing about whether creating or relocating a hub is optimal; it only suggests a method for doing so. I'm obviously not an experienced hub relocator in real life, but evidently I'll play one on the internet for the sake of discussion. Please read what follows as an invitation to brainstorm.
We could pick a second hub instead of a new first hub. We don't need consensus or even a plurality; we just need critical mass in a location other than Berkeley. Preferably that new location would cater to a group that isn't well served by Berkeley, so we can get more total people into a hub. If we're being careful, we should worry about Berkeley losing its critical mass as a result of the second hub; however, I don't think that's a likely outcome.
There's some loss from splitting people across two hubs rather than getting everyone into one. However, I suspect indecision is causing far more long-term loss than the split would. I'd recommend first getting more people into some hub, then worrying about consolidation later.
Understood. I do think it's significant though (and worth pointing out) that a much simpler definition yields all of the same interesting consequences. I didn't intend to just disagree for the sake of getting clearer terminology. I wanted to point out that there seems to be a simpler path to the same answers, and that simpler path provides a new concept that seems to be quite useful.
This could turn into a very long discussion. I'm okay with that, but let me know if you're not, so I can probe only the points that are likely to resolve. I'll raise the contentious points regardless, but I don't want to draw focus to them if there's little motivation to discuss them in depth.
I agree that a split in terminology is warranted, and that "defect" and "cooperate" are poor choices. How about this:
The "expected coalition strategy" is, let's say, "no one gets any". By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?
In my view, yes. If we agreed that no one should get any resources, then it's a violation for you to get resources or for you to deceive me into getting resources.
I think the difference is in how the two of us view a strategy. In my view, it's perfectly acceptable for the coalition strategy to include a clause like "it's okay to do X if it's a Pareto improvement for our coalition." If that clause is part of the coalition strategy we agree to, then Pareto improvements are never defections. If our coalition strategy does exclude unilateral actions that are Pareto improvements, then taking such actions is a defection.
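Here's a minimal sketch of how I'm thinking about it; all of the names and "actions" below are made up for illustration, not anything from your post:

```python
# A defection is any action the agreed-upon coalition strategy does not permit.

def is_defection(action, coalition_strategy):
    """An action is a defection iff the agreed strategy does not permit it."""
    return not coalition_strategy(action)

def is_pareto_improvement(action):
    # Stand-in: treat the even allocation as making every member better off.
    return action == "even allocation"

# Strategy A: "no one gets any", with no extra clauses.
def strategy_a(action):
    return action == "no one gets any"

# Strategy B: the same baseline, plus a clause permitting any action that is
# a Pareto improvement for the coalition.
def strategy_b(action):
    return action == "no one gets any" or is_pareto_improvement(action)

print(is_defection("even allocation", strategy_a))  # True: it violates what we agreed to
print(is_defection("even allocation", strategy_b))  # False: the clause permits it
```

The point is just that "defection" is relative to whatever clauses we actually agreed to, not to the baseline outcome alone.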
Another question: how does this idea differ from the core in cooperative game theory?
I'm not a mathematician or an economist, my knowledge on this hasn't been tested, and I just discovered the concept from your reply. Please read the following with a lot of skepticism because I don't know how correct it is.
Some type differences:
As far as the relationship between the two:
I could be wrong about core allocations being only about refinements. I think I'm safe in saying, though, that core allocations are robust against some (maybe all) defections.
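For reference, and subject to the same caveat that I only just learned the concept, here's the textbook definition of the core as I understand it. For a cooperative game (N, v) with transferable utility, an allocation x is in the core when

```latex
% Core of a cooperative game (N, v): the allocation x is efficient and
% no coalition S can do better for itself by splitting off.
\[
\sum_{i \in N} x_i = v(N)
\qquad \text{and} \qquad
\sum_{i \in S} x_i \;\ge\; v(S) \quad \text{for every coalition } S \subseteq N.
\]
```

That second condition is roughly why I read core allocations as "robust against defections": no coalition can improve on its core payoff by breaking away.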