Doing discourse better: Stuff I wish I knew

Logic and reason indicate the robustness of a claim, but you can have lots of robust, mutually contradictory claims. A robust claim is one that contradicts neither itself nor the other claims it associates with. Robustness is only half the picture, though; the other half is how well a claim resonates with people. Resonance indicates how attractive a claim is, whether through authority, consensus, scarcity, poetry, or whatever else.

Survive and spread through robustness and resonance. That's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but the fact that two people who both say this so often hold contradictory claims indicates that their real metric is much weaker than truth. I'll posit that the real metric in such scenarios is robustness.

Not all disagreements will separate cleanly into true/false categorizations. Gödel proved that one.

The rationalist community's location problem

That was a fascinating post about the relationship with Berkeley. I wonder how the situation has changed in the last two years since people became more cognizant of the problem. Note that some of the comments there refute your idea that the community never had enough people for multiple hubs. NYC and Melbourne in particular seemed to have plenty of people, but those communities dissipated after core members repeatedly got recruited by Berkeley.

It seems like Berkeley was overtly trying to eat other communities, but EA did it just by being better at a thing many Rationalists hoped the Rationality Community would be. The "competition" with EA seems healthy, so perhaps that one should be encouraged more explicitly.

I'll note that for all the criticisms leveled at Berkeley in that post, I get the same impression of LW that Evan_Gaensbauer had of Berkeley. The sensible posts here (per my arrogant perspective) are much more life- and community-oriented. Jan_Kulveit in your link gave a tidy explanation of why that is, and I think it's close to spot-on. Your observations about practical plans for secondary hubs are exactly what I'd expect.

Surviving Petrov Day

Your understanding is correct. Your Petrov Day strategy is the only thing I believe causes harm in your post.

I'll see if I can figure out what exactly was frustrating about the post, but I can't make promises on my ability to introspect to that level or to remember the origins of my feelings from last night.

These are the things I can say with high certainty:

  • I read this post more like a list of serious suggestions interspersed with playful bits. Minus the opener and the Information Flow section, the contents here are all legit.
  • If you put way more puns into the section contents, it would feel less frustrating.

This is a best guess as to why the post feels frustrating:

  • It feels like you draw a sharp delineation between playful bits and serious suggestions. The opener is all playful. The section headers are all serious. Minus the Information Flow section, the section contents are all serious. The "Metaphor For" lines are all playful.
  • The sharp delineation makes it feel like the playful bits were tossed in to defend the serious suggestions against critical thinking.

This is a weak best guess, which I could probably improve on if I spent an hour or so thinking about it:

  • I'd guess that puns would help because they would blur the line between serious suggestions and playful bits. That would force the reader to think harder about whether what you're saying is valid, so the post would no longer feel like it's defending itself against critical thinking.

Surviving Petrov Day

I did -2. It wasn't punishment, and definitely not for saying "social penalty." I think social penalties are perfectly fine approaches for some problems, particularly ones where fuzzy coordination yields value that outweighs the complexity it entails.

I do feel frustration, but definitely not anger. The frustration is over the tenuous connection, which in my mind leads to a false sense of understanding.

I feel relatively new to LW, so I'm still trying to figure out when to give a -1 and when to give a -2. I felt that the tenuous connection, in combination with the net-negative advice, warranted a -2.

EDIT: I undid my -2 in light of this comment thread.

Surviving Petrov Day

Do you think it makes more sense for you to punish the perpetrator after you're dead or after they're dead?

Replication is a decent strategy until secrets get involved, and this world runs on a lot of secrets that people will not back up. Even when it comes to publicly accessible things, there's a very thick and very ambiguous line between private data and public data. See, for example, the EU's right to be forgotten. This is a minor issue post-nuke, but it means gathering support for a backup effort will be difficult.

Access control is a decent strategy once you manage to set it up and figure out how to appropriately distribute trust. Trusting "your friends" is not a good strategy for exactly the reason evident today: even if they're benign, they can be compromised.

Punishing attackers just flat out doesn't work. That random person in China doesn't care if the US government says hacking is bad. Hackers don't care if selling credit card data is bad. Not even academic researchers care that reverse-engineering is illegal. You're not going to convince the world that your punishments are good, and everyone unconvinced will let it slide. All you'll do is alienate the people most capable of identifying flaws in your strategy. There are a lot of very intelligent people out there that care more about their freedom to explore and act than about net utility. They will build out the plans and infrastructure necessary for the real baddies to do their work. Please do not alienate them by telling them that their moral sensibilities are bad.

Some lessons from a decade in software security.

I like your backup strategies for LessWrong. The connection to nukes is tenuous. I think your Petrov Day strategy does more harm than good.

The rationalist community's location problem

In light of some of the comments on the supposed impossibility of relocating a hub, I figured I'd suggest a strategy. This post says nothing about the optimality of creating or relocating a hub; it only suggests a method for doing so. I'm obviously not an experienced hub relocator in real life, but evidently I'll play one on the internet for the sake of discussion. Please read what follows as an invitation to brainstorm.

Make the new location a natural choice.

  • Host events in the new location. If people feel a desire to spend their holidays and time off in the new location, that's a great start.
  • Pick a good common hotel. For people that visit regularly, this hotel should feel almost like a second home. Rationalists can bump into each other in the hotel, and they can carpool or get meals together.
  • Identify people that can give an "open invitation" for others to visit any time. These people are basically the ones openly ready to make friends with new rationalists. The hope is that, eventually, rationalists start coming into the area to meet up with friends.

Create opportunities to move.

  • Invite rationalists to interview for jobs in the new location. This would directly target people that choose to move for work reasons.
  • Make the new location more homey for people that have had trouble adjusting to their current location. I've known people (especially married people) to move for social and comfort reasons, particularly ones that have had difficulty making friends in a new location. Make it easy to socialize in the new location, and make leisure-time activities more accessible, either with good information or social events.
  • Keep track of cheap/shared housing opportunities near the new location. Sometimes people really do move to save money. If people know where the cheap housing is, that's one less excuse not to move. Such a list might even encourage people to get a second home in the new location.
  • Create guides to help people discuss remote work options with managers & HR.

Reinforce every move.

  • Make sure the work situation is stable: support people career-wise in the area. I don't have ideas on how to do this, but if it's a common reason for people moving, then it should be a common reason for people staying.
  • Make the new location homey. Keep track of good leisure-time activities and locations, help stabilize travel by keeping track of transit options, and make sure people moving have chances to socialize and make friends in the area.
  • Keep track of housing opportunities for people to move to increasingly-stable locations. For people that want cheap, keep track of cheap housing. For people that want social, keep track of group housing. For people that want a family, keep track of good neighborhoods and school districts.

Use every move to encourage further relocation.

  • Keep a counter of the number of people that have moved into the area (but not the number of people that have left). There's something oddly satisfying about making/seeing numbers go up.
  • Help new movers host/support events and get started socializing with incoming visitors. Try to get them to do for others the same things that others did to encourage them to move.
  • Encourage people in the area to spread out work-wise to create more interview opportunities for rationalists not in the area.

The rationalist community's location problem

We could pick a second hub instead of a new first hub. We don't need consensus or even a plurality. We just need critical mass in a location other than Berkeley. Preferably, that new location would cater to a group that's not well served by Berkeley, so we can get more total people into a hub. If we're being careful, we should worry about Berkeley losing its critical mass as a result of the second hub; however, I don't think that's a likely outcome.

There's some loss from splitting people across two hubs rather than getting everyone into one hub. However, I suspect indecision is causing far more long-term loss than the split would. I would recommend first trying to get more people into some hub, then worrying about consolidation later.

What counts as defection?

Understood. I do think it's significant though (and worth pointing out) that a much simpler definition yields all of the same interesting consequences. I didn't intend to just disagree for the sake of getting clearer terminology. I wanted to point out that there seems to be a simpler path to the same answers, and that simpler path provides a new concept that seems to be quite useful.

What counts as defection?

This can turn into a very long discussion. I'm okay with that, but let me know if you're not, so I can probe only the points that are likely to resolve. I'll raise the contentious points regardless, but I don't want to draw focus to them if there's little motivation to discuss them in depth.

I agree that a split in terminology is warranted, and that "defect" and "cooperate" are poor choices. How about this:

  • Coalition members may form a consensus on the coalition strategy. Members may then either follow or violate that consensus strategy.
  • Members of a coalition may benefit the coalition or hurt the coalition.
  • Benefiting the coalition means raising its payoff; hurting the coalition means reducing its payoff. Both are independent of consensus, and a coalition may form a consensus strategy regardless of whether that strategy is optimal. (A rough formal sketch follows this list.)
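
A rough formal sketch of those two axes (my notation, nothing standard, and I'm picking the consensus profile as the baseline for "raising" and "reducing" the payoff): write $u_C$ for the coalition's payoff, $s^*$ for the consensus strategy profile, and $s_i$ for the strategy member $i$ actually plays. Then:

$$\text{follows}(i) \iff s_i = s^*_i \qquad\qquad \text{violates}(i) \iff s_i \neq s^*_i$$

$$\text{benefits}(i) \iff u_C(s_i, s^*_{-i}) > u_C(s^*) \qquad\qquad \text{hurts}(i) \iff u_C(s_i, s^*_{-i}) < u_C(s^*)$$

The follow/violate axis only references $s^*$, and the benefit/hurt axis only references $u_C$, so all four combinations are possible. That independence is the point of splitting the terminology.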

Contentious points:

  • I expect that treating utility so generally will lead to paradoxes, particularly when utility functions are defined in terms of other utility functions. That case is extremely important when strategies take trust into account, so I expect such a general notion of utility to produce paradoxes when used to reason about trust.
  • "Utility is not a resource." I think this is a useful distinction when trying to clarify goals, but not a useful distinction when trying to make decisions given a set of goals. In particular, once the payoff tables are defined for a game, the goals must already have been defined, and so utility can be treated as a resource in that game.

What counts as defection?

The "expected coalition strategy" is, let's say, "no one gets any". By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?

In my view, yes. If we agreed that no one should get any resources, then it's a violation for you to get resources or for you to deceive me into getting resources.

I think the difference is in how the two of us view a strategy. In my view, it's perfectly acceptable for the coalition strategy to include a clause like "it's okay to do X if it's a Pareto improvement for our coalition." If that's part of the coalition strategy we agree to, then Pareto improvements are never defections. If our coalition strategy does exclude unilateral actions that are Pareto improvements, then it is a defection to take such actions.
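
As a toy illustration (the numbers are invented purely to make the point):

$$u(\text{both stick to "no one gets any"}) = (0, 0) \qquad u(\text{one of us allocates evenly}) = (1, 1)$$

The deviation is a strict Pareto improvement over the agreed outcome, yet it violates the consensus, so under my definition it's a defection unless the consensus strategy already contains the "Pareto improvements are okay" clause.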

Another question: how does this idea differ from the core in cooperative game theory?

I'm not a mathematician or an economist, my knowledge on this hasn't been tested, and I just discovered the concept from your reply. Please read the following with a lot of skepticism because I don't know how correct it is.

Some type differences:

  • The core is a set of allocations. I'm going to call its elements core allocations so it's less confusing.
  • A defection is a change in strategy (per both of our definitions).

As far as the relationship between the two:

  • A core allocation satisfies a particular robustness property: it's stable under coalition refinements. A "coalition refinement" here is an operation in which a coalition is replaced by a partition of that coalition. Because core allocations are stable under coalition refinements, a coalition will not partition itself for rational reasons. So if you have coalitions {A, B} and {C}, then every core allocation is robust against {A, B} splitting up into {A}, {B}.
  • Defections (per my definition) don't deal strictly with coalition refinements. If one member leaves a coalition to join another, that's still a defection. In this scenario, {A, B}, {C} is replaced with {A}, {B, C}. Core allocations don't deal with this scenario since {A}, {B, C} is not a refinement of {A, B}, {C}. As a result, core allocations are not necessarily robust to defections.

I could be wrong about core allocations being about only refinements. I think I'm safe in saying though that core allocations are robust against some (maybe all) defections.
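
For reference, here is the textbook definition of the core as I understand it (the same skepticism caveat applies, since I only just looked this up): for a cooperative game with player set $N$ and value function $v$, an allocation $x$ is in the core iff

$$\sum_{i \in N} x_i = v(N) \quad \text{and} \quad \forall S \subseteq N:\ \sum_{i \in S} x_i \ge v(S),$$

where $v(S)$ is the payoff coalition $S$ can guarantee for itself by acting alone. Whether that quantifier over $S$ should be read as covering only refinements of the current coalition structure, or arbitrary regroupings like {A}, {B, C}, is exactly the part I'm unsure about above.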
