Zvi

Comments

Do you fear the rock or the hard place?

Building off Raemon's review: this feels like an attempt to make a 101-style point that everyone needs to understand if they don't already (not as rationalists, but as people in general), but it seems to me that it fails, because those reading it will fall into the categories of (1) those who already got it and (2) those who need to get it but won't.

The Costs of Reliability

This is a very important point to have intuitively integrated into one's model, and I charge a huge premium for activities that require this kind of reliability. I hope it makes the cut.

I also note that someone needs to write The Costs of Unreliability and I authorize reminding me in 3 months that I need to do this.

Approval Extraction Advertised as Production

This was a great idea, but I think the spreadsheet fails on two fronts. First, it measures the end product rather than the founders and how they operate and attempt to scale, which I believe is the primary thing Benquo is talking about here. Second, if I ranked these companies myself, I don't think there would be much correlation with these rankings.

In the examples from the comment, judging purely on the nature of the product since I don't know the founders or early histories much: I'd have had Twitch as positive while I had Doordash as negative, I'd agree with Dropbox and Gusto, and Scale is a weird case where we think the product is bad if it is real, but that's orthogonal to the main point here.

Looking at the S&P 500 I see the same thing. Amazon at 0 seems insane to me (I'd be +lots), and McDonald's at -2 even more so, especially in its early days (The Founder is a very good movie about its origins).

Partial summary of debate with Benquo and Jessicata [pt 1]

As someone who was involved in the conversations, and who cares about and frequently focuses on such things, this continues to feel important to me. It seems like one of the best examples of an actual attempt to do the thing being done, which is itself (at least partly) an example of the thing everyone is trying to figure out how to do.

What I can't tell is whether anyone who wasn't involved is able to extract the value. So in a sense, I "trust the vote" on this so long as people read it first, or at least give it a chance: if that doesn't convince them it's worthwhile, then it didn't work, whereas if it does convince them, it's great and we should include it.

Make more land

This idea seems obviously correct, all the responses to objections seem correct, and the chance of this happening any time soon is about epsilon. 

In some sense I wish the reasons it will never happen were less obvious than they are, so it would be a better example of our inability to do things that are obviously correct. 

The question is: how much does this add to the collection? Do we want to use a slot on practical good ideas that we could totally do if we could do things, and used to do? I'm not sure.

Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on?

One factor no one mentions here is the changing nature of our ability to coordinate at all. If our ability to coordinate in general is breaking down rapidly, which seems at least highly plausible, then that will likely carry over to AGI, and until that reverses it will continuously make coordination on AGI harder, same as everything else.

In general, this post and the answers felt strangely non-"messy" in that sense, although there's also something to be said for the abstract view. 

In terms of inclusion, I think it's a question that deserves more thought, but I didn't feel like the answers here (in OP and below) were enlightening enough to merit inclusion. 

Some Ways Coordination is Hard

I just reviewed the OP this post responds to, and it sounds like we're thinking along similar lines in many ways. I'd like to see a Big Book of Coordination at some point, and would hold both posts back until then, or, if people like both enough, include both.

Here is my offer: cherry-picking examples is bad, and worrying that someone cherry-picked them is also bad. Appearance of impropriety, register your experiments, other neat stuff like that. So if Raemon or someone else compiles a list of the concrete examples, I'll make at least an ordinary effort to do a post about them, intended to be similar to things like Simple Rules of Law.

The Schelling Choice is "Rabbit", not "Stag"

So I reread this post, found I hadn't commented... and got a strong desire to write a response post, until I realized I'd already written it, and it was even nominated. I'd be fine with including this if my response also gets included, but would be very worried about including this without the response.

In particular, I felt the need to emphasize the idea that Stag Hunts frame coordination problems as going against incentive gradients, and as being maximally fragile and punishing by default.

If even one person doesn't get with the program, for any reason, a Stag Hunt fails, and everyone reveals their choices at the same time. Everyone abstractly knows this is the ultimate nightmare scenario and not the baseline, but a lot of the time it gets treated (I believe) as the baseline, And That's Terrible.

I don't know exactly what to do about it, and introducing yet another framework/name seems expensive at this point. But I think the right baseline model of coordination is that various people get positive payoffs for an activity, payoffs that rise with the number of people who do it. A lot of the time this gets automatically modeled as a strict Stag Hunt instead, and people throw up their hands and say 'whelp, coordination hard, what you gonna do.'
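To make the payoff contrast concrete, here is a minimal Python sketch with made-up numbers (the function names and payoff values are illustrative, not from the post): in a strict Stag Hunt a cooperator gets paid only if everyone cooperates, while in the graded model each cooperator's payoff rises with the number of cooperators.

```python
def strict_stag_hunt_payoff(cooperators: int, n_players: int) -> float:
    """Payoff to a single cooperator: all-or-nothing coordination."""
    return 10.0 if cooperators == n_players else 0.0

def graded_payoff(cooperators: int, n_players: int) -> float:
    """Payoff to a single cooperator: rises with each extra cooperator."""
    return 10.0 * cooperators / n_players

for k in range(1, 6):
    print(f"{k}/5 cooperate: stag hunt pays {strict_stag_hunt_payoff(k, 5):.1f}, "
          f"graded pays {graded_payoff(k, 5):.1f}")
```

Under the strict model the marginal cooperator gains nothing until the very last person joins, which is the "maximally fragile and punishing" framing; under the graded model every additional participant improves everyone's payoff.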

What are the open problems in Human Rationality?

These are good lists of open problems, although, as Ben notes, they are bad lists if they are to be considered all the open problems. I don't think that is the fault of the post, and it's easy enough to make clear the lists are not meant to be complete.

This seems like a spot where a good list of open problems is a good idea, but here we're mostly going to be taking a few comments. I think that's still a reasonable use of space, but not exciting enough to think of this as important.

The Great Karma Reckoning

Going to take John's comment and this reckoning as a good occasion to say that while 10x is a large multiplier on top-level front-page posts, 1x is not a large multiplier, and to the extent that karma matters, comments are getting too large a share.
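To illustrate the arithmetic (all numbers made up; the 10x/1x multipliers are taken from the comment above, not a claim about the exact site mechanics): because comments vastly outnumber posts, even a 1x multiplier can leave comments with most of the total karma.

```python
def comment_karma_share(post_votes: int, comment_votes: int,
                        post_mult: float = 10.0,
                        comment_mult: float = 1.0) -> float:
    """Fraction of total karma that comes from comments, given multipliers."""
    post_karma = post_votes * post_mult
    comment_karma = comment_votes * comment_mult
    return comment_karma / (post_karma + comment_karma)

# Hypothetical site-wide totals: 100 post-votes vs. 5,000 comment-votes.
print(f"{comment_karma_share(100, 5_000):.0%}")  # 83% of karma from comments
```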
