Taran

Speaking of Stag Hunts

Fair enough!  My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.  

(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each other's posts "bullshit" without getting too worked up or disrupting the overall social order.  I've been in communities that worked that way, but it seemed to be a founder effect; I'm not sure how you'd create that norm in a group with a strong existing culture.)

Speaking of Stag Hunts

I want to reinforce the norm of pointing out fucky dynamics when they occur...

Calling this subthread part of a fucky dynamic is begging the question a bit, I think.

If I post something that's wrong, I'll get a lot of replies pushing back.  It'll be hard for me to write persuasive responses, since I'll have to work around the holes in my post and won't be able to engage the strongest counterarguments directly.  I'll face the exact quadrilemma you quoted, and if I don't admit my mistake, it'll be unpleasant for me!  But there's nothing fucky happening: that's just how it goes when you're wrong in a place where lots of bored people can see.

When the replies are arrant, bad-faith nonsense, it becomes fucky.  But the structure is the same either way: if you were reading a thread whose object level you knew nothing about, you wouldn't be able to tell whether you were looking at a good dynamic or a bad one.

So, calling this "fucky" is calling JenniferRM's post "bullshit".  Maybe that's your model of JenniferRM's post, in which case I guess I just wasted your time; sorry about that.  If not, I hope this was a helpful refinement.

Speaking of Stag Hunts

I expect that many of the people who are giving out party invites and job interviews are strongly influenced by LW.

The influence can't be too strong, or they'd be influenced by the zeitgeist's willingness to welcome pro-Leverage perspectives, right?  Or maybe you disagree with that characterization of LW-the-site?

Speaking of Stag Hunts

When it comes to the real-life consequences I think we're on the same page: I think it's plausible that they'd face consequences for speaking up and I don't think they're crazy to weigh it in their decision-making (I do note, for example, that none of the people who put their names on their positive Leverage accounts seem to live in California, except for the ones who still work there).  I am not that attached to any of these beliefs since all my data is second- and third-hand, but within those limitations I agree.

But again, the things they're worried about are not happening on Less Wrong.  Bringing up their plight here, in the context of curating Less Wrong, is not Lawful: it cannot help anybody think about Less Wrong, only hurt and distract.  If they need help, we can't help them by changing Less Wrong; we have to change the people who are giving out party invites and job interviews.

Speaking of Stag Hunts

But it sure is damning that they feel that way, and that I can't exactly tell them that they're wrong.

You could have, though.  You could have shown them the many highly upvoted personal accounts from former Leverage staff and other Leverage-adjacent people.  You could have pointed out that there aren't any positive personal Leverage accounts, any at all, that were downvoted on net.  0 and 1 are not probabilities, but the evidence here is extremely one-sided: the LW zeitgeist approves of positive personal accounts about Leverage.  It won't ostracize you for posting them.

But my guess is that this fear isn't about Less Wrong the forum at all; it's about their and your real-world social scene.  If that's true, then it makes a lot more sense for them to be worried (or so I infer; I don't live in California).  But it makes a lot less sense to bring it up here, in a discussion about changing LW culture: getting rid of the posts and posters you disapprove of won't make them go away in real life.  Talking about it here, as though it were an argument in any direction at all about LW standards, is just a non sequitur.

Zoe Curzi's Experience with Leverage Research

Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them.  Otherwise, how can you tell whether they're any good or not?

Whether the evidence that persuaded you is sharable or not doesn't affect this.  For example, you might have a prior that a new psychotherapy technique won't outperform a control because you've read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn't train anyone else to get the same results he did.  That's my prior, and I suspect it's Eliezer's, but if I wanted to convince you of it I'd have a tough time because there's not really a single crux, just those 30 different cases that slowly accumulated.  And yet, even though I can't share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won't outperform the control.

Geoff-in-Eliezer's-anecdote has not reached this point.  This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell whether the new CT is better or worse than the old one?  Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he's charted.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

They're suggesting that you should have written "...this is an accurate how-level description of things like..."  It's a minor point but I guess I agree.

How to think about and deal with OpenAI

A related discussion from 5 years ago: https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/openai-makes-humanity-less-safe

Cheap food causes cooperative ethics

Republican Rome is the example I know best, and...it sorta fits?

Rome fought a lot of wars, and they were usually pretty extractive: sometimes total wars in which the entire losing side was killed or enslaved, other times wars of conquest in which the losing states were basically left intact but made to give tribute (usually money and/or soldiers for the legions).  They definitely relied on captured foreigners to work their farms, especially in Sicily, where it was hard to escape, and they got so rich from tribute that they eliminated most taxes on citizens in the 160s BC.

It's not clear that Rome was short of food and slaves when it started those wars, though.  If anything, they sometimes had the opposite problem: around 50 BC so many farmers and farmers' sons were being recruited into the legions that Italian farmland wasn't being used well.  I think the popular consensus is that a lot of warfare and especially enslavement was a principal-agent issue: Roman generals were required by custom to split any captured booty with their soldiers, but were allowed to keep all the profits from slave-trading for themselves.  Enslaving a tribe of defeated Gauls was a great way to get rich, and you needed to be rich to advance in Roman politics.

To summarize, Roman warfare during the republic was definitely essential to Roman food security, but the Romans got into a lot more wars than you'd predict from that factor alone.

Clear exceptions to the rule include the Social War (basically an Italian civil war), the Third Punic War (eliminating the existential threat of Carthage), and some of Caesar's post-dictatorship adventures (civil war again).

Dominic Cummings : Regime Change #2: A plea to Silicon Valley

The original startup analogy might be a useful intuition pump here.  Most attempts to displace entrenched incumbents fail, even when those incumbents aren't good and ultimately are displaced.  The challengers aren't random in the monkeys-using-keyboard sense, but if you sample the space of challengers you will probably pick a loser.  This is especially true of the challengers who don't have a concrete, specific thesis of what their competitors are doing wrong and how they'll improve on it -- without that, VCs mostly won't even talk to you.  

But this isn't a general argument against startups, just an argument against your ability to figure out in advance which ones will work.  The standard solution, which I expect will apply to transhumanism as to everything else, is to try lots of different things, compare them, and keep the winners.  If you are upstream of that process, deciding which projects to fund, then you are out of luck: you are going to fund a bunch of losers, and you can't do anything about it.

If you can't do that, the other common strategy is to generate a detailed model of both the problem space and your proposed improvement, and use those models to iterate in hypothesis space instead of in real life.  Sometimes this is relatively straightforward: if you want the slaves to be free, you can issue a proclamation that frees them and have high confidence that they won't be slaves afterward (though note that the real plan was much more detailed than that, and didn't really work out as expected).  Other times it looks straightforward but isn't: sparrows are pests, but you can't improve your rice yields by getting rid of them.  Here, the plan does not even look straightforward to me: the Pentagon does a lot of different things, and some of them are existentially important to keep around.  If we draw one sample from the space of possible successors, as Cummings suggests, I don't think we'll get what we want.
