Against Rationalization II

I object to the term "non-magical".  Bribes and intimidation are not magic.

The most obvious conspiracy, what I would consider the null hypothesis, involves one rich influential person who was worried about getting ratted on, one professional hitman, and one or two jailers who were willing to take bribes.  And from the perspective of a jailer being offered a bribe, with vague threats if he refuses, someone who has gone yachting with a bunch of highly placed politicians is scary even if the jailer can't fill in the details of the threat.

None of that is magic.  None of it involves a vast web.  None of it requires extraordinary sophistication.  None of it requires implausible levels of loyalty to an org chart (unless you're going to argue that the very existence of hitmen is implausible).

Somehow, whenever I hear the phrase "conspiracy theory", out come the strawmen.

1.5 The officer within the CIA who investigated Epstein knew, but he got promoted based on how many agents he had and how useful they were, so he kept quiet.  Had he turned Epstein in, he'd have gotten some kudos for that, but it wouldn't have been as good a career move.  Had he reported up the chain, his commanding officer might have decided to sacrifice the original officer's career for greater justice, so he didn't do that either.  Whoever set up this incentive system didn't anticipate this particular scenario.

This is the thing about conspiracy theories: they usually don't require very much actual conspiring.

I suspect this reflects a lack of flexibility in Stockfish.  It was designed (trained?) for normal equal-forces chess and can't step back to think "How do I best work around this disadvantage I've been given?"  I suspect something like AlphaZero, given time to play itself at a disadvantage, would do better.  As would a true AGI.

That is literally true.  The old HPMOR site was just there to host the book as cleanly as possible.  LessWrong is a discussion forum with a lot of functionality.  You can host a book on a discussion forum, but it'll never be as smooth.

I propose that "I don't know" between fully co-operative rationalists is shorthand for "my knowledge is so weak that I expect you would find negative value in listening to it."  Note that this means whether I say "I don't know" depends in part on my model of you.

For example, if someone who rarely dabbles in medicine asks me if I think a cure works, and I've only skimmed the paper that proposes it, I might well explain how low the prior is and how shaky this sort of research tends to be.  If an expert asked me the same question, I'd say "I don't know" because they already know all that and are asking if I have any unique insight, which I don't.

Similarly, if someone asks how much a box weighs, and I'm 95% confident it's between 10 and 50 pounds, I'll say "I don't know", because that range is too wide to be useful for most purposes.  But if they follow up with "I'm thinking of shipping it FedEx, which has a 70-pound maximum", then I can answer "I'm 95% confident it's less than 70 pounds."  Though if they also say that the mafia will kill them if the shipment doesn't go smoothly, my answer is "the scale's in the bathroom", because now 95% confidence isn't good enough.
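The box example can be made quantitative.  Here's a minimal sketch, assuming the belief is roughly normal with its 95% interval at [10, 50] pounds (the normal model and its parameters are my illustrative assumptions, not something from the comment):

```python
# Model a belief about the box's weight as a normal distribution whose
# central 95% interval is [10, 50] pounds, then ask how confident that
# belief is about the 70-pound shipping limit.
# (The normal shape and the interval are illustrative assumptions.)
from statistics import NormalDist

mean = (10 + 50) / 2            # midpoint of the 95% interval
sigma = (50 - 10) / (2 * 1.96)  # half-width divided by the 1.96 z-score

belief = NormalDist(mu=mean, sigma=sigma)
p_under_limit = belief.cdf(70)  # probability the box weighs under 70 lb

print(f"P(weight < 70 lb) = {p_under_limit:.4f}")
```

Under these assumptions the same interval that's "I don't know" for general purposes puts the weight under 70 pounds with probability well above 95%, which is why the follow-up question changes the answer from "I don't know" to a usable one.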

This does mean that "I don't know" is a valid answer if my knowledge is so incompressible that it cannot be transmitted within your patience.  I don't have a good example for this, but I don't see it as a problem.

New York / East Coast


December 10th, 6:15pm
Bruno Walter Auditorium, 111 Amsterdam Ave (between 64th and 65th streets, near the Lincoln Center stop on the 1 train)
Registration: https://forms.gle/fAFLWFCLm1pS1Hra7
Facebook Event: https://facebook.com/events/557544469714744


December 9-12
Registration: https://rationalistmegameetup.com/
Facebook Event: https://www.facebook.com/events/1468622393619899

How big of a subunit were you able to get?  Last I looked at mail-order DNA, the affordable stuff was only a few hundred bases.

It is not clear to me what point you're making with your examples.  Have you written an object-level analysis of a failed LW conversation?  I realize that doing that in the straightforward way would antagonize a lot of people, and I recognize that might not be worth it, but maybe there's some clever workaround?  Perhaps you could create a role account for your dark side, post the sort of things you think are welcomed here but shouldn't be, confirm empirically that they are, then write a condemnation of those?

Less of a constraint if matters are arranged such that living in NYC is practical.  Expensive, of course, but no worse than the Bay.  It's a long-ish commute, but not too terrible by mostly-empty train (the full trains will be running the opposite direction).  Easier still if WFH a few days a week is supported.

This seems like a very confused way of thinking about earthquakes.  

In the past month, there were 4 earthquakes associated with the Juan de Fuca subduction zone.  All were around Richter 2.5 and no one cared.

While I suppose it's possible for a fault to produce both small and large earthquakes more often than intermediate ones, this strikes me as rather unlikely.  Generally, an analysis of earthquake risk should begin by deciding what magnitude of earthquake to care about, and then calculate probabilities for that magnitude.
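The usual way to connect small-quake counts to large-quake risk is the Gutenberg–Richter relation, log10 N(≥M) = a − b·M, under which frequency falls off smoothly with magnitude.  A rough sketch, where the observed rate (4 events/month at M ≥ 2.5, taken from the comment) and the b-value of 1.0 are illustrative assumptions:

```python
# Illustrative Gutenberg-Richter calculation: scale an observed rate of
# small earthquakes down to an expected rate of larger ones on the same
# fault.  The b-value and reference rate below are assumptions.

B_VALUE = 1.0     # typical Gutenberg-Richter slope (assumed)
RATE_M25 = 4.0    # observed events/month at magnitude >= 2.5

def rate_at_least(mag, ref_mag=2.5, ref_rate=RATE_M25, b=B_VALUE):
    """Expected rate of events with magnitude >= mag, scaled from the
    reference rate via N(>=M) proportional to 10**(-b*M)."""
    return ref_rate * 10 ** (-b * (mag - ref_mag))

# Pick the magnitude we actually care about, then compute its rate:
monthly_m6 = rate_at_least(6.0)
years_between = 1 / (monthly_m6 * 12)
print(f"M>=6 rate: {monthly_m6:.6f}/month, roughly one per "
      f"{years_between:.0f} years under these assumptions")
```

This is the "decide what magnitude to care about, then calculate" procedure in miniature: the month of M 2.5 quakes enters only as a rate estimate, and the model's monotonic falloff is exactly why "frequent at both extremes but rare in between" would be surprising.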

(When we say that the Seattle area is particularly at-risk, that's because building standards there include very little earthquake resilience.  Which may not be relevant here.  The actual fault line is among the less active on the west coast of North America.)
