All of Ezekiel's Comments + Replies

Oh, sorry, neither did I. I'm not trying to accuse Raemon of deliberate brainwashing. But getting together every year to sings songs about, say, existential risk will make people more likely to disregard evidence showing that X-risk is lower than previously thought. Same for every other subject.

Ah, I guess it was the use of "deliberately" that confused me. Now I come to think of it, this is mentioned as a possible risk in the article, and dismissed as much less powerful than, y'know, talking about it all the damn time.

Who said anything about mindhacking?

Raemon did. It's a ritual, deliberately styled after religious rituals, some of the most powerful mindhacks known.

I ... didn't get the impression that this was intended to mindhack people into moving closer to LessWrong consensus.

As of now, there is no evidence that the average LessWronger is more rational than the average smart, educated person (see the LW poll). Therefore, a lot of LWers thinking something is not any stronger evidence for its truth than any other similarly-sized group of smart, educated people thinking it. Therefore, until we get way better at this, I think we should be humble in our certainty estimates, and not do mindhacky things to cement the beliefs we currently hold.

Who said anything about mindhacking? I'm just saying that we should expect rationalists to believe some of the same things, even if nonrationalists generally don't believe these things. Considering the whole point of this site is to help people become more rational, recognize and overcome their biases etc. I'm not sure what you're doing here if you don't think that actually, y'know, happens.

The line that people tend to quote there is "מנהג ישראל דין הוא" (the custom of Israel is law), but most people have never looked up its formal definition. Its actual halachic bearing is much too narrow to justify (for example) making kids sing Shabbat meal songs.

Correct me if I'm wrong, but it looks like you're talking about anti-deathism (weak or strong) as if it was a defining value of the LessWrong community. This bothers me.

If you're successful, these rituals will become part of the community identity, and I personally would rather LW tried to be about rationality and just that as much as it can. Everything else that correlates with membership - transhumanism, nerdiness, thinking Eliezer is awesome - I would urge you not to include in the rituals. It's inevitable that they'd turn up, but I wouldn't give them e... (read more)

Well, it depends what you mean by "defining value". The LW community includes all sorts of stuff that simply becomes much more convincing/obvious/likely when you're, well, more rational. Atheism, polyamory, cryonics ... there's quite a few of these beliefs floating around. That seems like it's as it should be; if rationality didn't cause you to change your beliefs, it would be meaningless, and if those beliefs weren't better correlated with reality, it would be useless.
I thought there was a rule about not breaking tradition, even if the tradition isn't otherwise supported. No?

What would a ritual that's just about rationality and more complex than a group recitation of the Litany of Tarsky look like?

So everyone in the human-superiority crowd gloating about how they're superior to mere machines and formal systems, because they can see that Godel's Statement is true just by their sacred and mysterious mathematical intuition... "...Is actually committing a horrendous logical fallacy [...] though there's a less stupid version of the same argument which invokes second-order logic."

So... not everyone. In Godel, Escher, Bach, Hofstadter presents the second-order explanation of Godel's Incompleteness Theorem, and then goes on to discuss the "... (read more)

I think it's worth addressing that kind of argument because it is fairly well known. Penrose, for example, makes a huge deal over it. Although mostly I think of Penrose as a case study in how being a great mathematician doesn't make you a great philosopher, he's still fairly visible.

The reason it's not random-strawman is that the human-superiority crowd claims we have a mystical ability to see implications that machines can't. If some of them, while making this claim, actually fail at basic logic, the irony is not irrelevant - it illustrates the point, "No, humans really aren't better at Godelian reasoning than machines would be."

I perceive most signalling as a waste of resources, and I think that cultivating a community which tried to minimize unnecessary signalling would be good.

Correcting spelling errors doesn't waste many resources. But yeah, the amount of pointless signalling that goes on in the nerd community is kind of worrying.

Why do I do it myself? Force of habit, probably. I was the dumbest person in my peer group throughout high school, so I had to consciously cultivate an image that made me worth their attention, which I craved.

It's kind of saddening that this kind of problem draws my attention much quicker than serious logical problems.

To be fair, they're a hell of a lot easier to notice. Although there's probably a signalling issue involved as well - particular kinds of pedantry are good ways of signalling "nerdiness", and I think most LWers try to cultivate that kind of image.

Wow. This is fascinating. You, Ezekiel, are basically saying: "I'm aware that a behavior expressing pedantry like that is a signalling thing, that it specifically signals 'nerdiness', and that such a person is trying to cultivate an image. Oh, and I just did that." Presumably you value signaling and cultivating an image with the aim of belonging in a nerdy LessWrong in-group. facepalm What are we becoming?

P.S. On an unrelated topic, I think the site founder is wrong about some things. And I just thought you ought to know that I'm such a contrarian :)
I would specify instead: signalling "I care about good communication and avoiding misunderstandings due to poor use of language and syntactical ambiguities/misinterpretations" I've got this idea from I-don't-know-where that this kind of signaling is a useful, cost-efficient sonar ping that'll publicly filter for certain types of people, notably those who care about grammar and those who care about avoiding ambiguities. I think attracting both of those groups is a suitable compromise when the only obvious alternatives are much costlier.

The founders of Castify are big fans of Less Wrong, so they're rolling out their beta with some of our content.


But seriously, this is great. I'm trying to get into the habit of using podcasts and recorded lectures to make better use of my time, especially while travelling.


I've been lurking on this site for a few months and seeing this in my RSS feed this morning was surprisingly shocking. I guess I just assumed that people trying to be more logical never made this kind of mistake. It was a good reminder that a mistake only invalidates the conclusions drawn from the mistake, so spelling and grammar errors should be pretty low on the list of offenses. It's kind of saddening that this kind of problem draws my attention much quicker than serious logical problems.

Taxes are an involuntary transfer of wealth made under threat of coercive violence. Theft is an involuntary transfer of wealth made under threat of coercive violence. Saying that does not mean much until one defines the proper scope of legitimate violence in society. Max Weber made the analytically useful point that one definitional aspect of modern government is its monopoly on legitimate violence.

I took "spiritual" to mean in this context that you don't believe in ontologically basic mental entities, but still embrace feelings of wonder, majesty, euphoria, etc. typically associated with religions when contemplating/experiencing the natural world.

Notice that other people answering my question had different interpretations. I left it blank.

Do you not have a preference for low/high redistribution of wealth because you haven't studied enough economics, or because you have studied economics and haven't found a satisfying answer?

Because ... (read more)

Systematic human irrationality? If learning about economics is something worthwhile for you, then I recommend picking up a good macroeconomics textbook and working through it. A good 101 textbook will outline simple but useful models that can help us understand the economy. (This is like how statistical mechanics can help us understand thermodynamic systems without knowing information about each individual particle.) Alternatively, there should be open online courses from MIT, Harvard, etc. if that is more your style.

Two questions, as I take the survey:

  1. What does "spiritual" mean, in the context of "Atheist [but | and not] spiritual"?
  2. I genuinely have no idea whether I'd prefer low or high redistribution of wealth. What do I tick for my political opinion?
I took "spiritual" to mean in this context that you don't believe in ontologically basic mental entities, but still embrace feelings of wonder, majesty, euphoria, etc. typically associated with religions when contemplating/experiencing the natural world. Do you not have a preference for low/high redistribution of wealth because you haven't studied enough economics, or because you have studied economics and haven't found a satisfying answer? (Alternatively, trying to answer this one question might just not be worth your time. If that's the case, I'd leave it blank. Or if you're otherwise choosing between two positions, flip a coin)
I had both of these questions as well. I've always been confused about the word "spiritual," as some people seem to use it to mean "having feelings of awe or reverence that are cognitively similar to those expressed in religious worship" while others use it to mean "actually believing in spirits." I consider myself spiritual by the first definition, but not the second. On the survey, I described myself as "atheist but spiritual," but now I'm not sure this was the most accurate description, since it falsely implies that I believe in the supernatural.

As far as redistribution of wealth goes, I don't know what you should mark. I chose "Libertarian" because I am rather distrustful of centralized government, and redistribution of wealth generally depends on some sort of centralization. But I know very little about what sort of consequences redistribution of wealth would actually have, so my views on the subject are quite tentative. (I recall hearing somewhere that the Scandinavian countries scored highest on a survey of self-reported happiness, which would suggest that redistribution of wealth at least doesn't prevent a society from being largely happy. If anyone can confirm or deny this, I would much appreciate it.)
I'd say, having the alief that, as IIRC someone on LW put it (I can't recall the exact wording and the search engine doesn't seem to help me), you are a timeless optimization process of which your current incarnation is a mere approximation.
For 1, I took it as meaning having a belief in some form of soul, afterlife, or karma.
For 2. - you could fill out the political compass survey (it comes up later in the survey under "unreasonably long and complicated questions"). Alternatively you could pick the political labels that you think might apply to you and then choose one at random.
According to this Google result, "spiritual" in this context seems to allude to a kind of private, iconoclastic, mystical religion, as opposed to public, creedal, classical religions like most sects of Christianity. I hope that helps.

Depends what you mean by "familiar". I'd imagine anyone reading the essay can do algebra, but that they're still likely to be more comfortable when presented with specific numbers. People are weird like that - we can learn general principles from examples more easily than from having the general principles explained to us explicitly.

Exceptions abound, obviously.

Remove from your life everything you forget; what is left is you.

Can we just agree that English doesn't have a working definition for "self", and that different definitions are helpful in different contexts? I don't think there's anything profound in proposing definitions for words that fuzzy.

I think it does. Can't believe I missed that.

Actually, this fits well with my personal experience. I've frequently found it easier to verbalize sophisticated arguments for the other team, since my own opinions just seem self-evident.

I suspect sheep would be less susceptible to this sort of thing than humans.

The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.

Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.

Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said "non-familiar" rather than just "general".

"Easy to communicate to other humans", "easy to understand", or "having few parts".

"Having few parts" is what Occam's razor seems to be going for. We can speak specifically of "burdensome details," but I can't think of a one-word replacement for "simple" used in this sense. It is a problem that people tend to use "simple" to mean "intuitive" or "easy to understand," and "complicated" to mean "counterintuitive." Based on the "official" definitions, quantum mechanics and mathematics are extremely simple while human emotions are exceedingly complex.

I think human beings have internalized a crude version of Occam's Razor that works for most normal social situations - the absurdity heuristic. We use it to see through elaborate, highly improbable excuses, for example. It just misfires when dealing with deeper physical reality because its focus is on minds and emotions. Hence, two different, nearly opposite meanings of the word "simple."
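A crude way to make "having few parts" concrete is description length. This is a hedged sketch: zlib is only a rough stand-in for the (uncomputable) Kolmogorov complexity, and the two strings are my own toy examples, not anything from the thread.

```python
import random
import zlib

def description_length(s: str) -> int:
    """Approximate 'number of parts' by compressed length in bytes."""
    return len(zlib.compress(s.encode()))

# A pattern generated by a rule with few parts compresses to almost nothing...
lawful = "01" * 500

# ...while patternless noise of the same length has no short description,
# even though each individual character looks just as "simple".
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))

assert description_length(lawful) < description_length(noisy)
```

In this sense quantum mechanics is "simple" (a short rule generating a lot of structure) and human emotions are "complex", regardless of which one feels easier to understand.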

Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing.

How would you characterise the most prevalent use-cases, in your opinion?
Yeah, various smart people have made that point repeatedly, but Eliezer and Luke aren't listening and most people learn their words from Eliezer and Luke, so the community is still being sorta silly in that regard.

Don't think you can fuck with people a lot more powerful than you are and get away with it.

I'm no expert, but that seems to be the moral of a lot of Greek myths.

Verbatim from the comic:

It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It's us.
Only us.

I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.

("Study" in the artistic sense rather than the scientific.)

Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.

Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre.

The chance of human augmentation reaching that level within my lifespan (or even within my someone's-looking-after-my-frozen-brain-span) is, by my estimate, vanishingly low. But if you're so sure, could I borrow money from you and pay you back some ludicrously high amount in a million years' time?

More seriously: Seeing as my current brain finds regret unpleasant, that's something that reduces to my current terminal values anyway. I do consider transhuman-me close enough to current-me that I want it to be happy. But where their terminal values actually differ, I'm not so sure - even if I knew I were going to undergo augmentation.

And you only have one thing to give in return: your life.

Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers.

He has magic powers?

Open question: Do you care about what (your current brain predicts) your transhuman self would want?

If you don't, you're really going to regret it in a million years.

Yes, I think so. It surely depends on exactly how I extrapolate to my "transhuman self," but I suspect that its goals will be like my own goals, writ large.

My brain technically-not-a-lies to me far more than it actually lies to me.

-- Aristosophy (again)

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"
"It's a metaphor for the human struggle."
"I don't see how that changes my point."

I'd say this captures the spirit of Less Wrong perfectly.

Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.

Well, his point only makes any sense when applied to the metaphor since a better answer to the question

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"


is "where would Sisyphus get a robot in the middle of Hades?"

Edit: come to think of it, this also works with the metaphor for human struggle.

Read as:

the auction gains even more money from people who have seen it before [and are nevertheless willing to play again] than it does from naive bidders

Right, of course. Selection effect. I think what confused me was that I took that to mean the total amount of money earned, not per-person.

I agree (in general) with Xenophon's advice: Calm down, do whatever you're comfortable with spiritually, and in the worst case scenario call it "God" to keep the peace with whoever you want to keep the peace with.

With that said, if you still want advice, I deconverted myself a year ago and have since successfully corrupted others, and I've been wanting to codify the fallacies I saw anyway. Before I start: bear in mind that you might be wrong. I find it very unlikely that any form of Abrahamic theism is true, but if you care about the truth you ha... (read more)

OTOOH, the people who do figure it out effectively get more power over choosing the result than people who don't. In most democracies, this would be considered a negative. Not that real-life elections are totally fair either, of course.

How'd they react? Did it work?

I'd like to see a Tales from a Rationalist Car Dealership story come out of this...

The salesperson called me back with an acceptable number, and I bought the car. Essentially it was an ultimatum game and I accepted the offer. I think that the salesperson was afraid of losing the sale, and acted accordingly.

I cannot tell you if I actually got an especially good deal, but I would guess I got a better deal than I would have otherwise, because I'd have been far less likely to walk out on an offer once I'd gone to the trouble of starting paperwork at the dealership -- and they knew it.

[Edited to add: I do not think this would have worked as... (read more)

The point I should have made clear was that data-entry clerks don't exist outside of corporations, because in isolation they're useless. More generally, mass production has been made possible by the production-line paradigm: break down the undertaking into tiny discrete jobs and assign a bunch of people to doing each one over and over again.

Once you get that kind of framework, exceptionally good workers aren't very helpful, because the people to either side of them in the production line aren't necessarily going to keep up. You just need to shut up and do ... (read more)

One of the most important social structures of modern society is the corporation - a framework for large groups of people to band together and get absolutely huge projects done. In this framework, the structure itself is more important than individual excellence at most levels. To a lesser extent, the same applies to academia and even "society as a whole".

In that context, I think preferring negative selection to positive makes sense: a genius data-entry clerk is less helpful than an insubordinate data-entry clerk is disruptive.

And remember that w... (read more)

I'm not sure that it's the corporate structure that makes negative selection more useful in the data entry case. It's not the fact that the data-entry clerk is part of a large organisation that means that a slightly incompetent data-entry clerk is more disruptive than a genius-level one is helpful. Rather it's the fact that data-entry is a relatively low-skill job with relatively little room for excelling above mere competence. Leaving the corporation wholly out of it, and imagining a person doing data entry in complete isolation, the most helpful data-entry clerk would still be selected by making sure they weren't terrible, but weren't necessarily brilliant, at typing and remaining attentive etc. I think this idea is supported by the fact that for higher-level/skill positions, one probably would want to employ more positive selection.

If your point was specifically that insubordination (and not just slight incompetence in general) is more harmful than genius-level work is helpful, then I guess that, in an obvious sense, the harm of insubordination is due to the corporate nature of work (since you can't be insubordinate outside of a group hierarchy). But then I'm not sure that insubordination-worries require negative selection, or at least not a wide range of negative selection tests. Sure, you might want to include a negative selection test along the lines of 'are they likely to do the opposite of what they're told on a whim occasionally?', but it's an open question whether the rest of your criteria would be negative or positive.

And remember that we have side routes so real geniuses (of some kinds) can still make it: set up their own company, start their own political party, start publishing their work online, design games in their basement, and so on.

This is a really good point. It's good to have low barriers to this sort of thing. For instance, if you need to hire a lawyer and an accountant to set up your own company, then a genius cookie baker can't set up their own cookie shop unless they also have the money or connections to get the help of a lawyer and an accountant.

Out of genuine curiosity, how do you know that? I thought you never went to university.

Personal experience, most likely. What little I've seen / know of his knowledge indicates in-depth mastery of multiple topics that would each have taken five or more years of university courses to learn. Having learned them all from university courses without special exception being made (that is, taking full-term courses without any skipping of courses or taking more than six courses per term) is highly improbable.

Many of my thought experiments into forming universities or educational institutions in general more geared towards optimized learning (e.g. open-learning systems where each student is at different levels in different subjects, and takes tests when milestones are reached rather than at specific predefined dates) seem to strongly indicate that while many of them would be much better for making more intelligent individuals or letting people learn much faster, the optimal utility-maximizing situation for the "Institutional Governing Body" is the current system.

In other words, the individuals in positions of power to change the institutions have much more to gain (at least in the short term on their personal utility scales) in maintaining the current system. All my calculations, estimates and observations so far have consistently been in agreement with this statement, though I suspect a great deal of personal bias is at work here.

Are these your own estimates, or have you found some objective, accurate test for ranking "Conceptual originality"?

Eliezer Yudkowsky:
I put that in because I didn't think any non-trolls would seriously dispute the 99+% part, not because I knew how to measure it down to the sixth decimal place.

What Bill Maher said was that if a person claims that ¬Bite is significant evidence for God, they must admit that Bite is significant evidence for ¬God. I'm saying I don't think that's accurate.

The sentiment that one should update on the evidence is obviously great, but I think we should keep an eye on the maths.

Fair enough, if the premise is that ¬Bite → God exists.

Sure. But if I handle snakes to prove they won't bite me because God is real, and they don't bite me -- you do the math.

More seriously, though: the sentiment expressed in the quote is flawed, IMHO. Evidence isn't always symmetrical. Any particular transitional fossil is reasonable evidence for evolution; not finding a particular transitional fossil isn't strong evidence against it. A person perjuring themselves once is strong evidence against their honesty; a person once declining to perjure themselves is not strong evidence in favour of their honesty; et ... (read more)

Right. Sensitivity does not equal specificity. Maher makes the mistake of assuming the rate of false positives and false negatives for the 'snakebite test for god' are equal. The transitional fossil test for evolution and the perjury test for honesty both have high false negative rates and low false positive rates.
Hm, I thought that reasoning argued against your own non-serious first paragraph rather than what Bill said. If the idea is "if God is real (and won't let snakes bite me), then they won't bite me", then being bitten shows that the first part is false, but not being bitten doesn't say anything about the first part being true or false. Or if you don't want to get hung up on formal logic, then it's valid but very weak evidence, like a hypothesis not being falsified in a test.
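The sensitivity/specificity asymmetry discussed above can be put into numbers. A minimal sketch (the 0.1 and 0.999 figures are my own illustrative choices, not from the thread): evidence E can be strong for hypothesis H when observed, yet its absence can be nearly worthless against H.

```python
# Compare P(E|H)/P(E|~H) against P(~E|H)/P(~E|~H) for a binary test.
def likelihood_ratios(sensitivity, specificity):
    """Bayes factors for a positive and a negative test result."""
    lr_positive = sensitivity / (1 - specificity)   # how much a hit favours H
    lr_negative = (1 - sensitivity) / specificity   # how much a miss disfavours H
    return lr_positive, lr_negative

# "Transitional fossil test": fossils rarely survive (low sensitivity),
# but a genuine find is near-impossible without evolution (high specificity).
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.1, specificity=0.999)
print(round(lr_pos, 3))  # 100.0 -- finding the fossil is strong evidence for H
print(round(lr_neg, 3))  # 0.901 -- not finding it barely counts against H
```

The two ratios are only forced to be "symmetric" when sensitivity and specificity happen to be equal, which is the assumption Maher's argument smuggles in.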
It's more clearly apparent when you

Scientific theories are judged by the coherence they lend to our natural experience and the simplicity with which they do so.


The grand principle of the heavens balances on the razor's edge of truth.


I can see how that second sentence is a bit confusing. FWIW, my interpretation is "Our understanding of the fundamental laws of nature balances delicately on our observations." But in retrospect, I agree it is better without that sentence.

physical contact? karaoke? the outdoors? What does that have to do with rationality?

It's not really any of my business, since I'm not a New Yorker, but I'd be inclined to ask the same question. I understand that you're trying to build a community... I just have no idea why.

This is an extremely important question, and I think Swimmer's answer, while true, is not sufficient. Having additional goals beyond just forming a community will probably make people more attracted/committed to the group. Simply working together towards a common goal brings people closer together.
'cause it's fun?

I understand that you're trying to build a community... I just have no idea why.

Because being part of a community is something that, for most people, is just innately nice...and being in a community of people with particular values and habits makes most people better at living up to those values and building those habits.

One of the nice things about a community is being able to talk to a bunch of people who, although you may not know them personally, are not far from you in inferential distance and share much of the same jargon/vocabulary. Less Wrong h... (read more)

I've never heard the word "simple" used in game-theoretic context either. It just seemed that word was better suited to describe a [do x] strategy than a [do x with probability p and y with probability (1-p)] strategy.

If the word "remember" is bothering you, I've found people tend to be more receptive to explanations if you pretend you're reminding them of something they knew already. And the definition of a Nash equilibrium was in the main post.

Agreed. Your original response was fine as an explanation to Maelin; I singled out 'remember' in an attempt to imply the content of my second post (to Yvain), but did so in a fashion that was probably too obscure.

No simple Nash equilibrium. Both players adopting the mixed (coin-flipping) strategy is the Nash equilibrium in this case. Remember: a Nash equilibrium isn't a specific choice-per-player, but a specific strategy-per-player.
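A small check of the claim above (assuming the game under discussion is matching pennies, which I'm inferring from the coin-flipping framing rather than quoting): against an opponent who flips a fair coin, every strategy earns the same expected payoff, so no deviation is profitable, which is exactly what makes the mixed strategy pair an equilibrium.

```python
import itertools

# Matching pennies, row player's payoffs: +1 if the coins match, -1 otherwise.
payoff = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

def expected_payoff(p_heads, q_heads):
    """Row player's expected payoff when row plays H with prob p, column with prob q."""
    row = {"H": p_heads, "T": 1 - p_heads}
    col = {"H": q_heads, "T": 1 - q_heads}
    return sum(row[r] * col[c] * payoff[(r, c)]
               for r, c in itertools.product("HT", repeat=2))

# Against a coin-flipper, deviating to any strategy changes nothing...
for p in (0.0, 0.25, 0.5, 1.0):
    assert expected_payoff(p, 0.5) == 0.0

# ...but against any biased opponent, a pure strategy exploits the bias,
# so no pure-strategy profile can be an equilibrium of this game.
assert expected_payoff(1.0, 0.6) > 0.0
```

Note this is a check that the coin-flipping pair is an equilibrium, not a derivation of it; the derivation comes from each player mixing to make the other indifferent.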

If this is actually an introductory post to game theory, is this really the right approach?

Eliezer's explanation hinges on the MWI being correct, which I understand is currently the minority opinion. Are we to understand that you're with the minority on this one?

Well, yes. But if you don't like MWI, you can postulate that the collapse occurs when the mass of the superposed system grows large enough; in other words, that the explanation is somewhere in the as-yet-unknown unification of QM and GR. Of course, every time someone succeeds in maintaining a superposition of a larger system, you should reduce your probability for this explanation. I think we are now up to objects that are actually visible with the naked eye.

That's really interesting. Thanks for the education.

I've never read Marx, but I don't think Plato's Republic would match most modern definitions of "democracy"; it was made up of predefined castes ruled by an elite minority.

No, Plato sketched out the way that his Republic would gradually deteriorate - one of the inevitable stages was "democracy". He also suggested that once the people began to rule, they would not only enjoy their freedom, they would begin to value freedom for its own sake - and then they would start to do ridiculous things, like free all the slaves, allow women to rule, and even show concern for the rights of animals.
The Republic wasn't democracy, but points along the political cycle he sketched were democratic (though surely Plato wasn't thinking of anything as specific as parliamentary democracy as we know it today.)

The young Marx would have said that democracy (though not anything as specific as parliamentary democracy as we know it today - more like free association, cooperation, and individual autonomy) expressed the truth of human nature, while the old Marx would say that human nature plus the path of technological development existing over our whole history implies that at a certain point something like parliamentary democracy would be inevitable (but not irreplaceable.)

And stay there, except for occasional digressions.

In other words, assuming I understand the claim: as time approaches infinity, so the probability of a randomly selected country being democratic approaches 1.

In this context, it would mean that those countries that aren't currently democratic will almost certainly adopt democracy at some point in the future.

And stay there? Or visit it as part of, for instance, a random walk?
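The "stay there vs. random walk" distinction can be made precise with a toy two-state model (entirely my own framing, not from the thread): the claim "probability approaches 1" only follows if democracy is an absorbing state, not merely a reachable one.

```python
def long_run_democratic_share(p_to_dem, p_from_dem):
    """Stationary probability of the 'democracy' state in a two-state Markov chain."""
    if p_from_dem == 0:
        # Democracy is absorbing: once entered, never left -- "and stay there".
        return 1.0
    # Otherwise the chain is a random walk that keeps revisiting both states.
    return p_to_dem / (p_to_dem + p_from_dem)

print(long_run_democratic_share(0.1, 0.0))  # 1.0 -- inevitable and permanent
print(long_run_democratic_share(0.1, 0.1))  # 0.5 -- visited, but not stayed in
```

So "every country will be democratic at some point" and "every country ends up democratic" are different claims, and only the second corresponds to the limit-probability-1 reading.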

Quick poll: Who here has actually met someone who thinks democracy arises inevitably from human nature?

Plato thought so. I think Marx did too, for similar reasons. I've met hardcore marxists.
What does "inevitably" mean? Obviously democracy is not universally used; does that make the statement trivially false?
If you delete "inevitably", then I have. Otherwise, I have not.

3) I allot a reasonable-seeming amount of time to think before deciding to drastically change something important. The logic is that the argument isn't evidence in itself - the evidence is the fact that the argument exists, and that you're not aware of any flaws in it. If you haven't thought about it for a while, the probability of having found flaws is low whether or not those flaws exist - so not having found them yet is only weak evidence against your current position.

So basically, "Before you've had time to consider them".

Load More