All of Lawliet's Comments + Replies

I'm not sure what exactly should count as appropriate. I had assumed that the votes would sort the good from the bad, but maybe people would be less inclined to downvote a book they liked, which could be a problem with a well-liked book.

Is it enough that these comments could serve as a warning, or do you suggest I delete/edit the post?

Yeah, I was thinking that it could turn into a popularity contest. There's certainly no way I would ever downvote this book. Instead of deleting it, why don't you make a case for its inclusion? Vladimir's off to a good start.
Gordon Seidoh Worley
Personally I'm not very impressed with this book. Maybe it's because by the time I read it I already knew too much, but I found it of little use other than to run over the grooves in my mind of the most basic aspects of evolution. Maybe it's best for someone who is coming from a strong religious background and needs a starter to break the religion from their mind?
One of my favorite books, but do you think it's appropriate for this list?

Okay, I have to ask: what exactly is so great about GEB? I see it get highly praised, and Eliezer_Yudkowsky goes overboard with praise for it, but I don't understand what's so great about it. (Yes, the page warns the content may be obsolete, but I think he still stands by that part.)

I've read almost all of it, and while it was enjoyable reading, I don't understand how it's useful as rationalist reading, or for AI. It's just a bunch of neat observations strung together, and a long (but helpful) explanation of Goedel's Theorem. In talking about AI, all I found we... (read more)

Heuristics and Biases, collection edited by Daniel Kahneman, Thomas Gilovich and Dale Griffin

It's an excellent series of papers and an interesting read, but ISTM there's not much of a take-away for practicing rationality compared to what you get in the posts here. It's written to demonstrate a mountain of experimental evidence for irrationality (and to meet academic standards), not to help readers see their own patterns of thought more clearly.
Iff you've purchased the original version of the book, you can get the four extra chapters from the new edition by emailing the very awesome Dan Ariely at

Judgment Under Uncertainty: Heuristics and Biases, collection edited by Daniel Kahneman, Amos Tversky and Paul Slovic

Don't understand the "activity" part; the post implied sleeping was fine, so does breathing count?

It seems hard to get completely away from signaling. There are times when people intentionally slow their breathing to signal that the exercise they just finished wasn't as hard for them as it really was.
Well, we don't often reason about whether to breathe, but when we do so reason, yes, this could count.

Stop fussing over voting! Now!

It's an important part of the site, and it'll pay off if it's done well.

I believe it is done well already. Even if it isn't, I believe it should be designed by a benevolent dictator instead of a committee. And even if I'm wrong about that, I believe all the good ideas have been discussed to hell and back in the zillion previous threads already.

Requesting a short summary of the current plans for future changes to the voting system, preferably from someone in a position to know.

Any actual planned changes are probably listed on the issue tracker. There don't seem to be many.

It's probably not mentioned enough that cryonics can be justified even if it looks like it probably won't work, as long as the probability is past some threshold.

While we're talking about getting out of bed: try telling yourself to wiggle your toe rather than to get up completely; it gets easier from there.

I usually go with lifting my arms or something like that - basically the simplest motion that still moves me towards my goal, or I think about what I want to do until I can identify the first motion I'm going to make. I use it for other things too.
I do this. I find the hardest thing to do when getting up is raising my head above my body. Anything that doesn't involve that is easy, even up to rolling out of bed, hitting the floor, and doing push ups.

I gave him the benefit of the doubt, since the voluntary castration sounds so crazy, but the absurdity heuristic is there for a reason; maybe I gave too much credit simply for his being on LW.

It's not that part that's trolling. Look at his recent history. Someone like this is just griefing, and is impervious to downvoting. I think we can ban such trolls as this without any danger of evaporative cooling. (Um, is there a moderator with the banhammer?)

I see no reason for this comment other than as some sort of test to see if you get voted down no matter what you say; if that's the case, then it's not a very good test. If you absolutely have to do that sort of thing, at least try a new account or something.

Making us reap good feelings from downward social comparison. Naughty brains, love those tricks.

How is that good?

Well, it makes us feel better about ourselves? Pity about the whole FAI thing though...

Might be easier to manage comments and direct people to it if it's a whole post rather than a comment in the May 09 open thread.

According to this post, doing so would be "against blog guidelines". The suggested approach is to do top-level book review posts. I haven't seen any of these yet, though.

But wouldn't the site's earliest days be the time of fewest newcomers?

I must have misread: lifetime access to Less Wrong isn't worth one cent, but you'll voluntarily spend hours of time on it?

This may or may not have to do with the fact that I am not paid by the hour. My stipend depends on grading papers and doing adequately in school, but if I can accomplish that in ten hours a week, I don't get paid any less than if I accomplish it in forty. Time I spend on Less Wrong isn't time I could be spending earning money, because I have enough on my plate that getting an outside job would be foolish of me.

Also, one cent is not just one cent here. If my computer had a coin slot, I'd probably drop in a penny for lifetime access to Less Wrong. But spending time (not happily) wrestling with the transaction itself, and running the risk that something will go wrong and the access to the site won't come immediately after the penny has departed from my end, and wasting brainpower trying to decide whether the site is worth a penny when for all I know it could be gone next week or deteriorate tremendously in quality - that would be too big an intrusion, and that's what it looks like when you have to pay for website access.

Additionally, coughing up any amount of money just to access a site sets up an incentive structure I don't care for. If people tolerate a pricetag for the main contents of websites - not just extra things like bonus or premium content, or physical objects from Cafépress, or donations as gratitude or charity - then there is less reason not to attach a pricetag. I visit more than enough different websites (thanks to Stumbleupon) to make a difference in my budget over the course of a month if I had to pay a penny each to see them all.

In a nutshell: I can't trade time alone directly for money; I can't trade cash alone directly for website access; and I do not wish to universalize the maxim that paying for website access would endorse.

I would like to see the results made public, as well as seeing more surveys in general.

I don't have a good indicator of how many people would worry about public data, but as the survey-taking group size increases (as I presume will happen over time on LW) it should become easier to remain unidentifiable.

Plenty of people voluntarily fill out surveys about themselves on social networking sites, and those of us concerned with anonymity probably wouldn't be filling them out either way.

Some people are easier to identify than others (for example, if you're female or from a particular country) and any person may feel uncomfortable about a particular question, so that even marginal concern about being identified with an odd view may skew results. Consider making the data public in a way that gives the complete set of answers to each question, but doesn't allow comparison of how one person answered multiple questions. (I'm sure there's an easy way to say this; I don't know it.) So in other words, you can't tell that the person who answered "karma = -16" also answered "yes" to "superstitious". Any cross-correlations, of course, would need to be computed using the original, publicly unavailable data.
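The decoupling described above can be sketched in a few lines of Python: shuffle each question's answer column independently, so every per-question distribution is preserved while the links between one respondent's answers are destroyed. (The question names and data here are hypothetical, purely for illustration.)

```python
import random

# Hypothetical survey data: one dict per respondent.
responses = [
    {"karma": -16, "superstitious": "yes", "country": "US"},
    {"karma": 42, "superstitious": "no", "country": "UK"},
    {"karma": 3, "superstitious": "no", "country": "FI"},
]

def decouple(rows, seed=0):
    """Shuffle each question's answers independently, so per-question
    distributions survive but no output row corresponds to any one
    respondent's full set of answers."""
    rng = random.Random(seed)
    questions = list(rows[0].keys())
    columns = {q: [r[q] for r in rows] for q in questions}
    for answers in columns.values():
        rng.shuffle(answers)  # in-place, independent per column
    return [dict(zip(questions, vals)) for vals in zip(*columns.values())]

public = decouple(responses)
```

Cross-correlations would indeed have to be computed on the private originals, since this published form deliberately contains none.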

CI only offers full-body, but it's cheaper than Alcor's neuro option.

Are you just scared of the idea of evil aliens, or do you actually think that it's a significant risk that cryonicists recklessly ignore?

It's not high on my list of phobias. I don't judge the risk to be very serious. But then, the tiny risk of evil aliens isn't opposed to a great chance of eternal bliss; it's competing with an equally tiny chance of something very nice.

Seems that anybody who talks about being downvoted gets upvoted.

That's my observation as well. Personally I feel the urge to downvote whenever someone complains about being downvoted. I haven't actually done so yet, mostly because I haven't managed to explain the sentiment to myself.

By "extremely risk-averse" do you mean "working hard to maximise persistence odds" or "very scared of scary scenarios"?

You're right that death while signed up for cryonics is still a very bad thing, though. I don't think Eliezer would be fine with deaths if they were signed up, but sometimes he makes it seem that way.

I mean something like the second thing. Basically, I invariably would rather bet one dollar than bet two when the expected utility is identical with both bets - even odds, say. And if you make it a $1000 bet versus $2000, I'll probably prefer the first bet over the second even if the expected utility is strictly worse, simply because I can't tolerate any risk of being out two thousand dollars. (I can't tolerate much risk of being out a thousand either, given my poor-grad-student finances, but this is assuming I have no "don't gamble at all" option.)
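The standard way to formalize that preference is a concave utility function over wealth: two even-odds bets can have identical expected dollar value while the larger stake has strictly lower expected utility. A minimal Python sketch, using log utility and a hypothetical bankroll figure:

```python
import math

def expected_log_utility(wealth, stake, p_win=0.5):
    """Expected log-utility of an even-odds bet of +/- stake."""
    return p_win * math.log(wealth + stake) + (1 - p_win) * math.log(wealth - stake)

wealth = 5000.0  # hypothetical poor-grad-student bankroll
u_small = expected_log_utility(wealth, 1000)
u_large = expected_log_utility(wealth, 2000)
# Both bets have the same expected dollar value (zero change), but
# concavity of log makes the larger stake strictly worse in expected
# utility, so a risk-averse agent prefers the smaller bet - and any
# fair bet at all is worse than not betting.
```

This is only one convenient choice of curvature; any strictly concave utility function gives the same qualitative preference ordering.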

Here's what I've gathered from you so far: you have not been more insightful since castration, but you have been calmer and less influenced by some unspecified bias. You see testosterone as you see blood alcohol, and prefer its absence.

If you're interested in persuading us, stop promoting your brand with single sentences and go into more detail.

Do you think you could summarise it for everybody in a post?

I'm not confident I could do a good job of it. He proposes that most problems in relationships come from our mythologies about ourselves and others. In order to have good relationships, we have to be able to be honest about what's actually going on underneath those mythologies. Obviously this involves work on ourselves, and we should help our partner to do the same (not by trying to change them, but by assisting them in discovering what is actually going on for them). He calls his approach to this kind of communication the "Real-Time Relationship." To quote from the book: "The Real-Time Relationship (RTR) is based on two core principles, designed to liberate both you and others in your communication with each other: 1. Thoughts precede emotions. 2. Honesty requires that we communicate our thoughts and feelings, not our conclusions." For a shorter read on relationships, you might like to try his "On Truth: The Tyranny of Illusion". Be forewarned that, even if you disagree, you may find either book an uncomfortable read.

I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.

Seems more like a group chat than a post, but let's see what you all think.

I've found the work of Stefan Molyneux to be very insightful with regards to this (his other work has also been pretty influential for me). You can find his books for free here []. I haven't actually read his book on this specific topic ("Real-Time Relationships: The Logic of Love") since I was following his podcasting and forums pretty closely while he was working up to writing it.
This sounds very interesting, but I don't think I'm qualified to write it either.

I would upvote this because it's important that you answered the question and I don't want to discourage that, but I don't want to imply that I like your honor system solution.

Current drugs will only give you a bit of pleasure before wrecking you in some way or another.

CronoDAS should be doing his best to stay alive, his current pain being a down payment on future real wireheading.

Paul Crowley
Some current drugs, like MDMA, are extremely rewarding at a very low risk.
It depends on what you mean by wrecking. Morphine, for example, is pretty safe. You can take it in useful, increasing amounts for a long time. You just can't ever stop using it after a certain point, or your brain will collapse on itself. This might be a consequence of the bluntness of our chemical instruments, but I don't think so. We now have much more complicated drugs that blunt and control physical withdrawal and dependence, like Subutex and so forth, but the recidivism and addiction numbers are still bad. Directly messing with your reward mechanisms just doesn't leave you a functioning brain afterward, and I doubt wireheading of any sophistication will either.

We talk a lot about bringing new people into the community; well, here they are.

Not to imply that you're doing it wrong, but has any thought been put into how to better handle these sorts of situations?

Paul Crowley
I think we've barely started to talk about how to attract new people; there are no top-level posts I can think of about whether we want to attract new people, or what sort if so, or how to go about it. In any case, we definitely don't want just any new people at this stage; we want people who buy into the fundamentals of what we're trying to do here, and we want people who can express themselves clearly.
Fictional beisutsukai would invent it soon enough.

Echoing this, but don't limit your reply solely to card games, if you have anything else to add.

Mensa themselves say they aim to take the top 2% of the population. This strikes me as too many to be useful.

Useful for what?

Useful as evidence of smarts; useful as a community of smart people. I was a member many years ago, just to see what it was like. Finding insufficient reason to stay, I left. A community has to have some sort of focus, a reason for its members to be there, or it doesn't work as one. Being a bit brighter than the mass, and "enjoying each other's company and participating in a wide range of social and cultural activities" (from their web site) strikes me as rather diffuse. The company was, like Eliezer described, like a small SF convention -- but without the SF to provide the focus. I've been going to cons for a long time, but I only went to a few Mensa meetings. When I was a member, I also went to a couple of AGMs, where intelligence was conspicuously not in evidence.

If I thought my own comment was downvote-worthy, I probably wouldn't have posted it.

When downvoted you can hope for an explanation, and you can hate it when people don't give one, but forcing one?

I've offered three bright-line tests for when you can feel entitled to an explanation of what is wrong with your comment. Notice how your use of the word comment, as though all comments and explanations are equal, strips out the quantitative aspect. I don't think that you can expect people to put more effort into explaining a downvote than the writer of the original comment put into writing it. If you spend five minutes writing a comment that contributes a tangle of confusions to the discussion, you are not entitled to have a downvoter spend half an hour on a comment that untangles it all for you. On the other hand, if you spend some extra time on your post, distinguishing subtle nuances of words and tagging them (e.g. free1=gratis, free2=libre), and then make your point with the tagged words (e.g. the GNU GPL focuses on free2, and free1 is collateral damage), then you have borne much of the burden of untangling the ordinary, boring confusions. It is much less labour for a respondent to explain why he disagrees with you, and he should say so.
Right, but that doesn't necessarily mean you should upvote it. Comments like the one above, while they add to the discussion, are not on par with comments that also make good, clear arguments, cite sources, and link to relevant resources. ETA: Yes, I do realize the tension between this and the notion that I wouldn't post something if I didn't think it was upvote-worthy. If everybody agrees with me on this stuff I'm going to have to go back and un-upvote a lot of my own comments.

Since I would not be able to upvote my comment, upvoting someone else's comment would suggest that I think their comment is better than my own

Huh? If you have no ability to upvote yourself, why would upvoting someone else's comment indicate it's better than yours?

Suppose that I don't have the ability to upvote my comment, but I have the ability to upvote someone else's. So even though I can't say that my comment is better-than-average, I can say someone else's is. Thus, if I think my comment is the best, I shouldn't vote anyone else's comments higher.

Okay, I was reaching a bit with that one. It could, after all, be argued that whatever concern I have in that situation, is identical to the situation where everyone can upvote their own comments and always does so. However, in the situation where everyone doesn't do so (which would perhaps be more likely if karma was not tied to the auto-upvote), I have the option of re-evaluating my comment in light of other people's, and removing the upvote (or even downvoting!) if my opinion of my own comment changes. In such a situation, upvoting someone else's comment just means, "This comment is as good as mine".

Stop being vague and unhelpful

The word "iffy" in your acronym should be replaced, I think.

If our beloved Omega takes up a job as an oracle for humanity, and we can just ask him any question at any time and be confident in his answer, what should happen to our pursuit of rationality?

dunno, ask Omega

I agree until the last paragraph; I seem to remember thinking that there was a way it could have been done better, and that I could excuse his error because he wasn't overcoming an impossibility.

Unfortunately, I don't remember how I thought to fix it.

I've spent a lot of time scouring for something similar; Code Geass was one of the better ones.

Any particular reason to single those two out? I might give The Dosadi Experiment higher priority.

I don't recommend The Dosadi Experiment as a good example of rationality; I explicitly de-recommend it. The Vor Game, aside from being delightful, can be seen as a wonderful lesson in how setting priorities can be helpful, but it's not about rationality, it's about personal manipulation. One character groks another's motivational structure and creates a situation that will make her "fall off the horse", so to speak. Vorkosigan works primarily through charisma and sub-conscious analysis. He's not a rationalist in any particular sense.

As far as I know, there is no private-message function built into Less Wrong. I prefer to maintain some level of anonymity anyway, and it would hardly be worth creating an account specifically for this purpose. I don't care that much, though a general idea of which character does it or when would be appreciated.

All that aside, reading it made the whole thing move a lot faster, which probably contributed to the enjoyment, but otherwise I think they are fairly similar.

There actually is a private messaging thing built into LW, but it's not obvious, and there's no direct link to see incoming messages. Go to [] to see your inbox (which includes replies to your comments). Also, if you click on someone else's name to see their profile, there will be a direct link available to message them. But, again, unless they're actively checking, I don't think they'll have any obvious way to know that a message was sent to them. EDIT: when I said there's no direct link, I meant "there's no obvious simple path from just clicking stuff on the front page of LW to get to your inbox".

I've heard that complaint a lot, and I agree in the case of Sherlock Holmes, but Death Note seemed somehow plausible.

If you can remember it at all, do you think you could tell me specifically which parts you thought were "lucky guesses"? I like to keep those sorts of things in mind when re-reading.

Like I said, I don't plan on rewatching the anime any time soon (and I don't know how the anime differs from the manga). That said, if you're serious about it, send me a private message and I'll send you my MSN account so that you can nag me on there so I don't forget to respond to this. =)

The manga/anime series "Death Note"

It's a long mental battle between two clever people, not much for rationality techniques, but characters think rationally, and the magical parts have well defined rules, similar to Lawrence Watt-Evans' fiction.

I would be terribly thankful to anybody who could recommend me some more stories involving these sorts of fights. Trickery and betrayal are common enough, but a prolonged feud of this nature is rare.

Death Note is a brilliant anime, but not really a great example of rationality. TVTropes calls it a Xanatos Roulette.

First you start with a smart plan. That can be rational. Then you complicate the plan. It makes characters look even smarter, and still quite rational. At some point the plan is so overcomplicated, so many uncertainties are just assumed, that it's no longer rationality but plain omniscience and characters "knowing the script of future episodes". That's what Death Note is. Light and L overplot, and it's really fun to watch, and ... (read more)

Eliezer Yudkowsky
They specifically recommend Code Geass and The Dosadi Experiment. [] also mentions The Vor Game by Bujold.

I really liked the Death Note anime. However, I think it's much more Sherlock-Holmes-ish than what Eliezer is asking for here. It's been quite a long time since I saw it, but I remember at the time I was annoyed often when both the protagonist and the antagonist would make "very lucky guesses", deducing something which is possible given the evidence at hand, but far from being the only possibility from said evidence. I haven't read much Sherlock, but from what I've heard, Sherlock similarly makes amazingly lucky guesses. Certainly, EY's summary o... (read more)

Ditto for Death Note, though only the first season. The logic of a story is that the good guys will win in the end, which is not what you should necessarily expect in real life. (spoilers) The awesomeness of Death Note's first season was not just in the decent instrumental rationality attributed to the characters (which gave me a very good impression), but also in that you couldn't guess who would win. (Edited for spoilers)
Mina is a rather rational magical girl.

If you want to stop someone from reading a book, there are generally better ways than telling them not to do it.

That aside, kids can be surprisingly dumb; I wouldn't rely on them reaching the right conclusions even with assistance.

Parents are dumb.
What are the "better ways" that you allude to? Unless you plan to be around to correct them forever, I think there's a point when you do have to trust the next generation.

It's beside the point, but your idea of torture might be a bit light if you would undergo five minutes of it out of curiosity.

Maybe he's thinking of waterboarding.

I think he is implying that we think we agree when we don't really; in that case he would expect us to vote in agreement with you.

Actually, I'm worried he's having some kind of breakdown. The Eternally Recurring Personal Identity Wars had plenty of arguers on both sides. JKC was there. Him now talking like he's the only one who ever believed that deconstruction and reconstruction using "different atoms" preserves identity, may indicate that the Personal Identity Wars really literally did send him off the edge.

I have to say, this is a failure mode I've never encountered before:

"You won! It's over! Look, we all agree with you!"


When the site crashes it says things like "looks like today isn't your day" or "it's okay to cry".

One of these phrases links you to the reddit blog, another links to the reddit store; leftovers, I guess.

Useful ego booster too.

Curious, are you proud of how difficult you find lying?

Yes. (It probably comes from playing Ultima IV during my formative years.) I do admit to being a "truth twister" though - I won't tell false statements, but I am willing to omit relevant information, imply false conclusions, or simply refuse to answer awkward questions. (And yes, I agree that there is a certain degree of hypocrisy involved in this practice, but it serves as a reasonable workaround for my inability to lie the way other people seemingly have no trouble doing.)

I don't know much about charity, but I don't contest that this was made up in a day.

"Never fix the worst problem first, because that's the way skin heals"

You can't fix the worst problem first. You'll get nowhere if you look at this as a collection of individual problems. You won't find a country that has a high standard of living, high employment, and a good educational system, but can't get mosquito nets for their beds.

You can't even begin to think about the issue unless you understand some complex-system domain, preferably economics or ecology. As a crude analogy, an economy is like the framework of a large and complicated tent. If the tent has fallen, you can't pick up individual pieces and put them ... (read more)

Even better to address as many as possible, making them all feel like they are being specifically targeted.

"Hey you with the dark hair"

I suspect that would be counterproductive - people would rather hang onto the idea that someone else is being targeted.
