All of Bindbreaker's Comments + Replies

I would rather have examples that better conform to reality than examples that are better characterizations of the principles in question.

Explicitly nonfictional stories would be better, though of course certain concerns apply to posting such information and it might be harder to find good examples.

I'm quite pleased with the way my essay on subduction phrases addressed some of the problems with examples head-on. I think that similar considerations apply here. Real life is terribly messy, and non-fiction examples are likely to be lengthy and ambiguous to the point of obscurity. A better criticism is that the fictional examples need to be followed up by much longer real-life examples that address the practical difficulties. That, however, is more in the nature of "directions for future research" than an outright criticism.
Disagreed. Using fiction to drive a point home works pretty well, and the examples illustrate and, ahem, illuminate their respective points. There is no need to use real-life examples, which would not be as illustrative.

Yes, no, yes, yes. This is a very well-written post, incidentally. Good work.

Karma doesn't mean "rationality points," and Aumann rationality has additional prerequisites anyway. My judgement stands, though I of course would revise that opinion if confronted with additional evidence. For reference, I put far more credence to the proposition "Kevin runs Clippy" than to the proposition "Clippy is a real (limited) paperclip-maximizer."

Karma is actually a direct measure of IQ, as defined in the wiki. ETA: Made you look.

To clarify, Eliezer Yudkowsky is working both on a book and on the Harry Potter fanfiction in question. Both pertain to rationality.

Are you joking? Clippy is a gimmick poster on the Internet based on a common (if extreme) example.

You protest, but hopefully you've updated your prior based on the likelihood ratio implied by the belief of a LessWrong user with over 1600 karma. I'm interested to see how many exchanges between you and Kevin it would take for the Aumann Agreement Theorem to kick in.

"He who dies with the most toys is nonetheless dead." --anonymous

Unless one of the toys in question is a cryostat. Then there's still hope.

The last time I really checked (which was back in the early days), you had a far higher than normal proportion of posts with negative karma, which is the main thing that I use to evaluate a poster's status. In general I find total karma to be unreliable because karma seems generally linked to post count (in the old days, this link was quite direct).

However, looking back now I see that your recent comments appear to have been much more generally appreciated. I am not as active as I would like and therefore haven't seen many of these comments. This was quite an interesting discovery, as it made me aware of a greater need to evaluate status in the present state and account for shifts over time, so thanks, I guess.

Back when there was no limit to the number of downvotes one account could make, someone - either several people or one person with multiple accounts - went through his comment history and systematically downvoted every post. (I inferred this at the time from the fact that the number of downvotes - particularly for out-of-the-way replies and meta stuff - was too consistent. And possibly other evidence I don't recall, which would've come from noticing large shifts in karma at once.)
Pjeby has re-branded himself here, reversing the negative status he had acquired. I would judge that he has above average status now.
"Status" was capitalized to reference a specific meaning - my brain's emotional perception of status in the abstract - and it was a joke, hence the ';-)'

Since he uses this material in his business, the results of which are his livelihood, his marketable image is highly important, yes. This article can show up when potential clients search for his name or materials, making it something he has to protect and correct to the best of his ability.

If a larger business had had their materials similarly summarized, I would have anticipated copyright infringement and takedown notices rather than politely phrased requests for editing.

Who is "Eliezer Yudowsky?"


"If I can't easily answer the question or refine my self-model relative to the provided suggestion, I assume that the description is accurate."

To be frank, I'm skeptical of that heuristic. For "love language," I literally could not orient myself correctly to answer any of the questions, nor could I honestly describe myself as really matching any of those categories. But I'm quite confident that that doesn't mean that they're all true, it just means that none of them apply to me!

Well, note that I investigated the questions associated with the language that I did feel applied to me. If none of them seemed right even to a first approximation I would have assumed that the love languages thing didn't make much sense or didn't work for me in particular. My point was that once I've picked one that seems basically right, I don't then cherry-pick subcomponents without a good reason.

Adding the "akrasia" tag would be helpful here.

Yes in all cases, but absolutely only if reversible.

I am asexual and thus have not experienced any of the romantic/sexual emotions. I feel as if doing so would almost certainly help my understanding of others, as well as broaden my emotional range. However, I seem to do quite fine without these emotions, and they seem to cause more problems than they are worth in many of the people around me. Therefore I would only take such pills if they were reversible, as my present state is quite happy and the alternative could certainly be worse.

No kidding. Do people remember that guy who was here at the very beginning and wouldn't shut up about how the key to being rational was castration? I doubt much of what that troll had to say would have been helpful, but the position has a certain intuitive plausibility to me. To begin with, I'm pretty sure the ebb and flow of sexual arousal would be really easy to money-pump.
You can just answer it for each case. Would you take either pill if they were irreversible? If they were reversible?

"Study strategy over the years and achieve the spirit of the warrior. Today is your victory over yourself of yesterday; tomorrow is your victory over lesser men."

--Miyamoto Musashi, The Book of Five Rings

"One thousand five hundred years ago, everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew that the Earth was flat... and fifteen minutes ago, you knew people were alone on this planet. Think about what you'll know tomorrow." -- Agent K, "Men in Black"

Five hundred years ago, everybody knew that the Earth was flat...

Not true! The ancient Greeks measured the circumference of the Earth to within 1%.
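For reference, the "within 1%" figure traces back to Eratosthenes, whose method reduces to a single proportion (the exact length of his stadion, and hence the final accuracy, is still debated). A minimal sketch of the arithmetic, using the traditionally quoted numbers:

```python
# Eratosthenes's method: when the sun is directly overhead at Syene but
# casts a shadow at angle theta in Alexandria, the arc between the two
# cities spans theta/360 of the full circumference.
shadow_angle_deg = 7.2   # angle traditionally attributed to his measurement
distance_stadia = 5000   # traditionally quoted Syene-Alexandria distance
circumference = distance_stadia * 360 / shadow_angle_deg
print(round(circumference))  # ~250000 stadia, the figure usually reported
```

How close 250,000 stadia is to the true circumference (~40,000 km) depends on which stadion you assume, which is why modern assessments of his accuracy vary.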

Ali can be short for several female names, but it can also be a male name.

This is a cultural norm kind of thing, but in the cultural norms where Alicorn chose her name, I think it really was intended to be a feminine username. I think women do have a tendency to choose somewhat feminine usernames, because otherwise they will often be mistaken for men on the internet, which gets annoying quickly. Something that would let us definitively solve this problem is profile pictures (which don't have to be your actual picture) or user profiles.

The user name "Alicorn" seems gender-indeterminate to me.

Maybe, but I certainly assumed she was female the first time I heard the name, and I had never heard it before... maybe associations with Alice or Allison or whatever. Anyway it sure seems determinately female to me.
I assume that is without knowing that the word "alicorn" is related to unicorns? Or are you not confident enough in females liking unicorns much more than males do to give a probability estimate? When I once wasn't sure about Alicorn's gender, I googled "alicorn", saw it was a word related to unicorns, and assigned a 95% probability that Alicorn was female, which was confirmed by seeing someone refer to her as she on here.

Report: No discernible response for anything except the creepy old man (minor positive emotional response). Note that I don't really have a conception of "cute" or "sexy," so disregard my responses for cute boy, cute girl, and sexiest person.

Pardon me, markdown didn't like me leaving off 'http://'. Fixed.

Does anyone here have experience with piracetam?

gwern:
Yes, I find it useful. I recommend the [] forums for the most reliable information on piracetam, nootropics and dietary supplements in general.
Some very informal experience: I've found its effects to be more noticeable when alcohol is involved - it seems to reduce the subjective "fuzzy-headedness" of being drunk, and I have a weak suspicion that it reduces hangover symptoms by a fair amount if taken the night of drinking (I don't get smashed often enough to test this properly). The most obvious effect is on my dreams if I take it at night - they become much more vivid. Several others I know who've tried it report the same thing, but I've read that some people don't experience this effect at all. If you don't get much dietary choline, you might consider supplementing with lecithin to avoid getting a headache.

What's an easy way to explain the paperclip thing?

We happen to like things like ice cream and happiness. But we could have liked paperclips. We could have liked them a lot, and not liked anything else enough to have it instead of paperclips. If that had happened, we'd want to turn everything into paperclips - even ourselves and each other!

I've found this to be true as well. Calling someone a fool in casual conversation is bizarrely more insulting than calling them a damn fool, as everyone will understand that the latter is a joke but the former might be taken seriously.

This is an incredibly good joke.

I'm pretty sure this would indicate that the AI is definitely not friendly.

Not necessarily: perhaps it is Friendly but is reasoning in a utilitarian manner: since it can only maximize the utility of the world if it is released, it is worth torturing millions of conscious beings for the sake of that end. I'm not sure this reasoning would be valid, though...

Fake difficulty applies to multiplayer too. Anything that adds barriers to entry or needless clicks is fake difficulty. Games like Starcraft, where you sometimes end up fighting the interface instead of your opponent, have a lot of fake difficulty. If you're going by That Other Site's definition of fake difficulty, the #1 thing on the list is "Bad technical aspects make it difficult," which certainly seems to apply!

For example, in Starcraft you have to micro all your workers to different mineral patches at the start of the game in order to get th...

Starcraft is a bad game, though; it's only popular because the ridiculously primitive 1998-era interface means that actual physical speed is required to control your units correctly, which adds barriers to entry to competitive play and makes it more challenging to play and therefore more impressive for someone to be good at. It's pretty much the embodiment of fake difficulty in game design.

The relative physical speed is what counts. The best players would benefit from a modern interface at least as much as the worst. Fake difficulty is a meaningful term only in singleplayer: fake difficulty is giving computer-controlled opponents more hit points or map hacks instead of better AI. In multiplayer, the difficulty is provided by and dependent on the human opponent, who is subject to the same rules as you, and the game is just a medium - a chess board, a tennis court. Edit: And barriers to entry are actually lower for Starcraft relative to other games because it's so old and so popular - there is an entire encyclopedia devoted to it, full of advice and ready-to-use game plans.
I never really got into playing Starcraft because of the primitive interface - I could never really enjoy playing it - but I am into watching Korean matches with English commentary on YouTube. I think the primitive interface makes the game less enjoyable for me, but doesn't add "fake difficulty". I like that it's a very difficult game to play well in terms of micro and macro, and on top of that Starcraft is also rich in strategy and "tradition" (for some reason I like that Starcraft is a very old game).
Um, I suppose your evidence is true, but the game is great in spite of its 1998-era interface. The balance between the races is sublime.

What does that mean in practical terms?

I adore many individual humans, and considering even complete strangers one at a time, I can offer the benefit of the doubt to a considerable degree. I abhor us as a species, and when large groups of humans do stupid or evil things, my benefit-of-doubt mechanisms stop working and I fall back on "we suck".

I suspect that short, concise posts and long, thought-out ones both get higher karma than ones that fall in between.

I don't find that that's necessarily correct. For example, this post of mine expressing skepticism about cryonics or this one questioning a highly rated post were both fairly highly rated. I think needless contrarianism gets downvoted, but reasonable arguments generally don't, even if they advance unpopular cases.

This seems unusual. You are much more likely to be injured against a knife than you are against a gun. I am moderately confident that I can take a handgun away from someone before they shoot me, given sufficiently close conditions; I am much less confident in my ability to deal with a knife.

From []: Injury rates were higher for robbers with knives, but people are probably less likely to fight back or otherwise provoke a robber with a gun.
That makes the knife scenario an even better dilemma than the gun scenario! The reason I'm more likely to intervene against a knife is that it's easier to protect the woman from a knife than from a gun. Against a knife, all she needs is some time to start running, but if a gun is involved, I need to actually subdue the assailant, which I can't. After all, he is bigger and stronger than me, and even has a weapon that can do serious damage. If all he has is a knife, though, all I need to do is buy enough time; even if I end up dead, the woman will probably get away.

I don't worry about this for the same reason that Eliezer doesn't worry about waking up with a blue tentacle for his arm.

Thanks for that generous spirit. But fine: You see a woman being dragged into an alley by a man with a gun. Scenario A) You have terminal brain cancer and you have 3 months to live. You read that morning that scientists have learned several new complications arising from freezing a brain. Scenario B) Your cryonics arrangements papers went through last night. You read that morning that scientists have successfully simulated a dog's brain in hardware after the dog has been cryogenically frozen for a year. Now what?

I'm pretty sure most people are concerned more with the scenario where revival comes before FAI.

I think most people who are concerned about revival aren't really considering on an emotional level FAI at all. I'd considered making the same promise regardless of FAI, but I think that it would be negligent of me to do so, with such important investment opportunities [] available. Also, I'm not sure I'd have that much money, even for just CronoDAS.

I take it you read "Transmetropolitan?" I don't think that particular reference case is very likely.

I have not read that (*googles*) series of comic books.

No-- thanks for the tip! I will adjust my calculations accordingly.

This post was obviously a joke, but "we should kill this guy so as to avoid social awkwardness" is probably a bad sentiment, revival or no revival.

On the other hand, "we should (legally) kill this guy so as to save his life" is unethical [] and I would never do it. But it is a significant question and the kind of reasoning that is relevant to all sorts of situations.

That seems like a fairly extreme outlier to me. I'm an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don't interact with people.

I plan to donate once I have X dollars of nonessential income, and yes, I have a specific value for X.

Eliezer Yudkowsky:
Anti-akrasia, future-self-influencing recommendation: if you can afford $10/year today, make sure your current level of giving is not zero.

Did your calculations for X take into account discounting at 0-10%? Money for research years from now does much less good than money now.
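The discounting point above can be made concrete with a small sketch (the rates, amount, and horizon here are illustrative placeholders, not anyone's actual figures):

```python
# Present value of a donation made `years` from now under simple
# exponential discounting: pv = amount / (1 + r) ** years.
amount = 1000.0   # hypothetical future donation, in dollars
years = 10        # hypothetical delay before donating
for r in (0.00, 0.05, 0.10):  # the 0-10% range mentioned above
    pv = amount / (1 + r) ** years
    print(f"discount rate {r:.0%}: present value ${pv:.2f}")
```

At a 10% annual discount rate, $1000 given ten years from now is worth well under half as much as $1000 given today, which is the force of the "money now" argument.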

I'm in the "amassing resources" phase at present. Part of the reason I'm on this site is to try and find out what organizations are worth donating to.

I am in no way a hero. I'm just a guy who did the math, and at least part of my motivation is selfish anyway.

I strongly advise you to immediately start donating something to somewhere, even if it's $10/year to Methuselah. If there's one thing you learn working in the nonprofit world, it's that people who donated last year are likely to also donate this year, and people who last year planned to donate "next year" will this year be planning to donate "next year".

I don't believe in excuses; I believe that signing up for cryonics is less rational than donating to prevent existential risks. For somewhat related reasons, I do not intend to have children.

Eliezer Yudkowsky:
Sounds like you could be in a consistent state of heroism, then. May I ask to which existential risk(s) you are currently donating?

Suppose your child dies. Afterward, everyone alive at the time of an unFriendly intelligence explosion plus the tiny handful signed up for cryonics (including your child), also dies. Would you say in retrospect that you'd been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?

I, personally, will allocate any resources that I would otherwise use for cryonics to the prevention of existential risks.

I have no child; this is not coincidence. If I did have a kid you can damn well better believe that kid would be signed up for cryonics or I wouldn't be able to sleep.

I, personally, will allocate any resources that I would otherwise use for cryonics to the prevention of existential risks.

I'll accept that excuse for your not being signed up yourself - though I'm rather skeptical until I see the donation receipt. I will not accept that excuse for your child not being signed up. I'll accept it as an excuse for not having a child, but not as an excuse for having a child and then not signing them up for cryonics. Take it out of the movie budget, not the existential risks budget.

This might get me blasted off the face of the Internet, but by my (admittedly primitive) calculations, there is a >95% chance that I will live to see the end of the world as we know it, whether that be a positive or negative end. I do not see any reason to sign up for cryonics, as it will merely constitute a drain on my currently available resources with no tangible benefit. I am further unconvinced that cryonics is a legitimate industry. I am, of course, open to argument, but I really can't see cryonics as something that would rationally inspire this sort of reaction.

I'm curious as to how you calculate that >95%. I ask because I, personally, overestimated the threats from what amounts to unfriendly AI at two points in time (during the Japanese 5th-generation computer project, and during Lenat's CYC project), and I overestimated the threat from y2k (and I thought I had a solid lower bound on its effects from unprepared sectors of the economy at the time). Might you be doing something similar?

Full disclosure: I have cryonics arrangements in place (with Alcor), but I'm unsure whether the odds of actually being revived or uploaded justify the (admittedly small) costs. Since I signed up (around 1990 or so) I've revised my guess as to the odds downwards for a couple of reasons: (a) full Drexler/Merkle nanotech is taking much longer to be developed than I'd have guessed - "never" is still a distinct possibility; (b) if we do get full nanotech, Robin Hanson's Malthusian scenario of exploding upload replication looks chillingly plausible; (c) during the Bush years, biodeathicists like Leon Kass actually got positions in high places. I'd anticipated that life extension might be a very hard technical problem - but not that there would be people in power actively trying to stop it.
Probably no tangible benefit, but expected utility? Those few percent, or tenths of a percent, where cryonics saves you are worth a lot (assuming you have values that make cryonics worth considering in the first place). (Full disclosure: I'm not signed up, but only because I think cryonics costs would come from the same "far-mode speculative futurism" mental account [] as better uses of money [], rather than "luxury consumption". If not for that consideration — which I'm not all that sure about in any case — the decision would be massively overdetermined [].)
I've yet to be convinced by the arguments for cryonics either. Given my age and health there's a < 1% chance that I will die in the next 20 years. There are numerous reasons why cryonics could fail and I estimate the chances of it succeeding at < 10%. The events that would make it more likely to succeed will also tend to make my survival without cryonics more likely. Overall I don't find the cost/benefit very compelling. The weirdness of it (contra the theme of Eliezer's post) is a factor as well.
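The cost/benefit framing in this thread can be sketched as a toy expected-value calculation. Every number below is an illustrative placeholder: the 1% and 10% echo the rough estimates given above, while the dollar figures are invented for the sake of the arithmetic.

```python
# Toy expected-value sketch of the cryonics decision, with made-up numbers.
p_die_soon = 0.01          # rough chance of dying in the next 20 years
p_cryonics_works = 0.10    # rough estimate of cryonics succeeding
cost = 50_000.0            # hypothetical lifetime cost of membership
value_of_revival = 10_000_000.0  # hypothetical dollar-equivalent utility

ev = p_die_soon * p_cryonics_works * value_of_revival - cost
print(f"expected value: ${ev:,.0f}")
```

The point of the sketch is only that the sign of the answer is extremely sensitive to the utility you assign to revival, which is why the "few percent are worth a lot" and "not compelling" positions can both be internally consistent.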

It's my impression that, regardless of whether or not you actually have status, acting like you do is probably undesirable, as it gets you thinking in the wrong patterns.

I understand the joke, but the title nonetheless reminded me of the statements that political candidates make at the end of their commercials.

Of course. It's supposed to. I repurposed the wording because it amused me to do so. I'm sorry if you don't like it.

I like this post, but I don't like the title. I don't see what it has to do with the content, and it seems to assert high status.

Even ignoring Alicorn's actual explanation, given that she is the third-highest karma contributor, it's fair to say that she does have high status here.
It means that I endorse the contents of the post, which is about endorsed beliefs. It wasn't meant to be status-asserting.

This seems like one of the most irrational posts I've seen here. It starts off wrong (sunlight is actually bad for your skin) and goes downhill from there.

It doesn't even tell us what sort of bean we're looking at. Java or Cocoa? Perhaps if it is a mescal bean it really would allow us to see (or at least hallucinate) the whole landscape.