All of Gastogh's Comments + Replies

Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you.

Sure, but not optimizing for a particular identity can easily be just as harmful. This goes especially for social situations; consider being gay and not optimizing for a non-gay facade in an emphatically anti-gay environment.

Given that, the obvious follow-up question is how to tell the good identities from the bad, and I think the post does well in identifying some of the bad types. This, for example:


... (read more)

Don't know if this is where it comes from, but I always thought of "sequences" as an elaboration on the idea of rationality as a martial art; the term has some significance in theatrical swordplay, and it could also be compared to the Japanese kata.

I read the first half, skimmed the second, and glanced at a handful of the slides. Based on that, I would say it's mostly introductory material with nothing new for those who have read the sequences. IOW, a summary of the lecture would basically be a summary of a summary of LW.

To me it seems like a joke.

To the extent that it is a joke, it is a bad joke: inappropriate to the context, with an undesirable expected influence, encouraging flawed patterns of thought. I.e., the feature of humor that allows it to bypass critical faculties would make the joke interpretation worse than a more direct interpretation. Something being a 'joke' does not make it immune from criticism. Or, rather, it often does make it immune from criticism, but this is unfortunate. This comment [] being overwhelmingly positively received in response to the text that it quotes is a negative sign. I speculate (or perhaps merely hope) that in a different thread it may not have been given as much leeway.

This would explain why some people recommend starting sentences with "I think..." etc. to reduce conflicts.

In a model-sharing mode that does not make much sense. Sentences "I think X" and "X" are equivalent.

I think it does make sense, even in model-sharing mode. "I think" has a modal function; modal expressions communicate something about your degree of certainty in what you're saying, and so does leaving them out. The general pattern is that flat statements without modal qualifiers are interpreted as being spoken... (read more)

Seconded. Granted, my sample size is pretty minuscule, but still.

And as an extra reason why LW folks might be interested in Rajaniemi's books, the second book of the series, The Fractal Prince, mentions something called "extrapolated volition" being at the heart of one of the cultures in the novels' setting.

One of Rajaniemi's short stories (not in either of these books) even had a mention of a "Coherent Extrapolated Volition" and a brief description of what that meant, IIRC.
Thanks for the suggestion!

Why do you think that having Asperger's gives you immunity to revulsion at the quality of a review?

  1. Are there other values that, if we traded them off, might make MFAI much easier?

I don't understand this question. Is it somehow not trivially obvious that the more values you remove from the equation (starting with "complexity"), the easier things become?

Right, but not all trade-offs are equal. Thinking-rainbows-are-pretty and self-determination are worth different amounts.

Sign me up for the interest list as well. On a related note: given the number of upvotes for the others who have expressed interest, the writeup might warrant a Discussion-level post when the time comes; if it does end up working anywhere near as well as Rhinehart's personal experiences, I feel we shouldn't risk the finding being buried in the comments of this thread.

Also, in case you don't share his misgivings about providing brand names, such a list would be appreciated. Part of the reason is that Rhinehart says he lives in one of the largest metropolita... (read more)

Man, my very own Discussion-level post! I'll start working on that. As for my list, I'm about 90% done with the first sweep, and then I'll go back through it and try to find better alternatives. One thing I've noticed so far is how outrageously high the dosages are in some of the vitamins that makers generally sell! I knew they were above the FDA recommendations, but I didn't know they were sometimes more than 3 orders of magnitude above!

I mostly steer clear of AI posts like this, but I wanted to give props for the drawing of unsurpassable elegance.

:-) You think I should break out and try my luck as an artist? Silly question - of course I should!

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of

... (read more)
I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small [].

I always was rather curious about that other story EY mentions in the comments. (The "gloves off on the application of FT" one, not the boreanas one.) It could have made for tremendously useful memetic material / motivation for those who can't visualize a compelling future. Given all the writing effort he would later invest in MoR, I suppose the flaw with that prospect was a perceived forced tradeoff between motivating the unmotivated and demotivating the motivated.

I would strongly prefer that Eliezer not write a compelling eutopia ever. Avatar was already compelling enough to make a whole bunch of people pretty unhappy a while back.

What's even more interesting is that if this idea has any actual basis in reality... then it offers the possibility of coming up with approaches to counter it: promoting the idea that waking up from cryo will involve being enmeshed in a community right away.

Do we expect that to really be the case, though?

With current activities, nope; that's why part of the promotion would have to be the laying of the foundation of that community, trying to help it come into being (or at least make it easier to do so).

This may be somewhat beside the point of the OP, but "cryonics" + "social obligations" in the context of the old headache about the popularity of cryonics reminded me of this:

The laws of different countries allow potential donors to permit or refuse donation, or give this choice to relatives. The frequency of donations varies among countries.

There are two main methods for determining voluntary consent: "opt in" (only those who have given explicit consent are donors) and "opt out" (anyone who has not refused is a d

... (read more)

I voted No, but then I remembered that under the terms of the experiment as well as for practical purposes, there are things far more subtle than merely pushing a "Release" button that would count as releasing the AI. That said, if I could I'd change my vote to Not sure.

No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method.

Some of this information has been released since the posting of the parent, but because the tone of the post feels like it was jumping a gun or two, I wanted to throw this out there:

There are good reasons why the media might not want to go into detail on these things, especially when the person in question was young, famous and popular. The relatively recent Bridgend suicide spiral was (is?) a prime example of such ... (read more)

I'm not sure how literally I'm supposed to take that last statement, or how general its intended application is. It just doesn't seem practicable.

I'm assuming you wouldn't drop everything else that's going on in your life for an unspecified amount of time in order to personally force a stranger to stay alive, all just as a response to them stating that it would be their preference to die. Was this only meant to apply if it was someone close to you who expressed that desire, or do you actually work full-time in suicide prevention or something?

Well, that's a best-case scenario. Obviously opportunity costs and such might make it impractical. But if possible you should prevent them from killing themself and work on persuading them not to try. I don't work in suicide prevention and I don't know anyone who does; this is just my judgement of the hypothetical scenario presented (with a few additional assumptions for details that weren't specified).

If they really can't even see that someone can care, then it certainly sounds as though the problem is in their understanding rather than your explanations. The viewpoint of "I don't care what happens if it doesn't involve me in any way" doesn't seem in any way inherently self-contradictory, so it'd be a hard position to argue against, but that shouldn't be getting in the way of seeing that not everyone has to think that way. Things like these three comments might have a shot at bridging the empathic gap, but if that fails... I got nothing.

This may seem like nitpicking, but I promise it's for non-troll purposes.

In short, I don't understand what the problem is. What do you mean by falling flat? That they don't understand what you're saying, that they don't agree with you, or something else? Are you trying to change their minds so that they'd think less about themselves and more about the civilization at large? What precisely is the goal that you're failing to accomplish?

On the occasions I've had this conversation, IIRC, I don't seem to have managed to even get to the stage of them understanding that I /can/ care about what happens after I die, let alone get to an agreement about what's /worth/ caring about post-mortem.

Wanting to kill a specific person may be a requirement for fueling the spell, sure, but I don't see why that necessarily entails everyone else being immune to what is essentially a profoundly lethal effect. Once a bullet is in the air, it doesn't matter what motivated the firing of the gun.

The bit about nobody mentioning collateral damage sounds like an argument from silence. I'll tentatively grant you the point about "no possible defense", but to me it seems like Moody could well have been talking about deliberate, cold-blooded murder rather tha... (read more)

I don't remember anything about the spell not being able to hit anything but the intended target, either in canon or the MoRverse. What's your source? Or, if there is no explicit source, what makes it "obvious"?

I just said. "You have to mean it", so it's odd that you could kill someone you didn't mean to. Even if you interpret it as "You have to want someone dead, not necessarily the same person", "if you're arrested for killing with it, there's no possible defense", and "I meant to kill the Death Eater, but I hit the bystander" is a possible defense. Also nobody ever mentioned collateral damage.

I started reading too late to catch most notes of this sort by EY (and I often skip Author's Notes anyway), but from personal real-time observation of other fanfics it seems to be a tremendous help for authors to beg for reviews, in any and all senses of "begging". Asking for stuff is good, and holding updates hostage for the price of reviews is even better (assuming there actually are any readers). Giving public thanks to reviewers also works.

Kudos to the one who formulated the questions. I found them unusually easy to answer, by and large.

I'm only puzzled at the lack of an umbrella option for the humanities in the question on profession. Were they meant to fall into the category of social sciences?

Not being that well-versed in the MLP-verse I didn't read the fic, but here's my two cents anyway:

If "I'm afraid of dying" didn't manage the intended emotional appeal, it may be because of those allegations of selfishness you already noted. One solution is to steer attention away from what death implies for her, and towards what it means for someone else. Altruism, if not overdone, should work better than self-interest (however enlightened). Here's an excerpt from one Damien's fanfic Ascension, which I felt worked quite well:

This Saria was just

... (read more)

Can anyone with a better historical perspective on these things tell me if there's a single recorded occurrence of the year 2045 being mentioned as the magic deadline for some cool futuristic thing before Permutation City was published? It just seems like I'm seeing that date a whole lot in these contexts.

Isn't it Kurzweil arguing that the singularity is going to happen in 2045 and who cares about confidence intervals?

Thanks for posting this here. I hadn't been keeping tabs on the SIAI site itself and hadn't noticed the whole matching drive until this post.

Upvoting for capturing the remark for those of us who didn't catch it before it was edited out. Yvain has the best puns.

I agree that there's some merit to treating alcohol's effects on you and others separately, but if we do that, shouldn't we then also work to exclude some of its benefits as "social externalities"? Like the whole "alcohol -> socializing -> mental well-being"-pattern?

You should exclude the mental well-being of the others you socialize with while drunk, yes. But that's not going to show up on your personal longevity.

Yeah, I guess the equation was misapplied there. The point was that the statistics won't (or might not) chalk the death up to alcohol like they should, which I'd say is a harmfully misleading omission; even if it's not a longevity problem for the drunk driver, it is for the other person.

Color me unconvinced. These "benefits" may come from any number of things, and taking alcohol as a general remedy may not be an advisable course of action because the problem is likely to be specific. Consider the following (I'll be using "longevity" as shorthand for "improvement WRT total mortality"):

  • Alcohol -> lowered social anxiety -> more socialization -> mental well-being -> longevity
  • Alcohol -> distraction from (seemingly) insurmountable problems -> mental well-being -> longevity
  • Alcohol -> [ins
... (read more)
I was about to say, "Of course they controlled for income, that's totally basic", but I looked. Most of the studies didn't control for class or income. :( Looking elsewhere, income is positively associated with drinking, so income could well be the hidden variable increasing drinking and decreasing mortality. Also, some of the studies controlled for body mass index, meaning that if your beer gut increases your mortality, that doesn't show up.
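The hidden-variable worry can be illustrated with a toy simulation (not from any of these studies; all numbers are hypothetical and purely for illustration): if income both raises the probability of drinking and lowers mortality, then drinkers show lower mortality in a naive comparison even when alcohol itself has no effect at all.

```python
import random

random.seed(0)

drinker_deaths, drinker_n = 0, 0
nondrinker_deaths, nondrinker_n = 0, 0

for _ in range(100_000):
    high_income = random.random() < 0.5
    # Income drives both drinking and mortality; alcohol has no direct effect.
    drinks = random.random() < (0.7 if high_income else 0.3)
    dies = random.random() < (0.05 if high_income else 0.15)
    if drinks:
        drinker_n += 1
        drinker_deaths += dies
    else:
        nondrinker_n += 1
        nondrinker_deaths += dies

# Expected analytically: drinkers ≈ 0.08, non-drinkers ≈ 0.12,
# even though "drinks" never feeds into "dies".
print(f"drinker mortality:     {drinker_deaths / drinker_n:.3f}")
print(f"non-drinker mortality: {nondrinker_deaths / nondrinker_n:.3f}")
```

A study that doesn't control for income would read this gap as a protective effect of alcohol, which is exactly the confounding worry raised above.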
Well said. It would actually be interesting to see some research on the biological side of alcohol consumption, say, some studies on the longevity of rats consuming C2H5OH-containing drinks versus their non-alcoholic controls. (At the very least, the rats might be saved from less pleasant experiments...)
The only way that would contribute to the total mortality rate for drinkers being lower than for non-drinkers would be if I'm more likely to kill a pedestrian given that the pedestrian is sober than given that the pedestrian is drunk. (OTOH, an effect such as “I'm (going to get) drunk, so I'm not driving tonight -> I'm walking back home rather than driving to there -> I'm less likely to die walking a mile than driving a mile” would be in the right direction, though --I guess-- much smaller than other effects. My money's on the biggest effect being the one about income.)
I don't think it would be right or proper to control for killing other people due to alcohol use even if you could. The social externalities of alcohol use are a separate question from the private benefits.

People can mean one of two things when they talk about sex ratios; the first is birth rates, and the second is the number of people that exist at a given moment. In much of the world men have a lifespan several years shorter than women (and lead riskier lives, though that may already be taken into account), which may indeed lead to women being the majority.

If you count them worldwide, given the selective abortions/infanticides in China I'm not that sure that's the case. ETA: From Wikipedia: The sex ratio for the entire world population is 101 males to 100 females. []

The best solution I’ve heard started by looking at who benefits from this norm [older women] and wondering whether they could have contributed to it.

Young men benefit from the decreased competition in the mating market.

Another, less plausible, suggestion I’ve heard is that it’s to do with mental capacity. I find this unconvincing because we have few objections to a high-status man dating a beautiful but low-intellect woman.

The objections never seemed all that few to me. The negative connotations of the term "trophy wife" are pretty well-established, IMO.

[comment deleted]

The Happiness and Self-Help-section might have Klevador's Be Happier in it. The post could serve as an index to many of the recurring themes in that section, as well as a springboard for further research, what with all those sources plugged at the end.

The two links to an article on Solving The Wrong Problem found in the original are dead. I'm doubtful of that article having much of value to add to what's right on the tin, but in case it did (or simply for the sake of completeness): does anyone know where it could be found? Googling the title returns thousands of hits, some of them blog posts by the same name by various authors.

Here you go [].

I'm considering buying Parachute and Flow, but I have a few questions about the latter. Its author has written more than one book on the topic, so I'd like to know:

a) Is this the only book among his publications that I should read?
b) ...and if not, which ones should I read and what's the appropriate order?
c) Are you recommending this particular book over the others by Csíkszentmihályi because you've read them all and consider it the best, or because you've only read the one and found it worth the time even in isolation?

I'm sorry; of Csikszentmihalyi's books, I have only read Flow. However, I have read at least 40 self-help books, and I would put that book in the top 4.

I believe EY mentioned somewhere that 'Verres' was a composite of Herreshoff and Vassar.


There are anecdotes where pseudo-explanations like "memory bias" just don't cut it—in order for you to confidently deny psi you have to confidently accuse them of lying,

Can you give an example or two of such anecdotes?

Am I the only one who has serious trouble following presentations in a fictitious dialogue format such as this? The sum of my experience of the whole Obert/Subhan exchange and almost every intermediate step therein boils down to the line:

Subhan: "Fair enough. Where were we?"

Nope, you're not the only one. That said, I also know people who react well to this sort of presentation. Different strokes, and all. (That said, this particular example of presentation-through-dialogue is IMHO relatively poor.)
You are not alone.

On Skype with Eliezer, I said: "Eliezer, you've been unusually pleasant these past three weeks. I'm really happy to see that, and moreover, it increases my probability that an Eliezer-led FAI research team will work. What caused this change, do you think?"

Eliezer replied: "Well, three weeks ago I was working with Anna and Alicorn, and every time I said something nice they fed me an M&M."

Made me smile. Thanks for sharing.

Hopefully now that the experiment is over, they will return to the original schedule of giving M&Ms for new HPMoR chapters. Seriously, people are suffering here. :D

I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.

More seriously, the Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice the total lack of interest in realistic violence and gore and anything more than one standard deviation outside the sexual norms of the society, and none of this due to lack of availability.

Eh? Total lack of interest? Have you ever been on 4chan? Realistic violence threads crop up regularly over there, and it's notorious for catering to almost any kind of sexual deviance the average person c... (read more)

I actually know various chans quite well, and they all pretend to be those totally ridiculous anything-goes places, but when you actually look at them, >90% of threads are perfectly reasonable discussions of perfectly ordinary subjects. Especially outside /b/. This generated far more interest on 4chan than all gore threads put together [].

For example, suppose a computer program needs to model people very accurately to make some predictions, and it models those people so accurately that the "simulated" people can experience conscious suffering. In a very large computation of this type, millions of people could be created, suffer for some time, and then be destroyed when they are no longer needed for making the predictions desired by the program. This idea was first mentioned by Eliezer Yudkowsky in Nonperson Predicates.

Nitpick: we can date this concern at least as far back as Ve... (read more)

For example, Butler (1863) argues that machines will need us to help them reproduce,

I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived understanding of what's up to scratch and what isn't in this day and age. It kind of reads like a strawman: "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be wort... (read more)

To be honest the entire concept of Kaj's paper reads like a strawman. Only in the sense that the entire concept is so ridiculous that it feels inexcusably contemptuous to attribute that belief to anyone. This is why it is a good thing Kaj is writing such papers and not me. My abstract of "WTF? Just.... no." wouldn't go down too well.
Yeah, we'll probably cut that sentence.

I'd say "moral atheism" is being used as an idiomatic expression; a set of more than one word with a meaning that's gestalt to its individual components. One of the synonyms for "atheism" is "godlessness", so by analogy "moral atheism" would just mean "morality-lessness".

We have a word for "morality-lessness", and it is amorality, which coincidentally works more naturally in your analogy: If morality is analogous to theism, then a-morality is analogous to a-theism. I hope you understand my trouble with the use of an idiom that implicitly equates morality with theism. (Well, amorality with atheism, which is more the problem.) (sorry about all the edits, this was written horribly.)

It paraphrases the bottom line of the metaethics sequence - or what I took to be the bottom line of those posts, anyway. Namely, that one can have values and a naturalistic worldview at the same time.

So, having values is moral theism? The choice of words seems suspect.

Whatever doubt or doctrinal Atheism you and your friends may have, don't fall into moral atheism.

-Charles Kingsley


PM sent. will do fine; the message system there runs faster than my email and a prolonged discussion would probably clutter other formats such as my primary emailbox or the threads around LW.

Some common forum might be necessary for projects whose commentariat consists of several people, so as to avoid people stating the same points over and over, but the idea of mailing lists doesn't hold much appeal - I get huge noise-to-signal ratios from some of my university mailing lists and I'd rather not add to the list of mailboxes I have to sieve through on a regular basis. I've never tried LJ; does it work differently and is it any better?

It's better for hosting discussions among multiple people. So far I haven't seen the kind of demand that would warrant setting up a community there, though.

Sounds good. Will you be uploading yours here à la Chapter One Part One up there, or is there another site you're using?

I've been emailing the complete first chapter to interested readers -- if you PM me your email address I'll send it to you. Should I leave my commentary on? I'm also willing to set up a mailing list or a private LJ community if we had enough folks interested in using something like that. I'd rather not use a wiki or a public site because my novel is intended for eventual publication.

I'm starting to feel I don't know what's being meant by uncertainty here. It is not, to me, a reason in and of itself either way - to push the button or not. And not being a reason to do one thing or another, I find myself confused at the idea of looking for "reasons other than uncertainty". (Or did I misunderstand that part of your post?) For me it's just a thing I have to reason in the presence of, a fault line to be aware of and to be minimized to the best of my ability when making predictions.

For the other point, here's some direct disclosure... (read more)

Perhaps I confused the issue by introducing the word "uncertainty." I'm happy to drop that word. You started out by saying "The reason why perhaps not push the button: unforeseeable (?) unintended consequences." My point is that there are unforeseen unintended consequences both to pushing and not-pushing the button, and therefore the existence of those consequences is not a reason to do either. You are now arguing, instead, that the reason to not-push the button is that the expected consequences of pushing it are poor. You don't actually say that they are worse than the expected consequences of not-pushing it are better, but if you believe that as well, then (as I said above) that's an excellent reason to not-push the button. It's just a different reason than you started out citing.