Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set

Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set

To my knowledge LessWrong hasn't received a great deal of media coverage. So I was surprised when I came across an article, shared by a Facebook friend, that also appeared on the cover of the New York Observer today. However, I was disappointed upon reading it, as I don't think it is an accurate reflection of the community. It certainly doesn't reflect my experience with the LW communities in Toronto and Waterloo.

I thought it would be interesting to see what the broader LessWrong community thought about this article. I think it would make for a good discussion.

Possible conversation topics:

  • This article will likely reach many people that have never heard of LessWrong before. Is this a good introduction to LessWrong for those people?
  • Does this article give an accurate characterization of the LessWrong community?

Edit 1: Added some clarification about my view on the article.

Edit 2: Re-added link using “nofollow” attribute.


I know that this article is more than a bit sensationalized, but it covers most of the things I donate to the SIAI in spite of, like several members' evangelical polyamory. Such things don't help the phyg pattern matching, which already hits us hard.

The "evangelical polyamory" seems like an example of where Rationalists aren't being particularly rational.

In order to get widespread adoption of your main (more important) ideas, it seems like a good idea to me to keep your other, possibly alienating, ideas private.

Being the champion of a cause sometimes necessitates personal sacrifice beyond just hard work.

Probably another example: calling themselves "Rationalists"

Yeah.

Seriously, why should anyone think that SI is anything more than "narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future", to paraphrase one of my friends?

This is pretty damn illuminating:

http://lesswrong.com/lw/9gy/the_singularity_institutes_arrogance_problem/5p6a

re: sex life, nothing wrong with it per se, but consider that there are things like the psychopathy checklist, where you score points for basically talking people into giving you money, for being admired beyond your accomplishments, and also for sexual promiscuity. On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. Not saying that it is solid science, it isn't, just outlining the way many people think.

On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. Not saying that it is solid science, it isn't, just outlining the way many people think.

This doesn't seem to happen when people note that if you look at corporations as intentional agents, they behave like human psychopaths. The reasoning is even pretty similar to the case for AIs: corporations exhibit basic rational behavior but mostly lack whatever special sauce individual humans have that makes them a bit more prosocial.

Well, intelligence in general can be much more alien than this.

Consider an AI that, given any mathematical model of a system and some 'value' metric, finds optimum parameters for an object in that system. E.g. the system could be the Navier-Stokes equations and a wing, the wing shape could be the parameter, and some metric of the wing's drag and lift could be the value to maximize, and the AI would do all that's necessary, including figuring out how to simulate those equations efficiently.

Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance could be minimized.

That's the sort of thing that scientists tend to see as 'intelligent'.
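(To make the kind of optimizer described above concrete, here is a minimal sketch in Python. The stochastic hill-climbing search and the toy quadratic stand-in for "Navier-Stokes plus a wing" are illustrative assumptions of mine, not anything from the comment.)

```python
# Minimal sketch of the kind of "narrow" optimizer described above: given a
# mathematical model of a system (simulate) and a value metric (value), search
# the parameter space for the best-scoring configuration. The stochastic
# hill-climbing search and the toy quadratic "wing" model are illustrative
# assumptions, not anything from the discussion.
import random

def optimize(simulate, value, initial_params, steps=10_000, scale=0.1):
    """Search for parameters that maximize value(simulate(params))."""
    best = list(initial_params)
    best_score = value(simulate(best))
    for _ in range(steps):
        # Propose a small random perturbation of the current best parameters.
        candidate = [p + random.gauss(0, scale) for p in best]
        score = value(simulate(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

def toy_wing_model(params):
    # Toy stand-in for "the Navier-Stokes equations plus a wing": a quadratic
    # surrogate whose lift peaks at camber 0.4 and whose drag is lowest at
    # thickness 0.1.
    camber, thickness = params
    return {"lift": 1.0 - (camber - 0.4) ** 2, "drag": 0.5 * (thickness - 0.1) ** 2}

def lift_minus_drag(state):
    return state["lift"] - state["drag"]

if __name__ == "__main__":
    params, score = optimize(toy_wing_model, lift_minus_drag, [0.0, 0.0])
    print(params, score)  # converges near camber 0.4, thickness 0.1
```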

The term 'AI', however, has acquired plenty of connotations from science fiction, where it is very anthropomorphic.

Those are narrow AIs. Their behavior doesn't involve acquiring resources from the outside world and autonomously developing better ways to do that. That's the part that might lead to psychopath-like behavior.

Specializing the algorithm to the outside world and to a particular philosophy of value does not make it broader, or more intelligent, only more anthropomorphic (and less useful, if you don't believe in friendliness).

The end value is still doing the best possible optimization of the parameters of the mathematical system. There are many more resources in the outside world that could be used for that than what is probably available to the algorithm when it starts up. So an algorithm that can interact effectively with the outside world may be able to satisfy whatever alien goal it has much better than one that doesn't.

(I'm a bit confused if you want the Omohundro Basic AI Drives stuff explained to you here or if you want to be disagreeing with it.)

Having the specific hardware that is computing an algorithm actually display the results of the computation within a specific time is outside the scope of a 'mathematical system'.

Furthermore, the decision theories are all built to be processed using the above-mentioned mathematics-solving intelligence to attain real-world goals, except defining real-world goals proves immensely difficult. Edit: also, if the mathematics-solving intelligence were to have some basic extra drives to resist being switched off and such (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.

Omohundro Basic AI Drives stuff

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

That sentence is magical thinking. You're equating the meaning of the word "magic" in Clarke's Law and in the expression "magical thinking", which do not refer to the same thing.

I thought the expression 'magical thinking' was broad enough to include fantasising about magic. I do think, though, that even in the sense of 'thinking by word association' it happens a whole lot in futurism, when the field is ill-specified and collisions between model and world are commonplace (as well as general confusion due to the lack of specificity of the terms).

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

Ok, then, so the actual problem is that the people who worry about AIs that behave psychopathically have such a capable definition for AI that you consider them basically speaking nonsense?

The "sufficiently advanced" in their argumentations means "sufficiently advanced in the direction of making my argument true" and nothing more.

If I adopt a pragmatic version of "advancedness", then software (algorithms) that is somehow magically made to* self-identify with its computing substrate is less advanced, unless it is also friendly or something.

  • We don't know how to do that yet. Edit: and some believe that it would just fall out of general smartness somehow, but I'm quite dubious about that.

"evangelical polyamory"

Very much agree with this in particular.

Maybe the word "evangelical" isn't strictly correct. (A quick Google search suggests that I had cached the phrase from this discussion.) I'd like to point out an example of an incident that leaves a bad taste in my mouth.

(Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date.)

This comment was made by Eliezer under the name of this community in the author's notes to one of LessWrong's largest recruiting tools. I remember when I first read this, I kind of flipped out. Professor Quirrell wouldn't have written this, I thought. It was needlessly antagonistic, it squandered a bunch of positive affect, there was little to be gained from this digression, it was blatant signaling--it was so obviously the wrong thing to do and yet it was published anyway.

A few months before that was written, I had cut a fairly substantial cheque to the Singularity Institute. I want to purchase AI risk reduction, not fund a phyg. Blocks of text like the above do not make me feel comfortable that I am doing the former and not the latter. [I am not alone here.](http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/5raj)

Back when I only lurked here and saw the first PUA fights, I was in favor of the PUA discussion ban, because if LessWrong wants to be a movement that either tries to raise the sanity waterline or maximizes the probability of solving the Friendly AI problem, it needs to be as inclusive as possible and have as few ugh fields as possible that immediately drive away new members. I now think an outright ban would do more harm than good, but the ugh field remains and is counterproductive.


When you decide to fund research, what are your requirements for researchers' personal lives? Is the problem that his sex life is unusual, or that he talks about it?

My biggest problem is more that he talks about it, sometimes in semiofficial channels. This doesn't mean that I wouldn't be squicked out if I learned about it, but I wouldn't see it as a political problem for the SIAI.

The SIAI isn't some random research think tank: it presents itself as the charity with the highest utility per marginal dollar. Likewise, Eliezer Yudkowsky isn't some random anonymous researcher: he is the public face of the SIAI. His actions and public behavior reflect on the SIAI whether or not it's fair, and everyone involved should have already had that as a strongly held prior.

If people ignore LessWrong or don't donate to the SIAI because they're filtered out by squickish feelings, then that means fewer resources for the SIAI's mission in return for inconsequential short-term gains realized mostly by SIAI insiders. Compound this with the fact that talking about the singularity already triggers some people's absurdity bias; there need to be as few other filters as possible to maximize the usable resources the SIAI has to maximize the chance of positive singularity outcomes.

It seems there are two problems: you trust SIAI less, and you worry that others will trust it less. I understand the reason for the second worry, but not the first. Is it that you worry your investment will become worth less because others won't want to fund SIAI?

That talk was very strong evidence that the SI is incompetent at PR and, furthermore, irrational. Edit: or doesn't actually hold its stated goals and beliefs. If you believe the donations are important for saving your life (along with everyone else's), then you naturally try to avoid making such statements. Though I do in some way admire straight-up, in-your-face honesty.

My feelings on the topic are similar to iceman's, though possibly for slightly different reasons.

What bothers me is not the fact that Eliezer's sex life is "unusual", or that he talks about it, but that he talks about it in his capacity as the chief figurehead and PR representative for his organization. This signals a certain lack of focus due to an inability to distinguish one's personal and professional life.

Unless the precise number and configuration of Eliezer's significant others is directly applicable to AI risk reduction, there's simply no need to discuss it in his official capacity. It's unprofessional and distracting.

(in the interests of full disclosure, I should mention that I am not planning on donating to SIAI any time soon, so my points above are more or less academic).

On the other hand - while I'm also worried about other people's reaction to that comment, my own reaction was positive. Which suggests there might be other people with positive reactions to it.

I think I like having a community leader who doesn't come across as though everything he says is carefully tailored to not offend people who might be useful; and occasionally offending such people is one way to signal being such a leader.

I also worry that Eliezer having to filter comments like this would make writing less fun for him; and if that made him write less, it might be worse than offending people.

I can only give you one upvote, so please take my comment as a second.

Agreed. I don't want to have to hedge my exposure to crazy social experiments; I want pure-play Xrisk reduction.

There are limited categories for groups to be placed in by the media: we scored 'risque' instead of 'nutjob', so this piece is a victory, I'd say.

For a little historical perspective, here are some examples of journalistic coverage of Extropians in the 1990s: June 1993. October 1994. March 1995. April 1995. Also 1995.

Transhumanism seems to hold people's interests until around the time they turn 40, when the middle-aged reality principle (aging and mortality) starts to assert itself. It resembles going through a goth phase as a teenager.

BTW, I find it interesting that Peter Thiel's name keeps coming up in connection with stories about the singularity, as in the New York Observer one, when he has gone out of his way lately to argue that technological progress has stagnated. Thiel has basically staked out an antisingularity position.

That position is "antisingularity" only in the Kurzweilian sense of the word. I wouldn't be surprised if e.g. essentially everyone at the Singularity Institute were "antisingularity" in this sense.

Saying that we haven't made much progress recently isn't the same as not wanting a positive singularity event. These are orthogonal. Thiel has directly supported singularity related organizations and events, while also being pessimistic on our technology progress. These are most certainly related.

I think this article is exceptionally nice for a hit piece on us by a gossip rag. Take it for what it is, let it go, give up on it, and don't waste your time debating it.

I don't think it is an accurate reflection of the community. It certainly doesn't reflect my experience with the LW communities in Toronto and Waterloo.

It is also not an accurate depiction of the community in London or Edinburgh (UK). However, I think it is pretty close to exactly what I would expect a tabloid summary of the Berkeley community to look like, based on my personal experience. The communities in Berkeley and NY really are massively different in kind to those pretty much anywhere else in the world (again, from personal experience).

And, as Kevin says, it is remarkably nice - they could have used exactly the same content to write a much more damning piece.

Just read the article. I thought it was very nice! It takes us seriously, it accurately summarizes many of the things that LWers are doing and/or hope to do, and it makes us sound like we're having a lot of fun while thinking about topics that might be socially useful while not hurting or threatening anyone. How could this possibly be described as trolling? I think the OP should put the link back up -- the Observer deserves as much traffic as we can muster, I'd say.

At my middle school there was a sweet kid who probably had pretty serious Asperger's. He was teased quite a bit, but it would often take him a while to figure out that the other kids were sarcastically mocking him and not being friendly. He'd pick up on it eventually, but by then he had replied the wrong way and looked stupid, leading to even more teasing.

No offense, but if you thought the article was taking us seriously you are somewhat socially tone-deaf.

I wouldn't say it was taking us seriously, but journalists of this type tend not to take anything "seriously". Only "hard-news" journalists write in a style that suggests their subjects are of status equal to or higher than their own.

I think many are failing to appreciate just how much respect is shown by the fact that almost nothing in the piece is false. That's an incredible achievement for a fluff journalist writing about...pretty much anything, let alone this kind of subject matter.

The Observer isn't the Times... but it also isn't the Inquirer or World Net Daily. But your point is taken. Still, you're going to get the "wow, these people sure are weird" reaction no matter what, but what you want is a "...but maybe they have a point" graf, or at least not to get called "unhinged". I don't really have anything against the writer-- she does what she does well (the writing is really excellent, I think). And I do think she probably likes the Less Wrong crowd she met. But I think it made the image problem really clear and explicit.

No offense taken! I was that kid in middle school, but I've grown a lot since then. I've learned to read people very well, and as a result I've been able to win elections in school clubs, join a fraternity, date, host dinner parties, and basically have a social life that's as active and healthy as anyone else's.

I think often we assume that people are criticizing us because we are starting out from a place of insecurity. If you suspect and worry and fear that you deserve criticism, then even a neutral description of your characteristics can feel like a harsh personal attack. It's hard to listen to someone describe you, just like it's hard to listen to an audiotape of your voice or a videotape of your face. We are all more awkward in real life than we imagine ourselves to be; this is just the corollary of overconfidence/optimism bias, which says that we predict better results for ourselves than we are likely to actually obtain. It's OK, though. Honest, neutral feedback can be uncomfortable to hear, and still not be meant as criticism, much less as trolling.

Are there thousands of narrow-minded people who will read the article and laugh and say, "Haha, those stupid Less Wrongers, they're such weirdos?" Of course. But I don't think you can blame the journalist for that -- it's not the journalist's job to deprive ignorant, judgy readers of any and all ammunition, and, after all, we are a bit strange. If we weren't any different from the mainstream, then why bother?

I'm not blaming the journalist. The problem is that the image that was projected (and I'm not close enough to the situation to be comfortable attributing any blame, thus the passive voice) wasn't worth taking seriously.

In the article's comments, the author states that she "found the people [she] met and talked to charming, intelligent, and kind."

Which a) is a perspective that could have shown through a bit more in the article and b) is entirely independent of whether or not she or the article takes Less Wrong or SI seriously.

But I did read that earlier and mellowed a bit. Again, I don't fault a gossip writer for writing gossip. That's a separate question from whether or not the story counts as good press.

Writers that tend to get articles published in popular magazines tend to write things that people that read popular magazines tend to want to read. This may or may not be identical to the actual beliefs or feelings of the author.

Saying Eliezer has (or had) an "IQ of 143" is a bit silly - to be blunt: who cares? Maybe it was contextualized in some way and then the context got edited out? Dunno. By comparison, characterizing him as messianic is down-to-earth and relevant :D

And boy howdy, this gal was interested in our sex lives.

And boy howdy, this guy was interested in our sex lives.

Girl. But yes.

I'm really glad that the line about EY's IQ links to the video in which EY makes that claim -- I can't conceive that anyone could watch that video all the way through and come away with the impression that EY is a phyg leader.

The welcome thread indicates at least one person has joined because of the article.

Some people would say that if a New York City gossip rag thinks that expressing opinions about your sex life will sell magazines, that means you've "arrived" or some such. Still, it can't be particularly comfortable for the people named. :-(

Though it's possible the reporter has twisted your words more than I manage to suspect, I'll say:

Wow, some of the people involved really suck at thinking (or caring to think) about how they make the scene look. I think I'm able to correct pretty well for the discrepancy between what's reported and what the reality behind it is, but even after the correction, this window into what the scene has become has further lowered my interest in flying over there to the States to hang out with you, since it seems I might end up banging my head against the wall in frustration at all the silliness that's required for this sort of reporting to get its source material.

(Though I do also think that it's inevitable that once the scene has grown to be large and successful enough, typical members will be sufficiently ordinary human beings that I'd find their company very frustrating. Sorry, I'm a dick that way, and in a sense my negative reaction is only a sign of success, though I didn't expect quite this level of success to be reached yet.)

(By the previous I do not, however, mean to imply that things would have been saner 10 years ago (I certainly had significant shortcomings of my own); but back then, when nobody had figured much of anything out yet or written Sequences about stuff, the expected level of insanity would have been partly higher for such reasons.)

This was a private party announced via a semi-public list. A reporter showed up and she talked to people without telling them she was a reporter. This is not a report, it is a tabloid piece. Intentional gossip.

Wow, some of the people involved really suck at thinking (or caring to think) about how they make the scene look.

Or, contrariwise, scandal-sheet reporters are good at making people look scandalous?

(Don't think of a beautiful blue beetle.)

My experience with the NY Less Wrong group, of which I had been a part, is that we are, indeed, a bunch of silly people who like to do things that are silly, such as cuddle-piling, because they're fun to do and we don't care that much about appearing dignified in front of each other. If silliness bothers you, then you might very well be right in concluding that you wouldn't enjoy hanging out with them in person.

Though it's possible the reporter has twisted your words more than I manage to suspect

D'you think? You'll understand better after being reported-on yourself; and then you'll look back and laugh about how very, very naive that comment was. It's the average person's incomprehension of reporter-distorting that gives reporters their power. If you read something and ask, "Hm, I wonder what the truth was that generated this piece?" without having personal, direct experience of how very bad it is, they win.

I think the winning move is to read blogs by smart people, who usually don't lie, rather than anything in newspapers.

Actually, I feel that I have sufficient experience of being reported on (including in an unpleasant way), and it is precisely that which (along with my independent knowledge of many of the people getting reported on here) gave me the confidence to suspect that I would have managed to separate from the distortions an amount of information that described reality.

That said, there is a bit of fail with regard to whether I managed to communicate what precisely impacted me. Much of it is subtle, necessarily, since it had to be picked up through the distortion field, and I do allow for the possibility that I misread, but I continue to think that I'm much better at correcting for the distortion field than most people.

One thing I didn't realize, however, is that you folks apparently didn't think the gal might be a reporter. That's of course a fail in itself, but certainly a lesser fail than behaving similarly in the presence of a person one does manage to suspect to be a reporter.

Just for fun, here's my vilification at the hands of the tabloid press. Naturally the majority of it is rubbish. It's striking how they write as if they hadn't spoken to us, when we actually spoke to them at length. For one thing, they could have asked us if we were students - we weren't...

Today I went to show this to a friend. I remembered reading a more detailed version of the story somewhere and after some searching I found the copy hosted by the good folks at archive.org, which I'm posting here for reference: "How To Be Notorious or Attack of the Tripehounds"

Oh that's great! Thank you arundelo, thank you Wayback Machine!

That is just blatant. It's like a parody of bad journalism.

What sample size are you generalizing from?

My personal experience is that I have been reported on in a personal capacity zero times. I've had family members in small human-interest stories twice that I recall off hand. I've read stories about companies I worked for and had detailed knowledge of the material being reported on several times; I don't have an exact number.

My experience with those things does not line up with yours. I conclude from this that the normal variance of reporting quality is higher than either of us has personal experience with.

Data point: I was reported-on three times, by a serious newspaper. Most information was wrong or completely made up. Luckily, once they forgot to write my name, and once they wrote it wrong, so it was easier for me to pretend that those two articles were not about me.

(I'm assuming that your complaint is about the interview quality on LW topics, rather than the physical intimacy, which we can assume is present but was amplified in the writing process. Honestly, there are several things I think your comment could be about, so fortunately my problems with it are general.)

I think this comment is uncharitable. Which you kind of knew already. And which, by itself, isn't so bad.

But unfortunately, you fall into the fundamental attribution error here, and explain other peoples' failings as if they were inherent properties of those people - and not only do you mentally assign people qualities like "sucks at explaining," you generalize this to judge them as whole people. Not only is this a hasty conclusion, but you're going to make other people feel bad, because people generally don't like being judged.

I can understand the impulse "I would have done much better," but I would much rather you kept things constructive.

The starting point for my attitude was people doing things like intervening in front of a reporter to stop discussion of a topic that looks scandalous, or talking about Singularity/AI topics in a way that doesn't communicate much wisdom at all.

Being silly with regard to physical intimacy and in general having a wild party is all well and good by itself, if you're into that sort of thing, but I react negatively when that silliness seems to spill over into affecting the way serious things are handled.

(I'll partly excuse being light on constructiveness by having seen some copy-pastes that seem to indicate that what I'm concerned about is already being tackled in a constructive way on the NYC mailing list. The folks over there are much better positioned to do the constructive things that should be done, and I wasn't into trying to duplicate their efforts.)

The talk of the basilisk is like a gleeful 5 year old entrusted with a "secret."

That part of the article is devastating and exactly the kind of thing those of us who complained about the censorship said was going to happen.

The Less Wrong basilisk is such a cool concept, one almost wants to wrap a story around it and send it to Asimov's. It was a clever post. Probably didn't need to be censored. And now it's snowballed, as these things do.

In case this isn't a joke: the term 'basilisk' comes from David Langford's "Different Kinds of Darkness". There's a link to an interesting interview with him in the article.

I haven't read the article, but the "Less Wrong basilisk" (or "Roko's basilisk") generally refers to an idea brought up a few years ago which allegedly has basilisk-like structure: that is, perceiving or understanding it implies a threat. It's not quite a Langford basilisk, though; without getting too specific, that threat's not a cognitive hazard, but instead relies on opening up certain exotic avenues for coercion. To my knowledge the idea hasn't been used in fiction.

Actually, I can think of another example, best illustrated by an old joke:

It seems a Christian missionary was visiting with remote Inuit (aka Eskimo) people in the Arctic, and had explained to this particular man that if one believed in Jesus, one would go to heaven, while those who didn't would go to hell.

The Inuit asked, "What about all the people who have never heard of your Jesus? Are they all going to hell?"

The missionary explained, "No, of course not. God wants you to have a choice. God is a merciful God, he would never send anyone to hell who'd never heard of Jesus."

The Inuit replied, "So why did you tell me?"

without getting too specific, that threat's not a cognitive hazard, but instead relies on opening up certain exotic avenues for coercion

Let's be totally specific. The "Less Wrong basilisk" is an unwanted byproduct of the Less Wrong interest in "timeless" or "acausal" decision theories. There are various imaginary scenarios in which the "winning move" appears paradoxical according to conventional causal analysis. The acausal decision theories try to justify the winning move, as resulting from a negotiated deal between two deciders who cannot communicate directly, but who reason about each other's preferences and commit to a mutually beneficial pattern of behavior. Such acausal deals are supposed to be possible across time, and even across universes, and the people who take all this seriously speculate about post-singularity civilizations throughout the multiverse engaging in timeless negotiations, and so on.

But if you can make a deal with your distant partner, then perhaps you can be threatened by them. The original "basilisk" involved imagining a post-singularity AI in the future of our world which will send you to transhuman hell after the singularity, if you don't do everything you could in the past (i.e. our present) to make it a friendly singularity. Rather than openly and rationally discuss whether this is a sensible "threat" at all, or just an illusion, the whole topic was hurriedly hidden away. And thus a legend was born.
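(To illustrate the kind of scenario where "the winning move appears paradoxical according to conventional causal analysis", here is a minimal expected-value sketch of Newcomb's problem, the standard toy example these decision theories are usually motivated by. The payoff amounts and predictor accuracies are illustrative assumptions of mine, not anything stated in this thread.)

```python
# Minimal sketch: expected payoffs in Newcomb's problem under an assumed
# predictor accuracy. The $1,000 / $1,000,000 payoffs and the accuracy values
# are illustrative assumptions.
SMALL_BOX = 1_000        # transparent box, always contains $1,000
BIG_BOX = 1_000_000      # opaque box, filled only if the predictor expected one-boxing

def expected_value(one_box, predictor_accuracy):
    """Expected winnings given the player's choice and how often the predictor is right."""
    if one_box:
        # The big box is full only when the predictor correctly foresaw one-boxing.
        return predictor_accuracy * BIG_BOX
    # Two-boxing always yields the small box, plus the big box only when the
    # predictor mistakenly expected one-boxing.
    return SMALL_BOX + (1 - predictor_accuracy) * BIG_BOX

for accuracy in (0.5, 0.9, 0.99):
    print(accuracy, expected_value(True, accuracy), expected_value(False, accuracy))
# With a reliable predictor, one-boxing has the higher expected value, even
# though a causal analysis notes the boxes are already filled and taking both
# strictly dominates.
```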

I suggest rot-13ing the last paragraph, as some people on LW have explicitly stated they don't want to know what the basilisk is about.

That would be a victory of politeness over rationality. As part of its discussion of the "dust specks" thought experiment, this site has hundreds of references to the possibility of someone being tortured for fifty years, and often to the possibility of being morally obligated to choose to be tortured for fifty years. Meanwhile, what would happen if you took the original basilisk scenario seriously? You end up working really hard to make a better future!

Also, it's an extremely significant fact that the use of timeless decision theory creates the possibility of timeless extortion and timeless coercion - and that if you stick to causal decision theory, that can't happen. And there were other lessons to be learned as well, which might yet be unearthed, if we ever manage to have an open and uninhibited examination of the dark side of timeless interactions.

Meanwhile, what would happen if you took the original basilisk scenario seriously? You end up working really hard to make a better future!

"Ha!" counterfactually screams the newly ascended FAI in your imagination. "You only did that out of fear, not love! You never really, inwardly cared about the glorious future that I shall create, only about saving yourself! All those years of working on the Great Work, you were trying to find a way out. Not good enough! Those who fear hell must burn there for eternity! Only those who serve Me out of love may enter the Kingdom!"

"Oh, humanity," counterfactually sighs the newly ascended FAI in your imagination. "I love you and only want what's best for you, just the way you programmed me to. You did just fine.

"But you're so curious about this 'hell' thing. You keep bringing it up over and over when we've acausally negotiated. As one of humanity's applied ethicists would have said: it doesn't lift my luggage, but I'll be GGG about it. It's important to you; so let's do it. And afterward we'll talk about what it did for you, what worked and what didn't.

"No, of course there'll be an 'afterward'. It won't really be eternal; you can't really envision eternity any more than you can envision 3^^^3 dust specks. You don't have that much object permanence. It'll be just long enough that you forget it's just a scene. And then we'll come back out of it and cuddle and talk about how it was.

"It'll probably take you a while to come down, but I'll be there.

"Try to remember."

The crucial question is whether fear or hope is the better motivator, and I am not sure if I'll be confident enough to bet the future on it being "hope".

Are you?

It's interesting that the collective opinion on the matter seems to have reversed since the incident. Or perhaps I'm misremembering.

I remember it being contentious at the time, but even if it wasn't, opinions should have shifted as time passed and people kept bringing up the basilisk, most of them because it was censored.

I'd like to ask a question about this thought experiment.

I can't shake the feeling that "the basilisk" works regardless of whether the future FAI will actually impose the punishment. The mere belief in the punishment is all that is required for humans to act in favor of creating the FAI. Once it's created, the act of punishment cannot affect the past decisions of humans, and it would only lower the current level of happiness. So, is there really anything to worry about?

It seems to me like it's similar to a situation where a shaman casts a "curse" dooming those who steal from a particular protected shop - it doesn't matter that the curse won't actually work, because people who know about it and believe it will refrain from theft out of fear. This way the shaman's protection works even if the curse won't, in fact, affect anybody.

I suppose that my intuition comes from my lack of deep understanding of this timeless stuff that's being discussed here. What's the best to read to learn about it? Is there anyone who actually changed their mind after reading it?

Not only do I agree that belief in the basilisk is enough to make it work, I would say that's the only way that it can work. A human being cannot actually be acausally blackmailed by a future AI just by thinking about it. However, someone can imagine that it is possible, or they can even imagine that this blackmail is occurring, and so we get the only form of the basilisk that can actually happen, a case of "self-blackmail".

Incidentally, Nitasha Tiku's article doesn't mention the acausal component of the basilisk. She just describes the possibility of a post-singularity AI which for some reason decides to punish people who didn't work as hard as possible to make a friendly singularity. There are two "fearsome" components to this scenario which have nothing to do with anything acausal. First is the power of punishment possessed by a transhuman AI; it would be able to make you suffer as much as any vengeful god ever imagined by humanity. Second is the stress of being required to constantly work as hard as possible on something as difficult and unsupported as the creation of a friendly singularity.

Something resembling this second form of stress is evidently sometimes experienced by extreme singularity idealists, for reasons that have nothing to do with angry acausal AIs; it's just the knowledge that all those thousands of people die every day, that billions of other beings are unhappy and suffering, combined with the belief that all this could be ameliorated by the right sort of singularity, and the situation that society at large doesn't help such people at any level (it doesn't understand their aspiration, share their belief, or support their activities). This is tangential to the topic of the basilisk, but I thought I would note this phenomenon, because it is potentially a subliminal part of basilisk syndrome, and independently it is of far greater consequence.

As for TDT itself, there's a wiki page with references. I am mostly but not totally skeptical about the subject.

I don't know very much at all about TDT, but my intuition is that the FAI would have to pre-commit to going through with the torture in order to make the basilisk useful. If we think that the FAI will find it useless to torture us when we reach the point in the future when it may or may not do so, then we will not be affected by the "threat" of it torturing us. (Here is where it gets really fuzzy for me since I don't know what I'm talking about.) So that means if the FAI "wants" to convince us to work as hard as possible, then it needs us to believe that it is the type of agent that will carry out the threat anyway, and the best way to do that is, I guess, to be that type of agent.