Best career models for doing research?

by Kaj_Sotala · 1 min read · 7th Dec 2010 · 1028 comments

Careers
Personal Blog

Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of what's the most effective way of doing that.

Traditional wisdom says that if you want to do research, you should get a job in a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas; after that you can lead your own group, but then most of your time will be spent on grant applications and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.

I suspect I would have more time to actually dedicate to research, and could start doing it sooner, if I took a part-time job and did the research in my spare time. For example, the recommended rates for a freelance journalist in Finland would allow me to spend one week each month doing paid work and three weeks doing research, assuming of course that I can pull off the freelance journalism part.

What (dis)advantages does this have compared to the traditional model?

Some advantages:

  • Can spend more time on actual research.
  • A lot more freedom with regard to what kind of research one can pursue.
  • Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
  • Easier to take time off from research if feeling stressed out.

Some disadvantages:

  • Harder to network effectively.
  • Need to get around journal paywalls somehow.
  • Journals might be biased against freelance researchers.
  • Easier to take time off from research if feeling lazy.
  • Harder to combat akrasia.
  • It might actually be better to spend some time doing research under others before doing it on your own.

EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.


I believe that most people hoping to do independent academic research vastly underestimate both the amount of prior work done in their field of interest, and the advantages of working with other very smart and knowledgeable people. Note that it isn't just about working with other people, but with other very smart people. That is, there is a difference between "working at a university / research institute" and "working at a top university / research institute". (For instance, if you want to do AI research in the U.S., you probably want to be at MIT, Princeton, Carnegie Mellon, Stanford, CalTech, or UC Berkeley. I don't know about other countries.)

Unfortunately, my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years (I mention these two fields because I assume they are the most popular on LW, and also because I know the most about them). And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems... (read more)

6Danny_Hintze10yThis might not even be a significant problem when the time does come around. High fluid intelligence only lasts for so long, and thus using more crystallized intelligence later on in life to guide research efforts rather than directly performing research yourself is not a bad strategy if the goal is to optimize for the actual research results.
4jsteinhardt10yThose are roughly my thoughts as well, although I'm afraid that I only believe this to rationalize my decision to go into academia. While the argument makes sense, there are definitely professors that express frustration with their position. What does seem like pretty sound logic is that if you could get better results without a research group, you wouldn't form a research group. So you probably won't run into the problem of achieving suboptimal results from administrative overhead (you could always just hire less people), but you might run into the problem of doing work that is less fun than it could be. Another point is that plausibly some other profession (corporate work?) would have less administrative overhead per unit of efficiency, but I don't actually believe this to be true.
6nhamann10yCould you point me towards some articles here? I fully admit I'm unaware of most of this progress, and would like to learn more.

A good overview would fill up a post on its own, but some relevant topics are given below. I don't think any of it is behind a paywall, but if it is, let me know and I'll link to another article on the same topic. In cases where I learned about the topic by word of mouth, I haven't necessarily read the provided paper, so I can't guarantee the quality for all of these. I generally tried to pick papers that either gave a survey of progress or solved a specific clearly interesting problem. As a result you might have to do some additional reading to understand some of the articles, but hopefully this is a good start until I get something more organized up.

Learning:

Online concept learning: rational rules for concept learning [a somewhat idealized situation but a good taste of the sorts of techniques being applied]

Learning categories: Bernoulli mixture model for document classification, spatial pyramid matching for images

Learning category hierarchies: nested Chinese restaurant process, hierarchical beta process

Learning HMMs (hidden Markov models): HDP-HMMs this is pretty new so the details haven't been hammered out, but the article should give you a taste of how people are approaching th... (read more)

1Perplexed10yI'm not planning to do AI research, but I do like to stay no more than ~10 years out of date regarding progress in fields like this. At least at the intelligent-outsider level of understanding. So, how do I go about getting and keeping almost up-to-date in these fields? Is MacKay's book [http://www.cs.toronto.edu/~mackay/itila/book.html] a good place to start on machine learning? How do I get an unbiased survey of cognitive science? Are there blogs that (presuming you follow the links) can keep you up to date on what is getting a buzz?
2jsteinhardt10yI haven't read MacKay myself, but it looks like it hits a lot of the relevant topics. You might consider checking out Tom Griffiths' website [http://cocosci.berkeley.edu/tom/], which has a reading list [http://cocosci.berkeley.edu/tom/bayes.html] as well as several tutorials [http://cocosci.berkeley.edu/publications.php?topic=Foundations].
1sark10yWe should try to communicate with long letters (snail mail) more. Academics seem to have done that a lot in the past. From what I have seen these exchanges seem very productive, though this could be a sampling bias. I don't see why there aren't more 'personal communication' cites, except for them possibly being frowned upon.
1jsteinhardt10yWhy use snail mail when you can use skype? My lab director uses it regularly to talk to other researchers.
3sark10yBecause it is written. Which makes it good for communicating complex ideas. The tradition behind it also lends it an air of legitimacy. Researchers who don't already have a working relationship with each other will take each other's letters more seriously.
2jsteinhardt10yUpvoted for the good point about communication. Not sure I agree with the legitimacy part (what is p(Crackpot | Snail Mail) compared to p(Crackpot | Email)? I would guess higher).
1Sniffnoy10yWhat I'm now wondering is, how does using email vs. snail mail affect the probability of using green ink, or its email equivalent...
1sark10yHeh you are probably right. It just seemed strange to me how researchers cannot just communicate with each other as long as they have the same research interests. My first thought was that it might have been something to do with status games, where outsiders are not allowed. I suppose some exchanges require rapid and frequent feedback. But then, like you mentioned, wouldn't Skype do?
1jsteinhardt10yI'm not sure what the general case looks like, but the professors who I have worked with (who all have the characteristic that they do applied-ish research at a top research university) are both constantly barraged by more e-mails than they can possibly respond to. I suspect that as a result they limit communication to sources that they know will be fruitful. Other professors in more theoretical fields (like pure math) don't seem to have this problem, so I'm not sure why they don't do what you suggest (although some of them [http://polymathprojects.org/] do). And I am not sure that all professors run into the same problem as I have described, even in applied fields.

(Shrugs.)

Your decision. The Singularity Institute does not negotiate with terrorists.

WFG, please quit with the 'increase existential risk' idea. Allowing Eliezer to claim moral high ground here makes the whole situation surreal.

A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically. They are, after all, the group who is doing harm (to you according to your values). Opposing them makes sense (given your premises.)

Something to keep in mind when you reply to comments here is that you are the default leader of this community and its highest status member. This means comments that would be reasonably glib or slightly snarky from other posters can come off as threatening and condescending when made by you. They're not really threatening but they can instill in their targets strong fight-or-flight responses. Perhaps this is because in the ancestral environment status challenges from group leaders were far more threatening to our ancestor's livelihood than challenges from other group members. When you're kicking out trolls it's a sight to see, but when you're rhetorically challenging honest interlocutors it's probably counter-productive. I had to step away from the computer because I could tell that even if I was wrong the feelings this comment provoked weren't going to let me admit it (and you weren't even actually mean, just snobby).

As to your question, I don't think my understanding of the idea requires anyone to be an idiot. In fact from what you've said I doubt we're that far apart on the matter of how threatening the idea is. There may be implications I haven't thought through that you hav... (read more)

After several years as a post-doc I am facing a similar choice.

If I understand correctly you have no research experience so far. I'd strongly suggest completing a doctorate because:

  • you can use that time to network and establish a publication record
  • most advisors will allow you as much freedom as you can handle, particularly if you can obtain a scholarship so you are not sucking their grant money. Choose your advisor carefully.
  • you may well get financial support that allows you to work full time on your research for at least 4 years with minimal accountability
  • if you want, you can practice teaching and grant applications to taste how onerous they would really be
  • once you have a doctorate and some publications, it probably won't be hard to persuade a professor to offer you an honorary (unpaid) position which gives you an institutional affiliation, library access, and maybe even a desk. Then you can go ahead with freelancing, without most of the disadvantages you cite.

You may also be able to continue as a post-doc with almost the same freedom. I have done this for 5 years. It cannot last forever, though, and the longer you go on, the more people will expect you to devote yourself to grant applications, teaching and management. That is why I'm quitting.

4Kaj_Sotala10yHuh. That's a fascinating idea, one which had never occurred to me. I'll have to give this suggestion serious consideration.
7billswift10yRon Gross's The Independent Scholar's Handbook has lots of ideas like this. A lot of the details in it won't be too useful, since it is mostly about history and the humanities, but quite a bit will be. It is also a bit too old to include more recent developments, since there was almost no internet in 1993.
3James_Miller10yOr become a visiting professor in which you teach one or two courses a year in return for modest pay, affiliation and library access.

If you feel more comfortable labeling it 'terrorism'... well... it's your thinking to bias.

No, the label is accurate. Right smack bang inside the concept of terrorism. And I am saying that as someone who agrees that Eliezer is behaving like a socially inept git.

someone has to stand up against your abuse

Why? Feeling obliged to fight against people just gives them power over you.

Dude, don't be an idiot. Really.

Repeating "But I say so!" with increasing emphasis until it works. Been taking debating lessons from Robin?

6multifoliaterose10yIt seems to me that the natural effect of a group leader persistently arguing from his own authority is Evaporative Cooling of Group Beliefs [http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/]. This is of course conducive to confirmation bias and corresponding epistemological skewing for the leader; things which seem undesirable for somebody in Eliezer's position. I really wish that Eliezer was receptive to taking this consideration seriously.
4wedrifid10yThe thing is he usually does. That is one thing that has in the past set Eliezer apart from Robin and impressed me about Eliezer. Now it is almost as though he has embraced the evaporative cooling concept as an opportunity instead of a risk and gone and bought himself a blowtorch to force the issue!
2JGWeissman10yMaybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.

Maybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.

The well known (and empirically verified) weakness in experts of the human variety is that they tend to be systematically overconfident when it comes to judgements that fall outside their area of exceptional performance - particularly when the topic is one just outside the fringes.

When it comes to blogging about theoretical issues of rationality, Eliezer is undeniably brilliant. Yet his credibility specifically when it comes to responding to risks is rather less outstanding. In my observation he reacts emotionally and starts making rookie mistakes of rational thought and action. To the point where I've very nearly responded 'Go read the sequences!' before remembering that he was the flipping author and so should already know better.

Also important is the fact that elements of the decision are about people, not game theory. Eliezer hopefully doesn't claim to be an expert when it comes to predicting or eliciting optimal reactions in others.

5XiXiDu10yI do not consider this strong evidence, as there are many highly intelligent and productive people who hold crazy beliefs:

  • Francisco J. Ayala [http://en.wikipedia.org/wiki/Francisco_J._Ayala], who “…has been called the 'Renaissance Man of Evolutionary Biology'”, is a geneticist ordained as a Dominican priest [http://www.scientificamerican.com/article.cfm?id=the-christian-mans-evolution]. His “discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide…”
  • Francis Collins [http://en.wikipedia.org/wiki/Francis_Collins_%28geneticist%29] (geneticist, Human Genome Project), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP) and described by the Endocrine Society as “one of the most accomplished scientists of our time”, is an evangelical Christian.
  • Peter Duesberg [http://en.wikipedia.org/wiki/Peter_Duesberg] (a professor of molecular and cell biology at the University of California, Berkeley) claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer.
  • Georges Lemaître [http://en.wikipedia.org/wiki/Georges_Lema%C3%AEtre] (a Belgian Roman Catholic priest) proposed what became known as the Big Bang theory of the origin of the Universe.
  • Kurt Gödel [http://en.wikipedia.org/wiki/Kurt_G%C3%B6del] (logician, mathematician and philosopher) suffered from paranoia and believed in ghosts. “Gödel, by contrast, had a tendency toward paranoia. He believed in ghosts [http://www.newyorker.com/archive/2005/02/28/050228crat_atlarge]; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him.”
  • Mark Chu
3David_Gerard10yI took wedrifid's point as being that whether EY is right or not, the bad effect described happens. This is part of the lose-lose nature of the original problem (what to do about a post that hurt people).
2shokwave10yI don't think this rhetoric is applicable. Several very intelligent posters have deemed the idea dangerous; a very intelligent you deems it safe. You argue they are wrong because it is 'obviously safe'. Eliezer is perfectly correct to point out that, on the whole of it, 'obviously it is safe' just does not seem like strong enough evidence when it's up against a handful of intelligent posters who appear to have strong convictions.
2wedrifid10yPardon? I don't believe I've said any such thing here or elsewhere. I could of course be mistaken - I've said a lot of things and don't recall them all perfectly. But it seems rather unlikely that I did make that claim because it isn't what I believe. This leads me to the conclusion that... ... This rhetoric isn't applicable either. ;)

Note that comments like these are still not being deleted, by the way. LW censors Langford Basilisks, not arbitrarily great levels of harmful stupidity or hostility toward the hosts - those are left to ordinary downvoting.

I'm putting the finishing touches on a future Less Wrong post about the overwhelming desirability of casually working in Australia for 1-2 years vs "whatever you were planning on doing instead". It's designed for intelligent people who want to earn more money, have more free time, and have a better life than they would realistically be able to get in the US or any other 1st world nation without a six-figure, part-time career... something which doesn't exist. My world saving article was actually just a prelim for this.

Are you going to accompany the "this is cool" part with a "here's how" part? I estimate that would cause it to influence an order of magnitude more people, by removing an inconvenience that looks at least trivial and might be greater.

3David_Gerard10yI'm now thinking of why Australian readers should go to London and live in a cramped hovel in an interesting place. I feel like I've moved to Ankh-Morpork.
2erratio10yAs someone already living in Australia and contemplating a relocation to the US for study purposes, I would be extremely interested in this article

This decree is ambiguous enough to be seen as threatening people not to repost their banned comments (made in good faith but including too much forbidden material by accident) even after removing all objectionable content. I think this should be clarified.

9Eliezer Yudkowsky10yConsider that clarification made; no such threat is intended.

What's frustrating is I would have had no idea it was deleted - and just assumed it wasn't interesting to anyone - had I not checked after reading the above. I'd much rather be told to delete the relevant portions of the comment - let's at least have precise censorship!

Wow. Even the people being censored don't know it. That's kinda creepy!

his comment led me to discover that quite a long comment I made a little bit ago had been deleted entirely.

How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?

I think it's a standard tool: trollish comments just look ignored to the trolls. But I think it's impolite to delete comments made in good faith without notification and usable guidelines for cleaning up and reposting. (Hint hint.)

3Jack10yI only made one comment on the subject and I was rather confused that it was being ignored. I also knew I might have said too much about the Roko post and actually included a sentence saying that if I crossed the line I'd appreciate being told to edit it instead of having the entire thing deleted. So I just checked that one comment in particular. If other comments of mine have been deleted I wouldn't know about it, though this was the only comment in which I have discussed the Roko post.
3[anonymous]10yI doubt that this is a deliberate feature.

Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?

Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?

Not rhethorical questions, I'd actually like to read your answers.

You are evidently confused about what the word means. The systematic deletion of any content that relates to an idea that the person with power does not wish to be spoken is censorship in the same way that threatening to (probabilistically) destroy humanity is terrorism. As in, blatantly obviously - it's just what the words happen to mean.

Going around saying 'this isn't censorship' while doing it would trigger all sorts of 'crazy cult' warning bells.

6fortyeridania10yYes, the acts in question can easily be denoted by the terms "blackmail" and "censorship." And your final sentence is certainly true as well. To avoid being called a cult, to avoid being a cult, and to avoid doing bad things generally, we should stop the definition debate and focus on whether people's behavior has been appropriate. If connotation conundrums keep you quarreling about terms, pick variables (e.g. "what EY did"=E and "what WFG precommitted to doing, and in fact did"=G) and keep talking.

Conditioning on yourself deeming it optimal to make a metaphorical omelet by breaking metaphorical eggs, metaphorical eggs will deem it less optimal to remain vulnerable to metaphorical breakage by you than if you did not deem it optimal to make a metaphorical omelet by breaking metaphorical eggs; therefore, deeming it optimal to break metaphorical eggs in order to make a metaphorical omelet can increase the difficulty you find in obtaining omelet-level utility.

4JGWeissman10yMany metaphorical eggs are not [metaphorical egg]::Utility maximizing agents.

Consider taking a job as a database/web developer at a university department. This gets you around journal paywalls, and is a low-stress job (assuming you have or can obtain above-average coding skills) that leaves you plenty of time to do your research. (My wife has such a job.) I'm not familiar with freelance journalism at all, but I'd still guess that going the software development route is lower risk.

Some comments on your list of advantages/disadvantages:

  • Harder to network effectively. - I guess this depends on what kind of research you want to do. For the areas I've been interested in, networking does not seem to matter much (unless you count participating in online forums as networking :).
  • Journals might be biased against freelance researchers. - I publish my results online, informally, and somehow they've usually found an interested audience. Also, the journals I'm familiar with require anonymous submissions. Is this not universal?
  • Harder to combat akrasia. - Actually, might be easier.

A couple other advantages of the non-traditional path:

  • If you get bored you can switch topics easily.
  • I think it's crazy to base one's income on making research progress. How do you stay o
... (read more)

May I at this point point out that I agree that the post in question should not appear in public. Therefore, it is a question of the author's right to retract material, not of censorship.

9wedrifid10yThat does not actually follow. Censoring what other people say is still censoring even if you happened to have said related things previously.
4waitingforgodel10yIn this case, the comment censored was not posted by you. Therefore you're not the author. FYI the actual author didn't even know it was censored.
3Vladimir_Nesov10yGood idea. We should've started using this standard reference when the censorship complaints began, but at least henceforth.

In other words, you have allegedly precommitted to existential terrorism, killing the Future with small probability if your demands are not met.

5Eliezer Yudkowsky10yWFG gave this reply which was downvoted under the default voting threshold: I'm also reposting this just in case wfg tries to delete it or modify it later, since I think it's important for everyone to see. Ordinarily I'd consider that a violation of netiquette, but under these here exact circumstances...
3CharlieSheen9yWow, that manages to signal a willingness to use unnecessarily risky tactics, malignancy AND marginal incompetence. While I do understand that right-wing people are naturally the kind of people who bring about increased existential risks, I think my own occasional emails to left-wing bloggers aren't that shabby (since that makes me a dirty commie enabler). In fact I email all sorts of bloggers with questions and citations to papers they might be interested in. Muhahahaha. How does one even estimate something like a 0.0001% increase of existential risk out of something like sending an email to a blogger? The error bars on the thing are vast. All you are doing is putting up a giant sign with negative characteristics that will make people want to cooperate with you less.
[-][anonymous]10y 11

You're trying very hard to get everyone to think that SIAI has lied to donors or done something equally dishonest. I agree that this is an appropriate question to discuss, but you are pursuing the matter so aggressively that I just have to ask: do you know something we don't? Do you think that you/other donors have been lied to on a particular occasion, and if so, when?

Well I guess this is our true point of disagreement. I went to the effort of finding out a lot, went to SIAI and Oxford to learn even more, and in the end I am left seriously disappointed by all this knowledge. In the end it all boils down to:

"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"

I feel quite strongly that this knowledge is not a worthy thing to have sunk 5 years of my life into getting. I don't know, XiXiDu, you might prize such knowledge, including all the specifics of how that works out exactly.

If you really strongly value the specifics of this, then yes you probably would on net benefit from the censored knowledge, the knowledge that was never censored because I never posted it, and the knowledge that I never posted because I was never trusted with it anyway. But you still probably won't get it, because those who hold it correctly infer that the expected value of releasing it is strongly negative from an altruist's perspective.

The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.

I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.

TL;DR: I want to give you a hug.

most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed,

So? They're just kids!

(or)

He glanced over toward his shoulder, and said, "That matter to you?"

Caw!

He looked back up and said, "Me neither."

4Roko10yI mean I guess I shouldn't complain that you don't find this bothersome, because you are, in fact, helping me by doing what you do and being very good at it, but that doesn't stop it being demotivating for me! I'll see what I can do regarding quant jobs.
3Jack10yUpvoted for the excellent summary!
4katydee10yI'm curious about the "future is an impending train wreck" part. That doesn't seem particularly accurate to me.
3timtyler10yThat doesn't sound right to me. Indeed, it sounds as though you are depressed :-( Unsolicited advice over the public internet is rather unlikely to help - but maybe focus for a bit on what you want - and the specifics of how to get there.
3katydee10yThis isn't meant as an insult, but why did it take you 5 years of dedicated effort to learn that?
4Roko10ySpecifics. Details. The lesson of science is that details can sometimes change the overall conclusion. Also, some amount of nerdiness meant that the statements about human nature weren't obvious to me.

An example of this would be errors or misconduct in completing past projects.

When I asked Anna about the coordination between SIAI and FHI, something like "Do you talk enough with each other that you wouldn't both spend resources writing the same research paper?", she told me about the one time that they had in fact both presented a paper on the same topic at a conference, and that they do now coordinate more to prevent that sort of thing.

I have found that Anna and others at SIAI are honest and forthcoming.

As you may know from your study of marketing, accusations stick in the mind even when one is explicitly told they are false. In the parent comment and a sibling, you describe a hypothetical SIAI lying to its donors because... Roko had some conversations with Carl that led you to believe we care strongly about existential risk reduction?

If your aim is to improve SIAI, to cause there to be good organizations in this space, and/or to cause Less Wrong-ers to have accurate info, you might consider:

  1. Talking with SIAI and/or with Fellows program alumni, so as to gather information on the issues you are concerned about. (I’d be happy to talk to you; I suspect Jasen and various alumni would too.) And then
  2. Informing folks on LW of anything interesting/useful that you find out.

Anyone else who is concerned about any SIAI-related issue is also welcome to talk to me/us.

3waitingforgodel10yActually that citation is about both positive and negative things -- so unless you're also asking pro-SIAI people to hush up, you're (perhaps unknowingly) seeking to cause a pro-SIAI bias. Another thing that citation seems to imply is that reflecting on, rather than simply diverting our attention away from scary thoughts is essential to coming to a correct opinion on them. One of the interesting morals from Roko's contest is that if you care deeply about getting the most benefit per donated dollar you have to look very closely at who you're giving it to. Market forces work really well for lightbulb-sales businesses, but not so well for mom-and-pop shops, let alone charities. The motivations, preferences, and likely future actions of the people you're giving money to become very important. Knowing if you can believe the person, in these contexts, becomes even more important. As you note, I've studied marketing, sales, propaganda, cults, and charities. I know that there are some people who have no problem lying for their cause (especially if it's for their god or to save the world). I also know that there are some people who absolutely suck at lying. They try to lie, but the truth just seeps out of them. That's why I give Roko's blurted comments more weight than whatever I'd hear from SIAI people who were chosen by you -- no offence. I'll still talk with you guys, but I don't think a reasonably sane person can trust the sales guy beyond a point. As far as your question goes, my primary desire is a public, consistent moderation policy for LessWrong. If you're going to call this a community blog devoted to rationality, then please behave in sane ways. (If no one owns the blog -- if it belongs to the community -- then why is there dictatorial post deletion?) I'd also like an apology from EY with regard to the chilling effects his actions have caused. But back to what you replied to: What would SIAI be willing to lie to donors about? Do you have any answers to

To answer your question, despite David Gerard's advice:

I would not lie to donors about the likely impact of their donations, the evidence concerning SIAI's ability or inability to pull off projects, how we compare to other organizations aimed at existential risk reduction, etc. (I don't have all the answers, but I aim for accuracy and revise my beliefs and my statements as evidence comes in; I've actively tried to gather info on whether we or FHI reduce risk more per dollar, and I often recommend to donors that they do their own legwork with that charity comparison to improve knowledge and incentives). If a maniacal donor with a gun came searching for a Jew I had hidden in my house, or if I somehow had a "how to destroy the world" recipe and someone asked me how to use it, I suppose lying would be more tempting.

While I cannot speak for others, I suspect that Michael Vassar, Eliezer, Jasen, and others feel similarly, especially about the "not lying to one's cooperative partners" point.

3David_Gerard10yI suppose I should add "unless the actual answer is not a trolley problem" to my advice on not answering this sort of hypothetical ;-) (my usual answer to hypotheticals is "we have no plans along those lines", because usually we really don't. We're also really good at not having opinions on other organisations, e.g. Wikileaks, which we're getting asked about A LOT because their name starts with "wiki". A blog [http://blog.wikimedia.org/] post on the subject is imminent. Edit: up now [http://blog.wikimedia.org/blog/2010/12/09/what%E2%80%99s-in-a-name-in-the-case-of-%E2%80%98wiki%E2%80%99-lots-of-things/] .)
9David_Gerard10yWell, uh, yeah. The horse has bolted. It's entirely unclear what choosing to keep one's head in the sand gains anyone. Although this is a reasonable question to want the answer to, it's obvious even to me that answering at all would be silly [http://lesswrong.com/lw/383/the_trolley_problem_dodging_moral_questions/] and no sensible person who had the answer would. Investigating the logic or lack thereof behind the (apparently ongoing) memory-holing is, however, incredibly on-topic and relevant for LW.
3wedrifid10yTotal agreement here. In Eliezer's [http://lesswrong.com/lw/383/the_trolley_problem_dodging_moral_questions/32hh?c=1] words:
4David_Gerard10yA fellow called George Orwell [http://en.wikipedia.org/wiki/Memory_hole].
2wedrifid10yAhh, thankyou.
4David_Gerard10yI presume you're not a native English speaker then - pretty much any moderately intelligent native English speaker has been forced to familiarity with 1984 [http://en.wikipedia.org/wiki/Nineteen_Eighty-Four] at school. (When governments in the UK are being particularly authoritarian, there is often a call to send MPs copies of 1984 with a note "This is not a manual.") Where are you from? Also, you really should read the book, then lots of the commentary on it :-) It's one of the greatest works of science fiction and political fiction in English.
4wedrifid10yI can tell you all about equal pigs and newspeak but 'memory-holing' has not seemed to make as much of a cultural footprint - probably because as a phrase it is rather an awkward fit. I wholeheartedly approve of Orwell in principle but actually reading either of his famous books sounds too much like high school homework. :)
4Jack10yAnimal Farm is probably passable (though it's so short). 1984 on the other hand is maybe my favorite book of all time. I don't think I've had a stronger emotional reaction to another book. It makes Shakespeare's tragedies look like comedies. I'd imagine you'd have similar feelings about it based on what I've read of your comments here.

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic gets in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for correctness of EY's decision. My argument for correctness of EY's decision is here and here.)

4wedrifid10yThis is possible but by no means assured. It is also possible that he simply didn't choose to write a full evaluation of consequences in this particular comment.
2Vladimir_Golovin10yUpvoted. This just helped me get unstuck on a problem I've been procrastinating on.

Gollum gave his private assurances to Frodo - and we all know how that turned out.

Well I'm convinced. Frodo should definitely have worked out a way to clone the ring and made sure the information was available to all of Middle Earth. You can never have too many potential Ring-Wraiths.

2[anonymous]10ySuddenly I have a mental image of "The Lord of the Rings: The Methods of Rationality."
5Alicorn10ySomeone should write that (with a better title). We could have a whole genre of rational fanfiction.

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

7XiXiDu10yThis is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.
6David_Gerard10yThe current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested. If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead. This is rather disappointing, as if CEV is possible then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is. Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now [http://lesswrong.com/lw/nu/taboo_your_words/i97?c=1].
7Nick_Tarleton10yThis strikes me as a demand for particular proof [http://lesswrong.com/lw/1ph/youre_entitled_to_evidence_but_not_proof/]. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead. Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting the designers lay out an object-level goal system. This sounds interesting; do you think you could expand?
2David_Gerard10yIt wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small. I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process. I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out. Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI. I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.
4Nick_Tarleton10y"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.
2David_Gerard10yOK. I still don't understand it, but I now feel my lack of understanding more clearly. Thank you! (I suppose "what do people really want?" is a large philosophical question, not just undefined but subtle in its lack of definition.)
5Roko10yI have recieved assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this. Besides, I think that XiXiDu, et al are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.
2timtyler10yWhat?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.
5XiXiDu10yHow so? I've just reread some of your comments on your now deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea has spread to many people by now, don't you think it would be acceptable to discuss the matter before one or more people take it seriously or even consider implementing it deliberately?
3Perplexed10yOk by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.
2waitingforgodel10yI think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.
4XiXiDu10yIf the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm people in the future.
7David_Gerard10yBut the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.
3XiXiDu10yIt is not as public as you think. If it were, then people like waitingforgodel wouldn't ask about it. I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.
8wedrifid10yRather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P
3David_Gerard10yPrecisely. The place to hide a needle is in a large stack of needles. The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".
6wedrifid10yI prefer to outsource my needle-keeping security to Clippy in exchange for allowing certain 'bending' liberties from time to time. :)
4David_Gerard10yUpvoted for LOL value. We'll tell Clippy the terrible, no good, very bad idea with reasons as to why this would hamper the production of paperclips. "Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"
3wedrifid10yBrilliant.
2TheOtherDave10yOf course, if Clippy were clever he would then offer to sell SIAI a commitment to never release the UFAI in exchange for a commitment to produce a fixed number of paperclips per year, in perpetuity. Admittedly, his mastery of human signaling probably isn't nuanced enough to prevent that from sounding like blackmail.
5David_Gerard10yI really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?" That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea. Citation needed. Citation needed.
5XiXiDu10yI sent you another PM.
4David_Gerard10yHmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid. It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH." (I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)

I'm not aware that LW moderators have ever deleted content merely for being critical of or potentially bad PR for SIAI, and I don't think they're naive enough to believe deletion would help. (Roko's infamous post was considered harmful for other reasons.)

The largest disadvantage to not having, essentially, an apprenticeship is the stuff you don't learn.

Now, if you want to research something where all you need is a keen wit, and there's not a ton of knowledge for you to pick up before you start... sure, go ahead. But those topics are few and far between. (EDIT: oh, LW-ish stuff. Meh. Sure, then, I guess. I thought you meant researching something hard >:DDDDD

No, but really, if smart people have been doing research there for 50 years and we don't have AI, that means that "seems easy to make progr... (read more)

First bit of advice: grow up. Be interested in research because you're curious, not because you're special and/or status-seeking.

This comment strikes me as rather confrontational, and also as offering advice based on a misguided understanding of my motives.

Until the word that comes to mind is "improve" rather than "save" you will be looking at the wrong questions for the wrong reasons.

I have very little clue of what you're trying to say.

8Vaniver10yOf course. Whenever someone says they want to do something impossibly hard, the proper response is to dismiss them. Either they agree with you, and you made the right call, or they disagree with you, and you've cemented their resolve. But JoshuaZ is right that I think the wording is ridiculous. "Save the world" is nothing but applause lights [http://lesswrong.com/lw/jb/applause_lights/]. If it were "see a positive singularity" we would have at least gone from 0th order to 1st order. If it were "make life extension available sooner" we've progressed from 1st order to 2nd order. If it were "make lab-grown organs available sooner" we've progressed from 2nd order to 3rd order. If one comes to me with the last desire, I'd tell them to move to Wake Forest and become friends with Anthony Atala. If someone comes to me with the first desire, I pat them on the head. Even Norman Borlaug didn't "save the world." He might have personally extended lifespans by billions of human-years, but that's not close, even on a log scale. And if you want to be a second Norman Borlaug, trying to save the world seems like a poor way to approach that goal because you focus on the wrong questions. Borlaug wanted to improve the world- to make hungry people full by creating wheat varieties that were disease-resistant. He had the altruistic impulse, but he was facing a problem worked out to third order. The altruistic impulse is a great thing, but if you don't have a third order problem yet keep looking. And when asking for career advice, it's relevant information where you are in that process. If you're at 2nd order, your interests will already be narrow enough that most advice will be inapplicable to your situation. The 2nd order person in the particular track I've outlined already knows they will strongly benefit from getting a medical degree in America, England, or China (there may be a few other countries on this list; this isn't my specialty) or that they should earn a bunch of money while
9Kaj_Sotala10yYou're reading way too much into that single line. I wanted to express the sentiment of "I want to be as effective as possible in doing good", and there was a recent post covering that topic which happened to be named "how to save the world", so I linked to it. If that post hadn't been there, I might have said something like "I want to do something meaningful with my life". I was also assuming "saving the world" and other similar expressions to be standard LW jargon for "doing as much good as possible". As for my actual goals... Ideally I'd like to help avert a negative singularity, though since I don't have very high hopes of that actually being possible, I also give the goals of "just have fun" and "help people in the short term" considerable weight, and am undecided as to how much effort I'll in the end spend explicitly on singularity matters. But to the degree that I do end up trying to help the singularity, the three main approaches I've been playing with are:

  • Just make money and donate that to SIAI.
  • Help influence academia to become more aware of these issues.
  • Become well-known enough (via e.g. writing, politics) among normal people that I can help spread singularity-related ideas and hopefully get more people to take them seriously.

These are obviously not mutually exclusive, and indeed, one of the reasons I'm playing around with the idea of "freelance academia" is that it allows me to do some of the academic stuff without the commitment that e.g. getting a PhD would involve (as I'm not yet sure whether the academic approach is the one that I'd find the most rewarding). All three also have to varying extent an intrinsic appeal, beyond just the singularity aspect: I wouldn't mind having a bit more money, intellectual work is rewarding by itself, and so is writing and having a lot of people care about your opinions. As for the details of the academic career path, the "help avoid a negative singularity" aspect of that currently mainly involves
2JoshuaZ10yI believe that Vaniver is making a distinction between "improve" and "save" in saying that any given individual is unlikely to have a large-scale enough impact to be described as "saving" the world, but that many people can improve the world. This point may have some validity, although Norman Borlaug may be a relevant counterexample to show that it isn't completely impossible.
4ata10yThat would be only relevant if Kaj had said "I expect to save the world" instead of "Ideally, I'd like to save the world". I read the latter as specifying something like "all existential risks are averted and the world gets much more awesome" as an optimization target, not as something that he wants to (let alone expects to be able to) do completely and singlehandedly. And as an optimization target, it makes good sense. Why aim for imperfection? The target is the measure of utility, not a proposed action or plan on its own. (Possibly relevant: Trying to Try [http://lesswrong.com/lw/uh/trying_to_try/].) (One thing I see about that paragraph that could be legitimately disputed is the jump from specifying the optimization target to "One way to do that involves contributing academic research, which raises the question of what's the most effective way of doing that" without establishing that academic research is itself the best way (or at least a good way) for a very smart person to optimize the aforementioned goal. That itself would be an interesting discussion, but I think in this post it is taken as an assumption. (See also this comment [http://lesswrong.com/lw/2w5/vipassana_meditation_developing_metafeeling_skills/2tqy?c=1] .))

Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.

Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on. (FHI costs $400k/year, which isn't such a huge amount as to be unattainable by Kaj or a few Kaj-like entities collaborating)

4shokwave10ySounds like a good bet even if you are brilliant. Make money, use money to produce academic institute, do your research in concert with academics at your institute. This solves all problems of needing to be part of academia, and also solves the problem of academics doing lots of unnecessary stuff - at your institute, academics will not be required to do unnecessary stuff.
9Roko10yMaybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies' fate is decided then. 25 galaxies per second. Wow.
5PeerInfinity10yI'm surprised that no one has asked Roko where he got these numbers from. Wikipedia says [http://en.wikipedia.org/wiki/Observable_universe#Matter_content] that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel. But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies, multiplied by the distance to the edge of the observable universe, divided by the speed of light? calculating... Wikipedia says [http://en.wikipedia.org/wiki/Observable_universe#Size] that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years short scale, i.e. 4.6 × 10^10 light years) in any direction. Google Calculator says 80 billion galaxies / 46 billion light years = 1.73 galaxies per year, or 5.48 × 10^-8 galaxies per second, so no, that's not it. If I'm going to allow my mind to be blown by this number, I would like to know where the number came from.
2Caspian10yI also took a while to understand what was meant, so here is my understanding of the meaning: Assumptions: There will be a singularity in 100 years. If the proposed research is started now it will be a successful singularity, e.g. friendly AI. If the proposed research isn't started by the time of the singularity, it will be an unsuccessful (negative) singularity, but still a singularity. The probability of the successful singularity linearly decreases with the time when the research starts, from 100 percent now, to 0 percent in 100 years time. A 1 in 80 billion chance of saving 80 billion galaxies is equivalent to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
2Roko10yI meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.
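For anyone who wants to check the figure, here is a minimal sketch of the arithmetic Roko describes above, under the assumptions quoted in this thread (roughly 80 billion galaxies in the observable universe, 100 years to the singularity); the numbers are the thread's own, not independently verified:

```python
# Rough check of the "25 galaxies per second" figure discussed above.
galaxies = 80e9                        # ~80 billion galaxies (figure quoted from Wikipedia in this thread)
seconds_per_year = 365.25 * 24 * 3600  # ~3.156e7 seconds
years_to_singularity = 100             # assumption used in Roko's comment

rate = galaxies / (years_to_singularity * seconds_per_year)
print(f"{rate:.1f} galaxies per second")  # prints ~25.3
```

Applying Roko's suggested order-of-magnitude discount for light-speed reachability would bring this down to roughly 2.5 galaxies per second.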
3shokwave10yGuh. Every now and then something reminds me of how important the Singularity is. Time to reliable life extension is measured in lives per minute, time to Singularity is measured in galaxies per second.
2XFrequentist10yIn what I take to be a positive step towards viscerally conquering my scope neglect, I got a wave of chills reading this.

"Rule out"? Seriously? What kind of evidence is it?

You extracted the "rule out" phrase from the sentence:

I just wanted to show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.

From within the common phrase 'do not rule out the possibility' no less!

You then make a reference to '0 and 1 are not probabilities' with exaggerated incredulity.

To put it mildly this struck me as logically rude and in general poor form. XiXiDu deserves more courtesy.

To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?

Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:

  • it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.

  • it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

  • it is not optimized for beli

... (read more)
4Roko10yA moment's googling finds this: http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf [http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf] ($863 444) I leave it to readers to judge whether Tim is flogging a dead horse here.
3wedrifid10yNot the sort of thing that could, you know, give you nightmares?
4Roko10yThe sort of thing that could give you nightmares is more like the stuff that is banned. This is different than the mere "existential risk" message.

But probably far more than 1% of cave-men who chose to seek out a sabre-tooth tiger to see if they were friendly died due to doing so.

The relevant question on an issue of personal safety isn't "What % of the population die due to trying this?"

The relevant question is: "What % of the people who try this will die?"

In the first case, rollerskating downhill, while on fire, after having taken arsenic would seem safe (as I suspect no-one has ever done precisely that)

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it. In fact, I have only cared enough to do so because it was banned, and I wanted the information on how dangerous it was to judge the wisdom of the censorship.

On a more general note, being terrified of very unlikely terrible events is a known human failure mode. Perhaps it would be more effective at improving human rationality to expose people to ideas like this with the sole purpose of overcoming that sort of terror?

5Jack10yI'll just second that I also read it a while back (though after it was censored) and thought that it was quite interesting but wrong on multiple levels. Not 'probably wrong' but wrong like an invalid logic proof is wrong (though of course I am not 100% certain of anything). My main concern about the censorship is that not talking about what was wrong with the argument will allow the proliferation of the reasoning errors that left people thinking the conclusion was plausible. There is a kind of self-fulfilling prophecy involved in not recognizing these errors which is particularly worrying.
7JGWeissman10yConsider this invalid proof that 1 = 2:
1. Let x = y
2. x^2 = x*y
3. x^2 - y^2 = x*y - y^2
4. (x - y)*(x + y) = y*(x - y)
5. x + y = y
6. y + y = y (substitute using 1)
7. 2y = y
8. 2 = 1
You could refute this by pointing out that step (5) involved division by (x - y) = (y - y) = 0, and you can't divide by 0. But imagine if someone claimed that the proof is invalid because "you can't represent numbers with letters like 'x' and 'y'". You would think that they don't understand what is actually wrong with it, or why someone might mistakenly believe it. This is basically my reaction to everyone I have seen oppose the censorship because of some argument they present that the idea is wrong and no one would believe it.
2Jack10yI'm actually not sure if I understand your point. Either it is a round-about way of making it or I'm totally dense and the idea really is dangerous (or some third option). It's not that the idea is wrong and no one would believe it, it's that the idea is wrong and when presented with the explanation for why it's wrong no one should believe it. In addition, it's kind of important that people understand why it's wrong. I'm sympathetic to people with different minds that might have adverse reactions to things I don't but the solution to that is to warn them off, not censor the topics entirely.
2shokwave10yThe point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.
8Vaniver10yI get that. But I think it's important to consider both positive and negative evidence - if someone's testimony that they got turned to stone is important, so are the testimonies of people who didn't get turned to stone. The question to me is whether the basilisk turns people to stone or people turn themselves into stone. I prefer the second because it requires no magic powers on the part of the basilisk. It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse. Indeed, that makes it somewhat useful to know what sort of things shock other people. Calling this idea 'dangerous' instead of 'dangerous to EY' strikes me as mind projection.
2Vladimir_Nesov10yThis isn't evidence about that hypothesis, it's expected [http://lesswrong.com/lw/39l/how_to_lose_100_karma_in_6_hours_what_just/3432?c=1] that most certainly nothing happens. Yet you write for rhetorical purposes as if it's supposed to be evidence against the hypothesis. This constitutes either lying or confusion (I expect it's unintentional lying, with phrases produced without conscious reflection about their meaning, so a little of both lying and confusion).
5Jack10yThe sentence of Vaniver's you quote seems like a straightforward case of responding to hyperbole [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/344l?c=1] with hyperbole in kind.

I'd just like to insert a little tangent here: Roko's post and the select comments are the only things that moderation had any effect on whatsoever since the launch of the site - if I remember correctly. I don't think even the PUA wars had any interference from above. Of course, this is a community blog, but even this level of noninterference is very non-typical on the internet. Normally you'd join a forum and get a locked thread and a mod warning every so often.

Additionally, that on LW we get this much insight about the workings of SIAI-as-a-nonprofit and have this much space for discussion of any related topics is also an uncommon thing and should be considered a bonus.

2katydee10yTechnically that's not true; if I recall correctly, all AI/singularity topics were banned for the first six months to prevent this from being just another Singularity community in the crucial early phases.
2Kutta10yIt was only two months; nevertheless you're right about that.

Even I think you're just being silly now. I really don't see how this helps refine the art of human rationality.

Then I guess I'll be asked to leave the lesswrong site.

Don't let Eliezer provoke you like that. Obviously just reposting comments would be a waste of time and would just get more of the same. The legitimate purposes of your script are:

  • Ensure that you don't miss out on content.
  • Allow you to inform other interested people of said content (outside of the LessWrong system).
  • Make it possible for you to make observations along the lines of "there has been a comment censored here".
4TheOtherDave10yI'll note that the policy didn't restrict itself to comments reposted here. Admittedly, Internet anonymity being what it is, it's an unenforceable policy outside of LessWrong unless the policy-violator chooses to cop to it.

I removed that sentence. I meant that I didn't believe that the SIAI plans to harm someone deliberately. Although I believe that harm could be a side-effect and that they would rather harm a few beings than allow some Paperclip maximizer to take over.

You can call me a hypocrite because I'm in favor of animal experiments to support my own survival. But I'm not sure if I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks but I don't like such... (read more)

2Vladimir_Nesov10yI apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).

I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.

There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within their rights of reasonable discussion not to be convinced. There are a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.

(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)

  • EDIT: Roko reveals that he was actually never asked to delete his comment! Disregard parts of the rest of this comment accordingly.

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for th

... (read more)

Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as t... (read more)

2waitingforgodel10yFirst off, great comment -- interesting, and complex. But some things still don't make sense to me... Assuming that what you described led to: 1. How did precommitting enter into it? 2. Are you prepared to be tortured or raped for the cause? Have you precommitted to it? 3. Have other SIAI people you know of talked about this with you, have other SIAI people precommitted to it? 4. What do you think of others who do not want to be tortured or raped for the cause? Thanks, wfg

I find this whole line of conversation fairly ludicrous, but here goes:

Number 1. Time-inconsistency: we react differently to an immediate certainty of some bad than to a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others. In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.

Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts ... (read more)
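A minimal sketch of the trade-off in the "Number 1" point above, using the figures quoted there (a 1-in-10,000 chance of harm versus certain harm paired with 10,000 times the benefit); the units H and B are purely illustrative placeholders, and the point is only that the benefit per unit of expected harm is identical in both options, so the differing reactions are about certainty and time-inconsistency rather than the arithmetic:

```python
# Hypothetical units for illustration only: H = harm to self, B = benefit to others.
H, B = 1.0, 1.0

# Option A: aid work with a 1-in-10,000 chance of harm, delivering benefit B to others.
p = 1.0 / 10_000
expected_harm_a, benefit_a = p * H, B

# Option B: certain harm, delivering 10,000 times the benefit.
expected_harm_b, benefit_b = H, 10_000 * B

# Benefit delivered per unit of expected harm is the same in both options...
print(benefit_a / expected_harm_a)  # 10000.0
print(benefit_b / expected_harm_b)  # 10000.0
# ...yet, per the comment above, many people would accept A and refuse B.
```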

6waitingforgodel10yThis sounds very sane, and makes me feel a lot better about the context. Thank you very much. I very much like the idea that top SIAI people believe that there is such a thing as too much devotion to the cause (and, I'm assuming, actively talk people who are above that level down as you describe doing for Roko). As someone who has demonstrated impressive sanity around these topics, you seem to be in a unique position to answer these questions with an above-average level-headedness: 1. Do you understand the math behind the Roko post deletion? 2. What do you think about the Roko post deletion? 3. What do you think about future deletions?

Do you understand the math behind the Roko post deletion?

Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.

What do you think about the Roko post deletion?

I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.

Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.

It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (w... (read more)

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

That sounds like it'd generate some perverse incentives to me.

8CarlShulman10yUrk.
5TheOtherDave10yJust to be clear: he recognizes this by comparison with the alternative of privately having the poster delete it themselves [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33uv?c=1], rather than by comparison to not-deleting. Or at least that was my understanding. Regardless, thanks for a breath of clarity in this thread. As a mostly disinterested newcomer, I very much appreciated it.
2CarlShulman10yWell, if counterfactually Roko hadn't wanted to take it down I think it would have been even more of a mistake to delete it, because then the author would have been peeved, not just the audience/commenters.
5TheOtherDave10yWhich is fine. But Eliezer's comments on the subject suggest to me that he doesn't think that. More specifically, they suggest that he thinks the most important thing is that the post not be viewable, and if we can achieve that by quietly convincing the author to take it down, great, and if we can achieve it by quietly deleting it without anybody noticing, great, and if we can't do either of those then we achieve it without being quiet, which is less great but still better than leaving it up. And it seemed to me your parenthetical could be taken to mean that he agrees with you that deleting it would be a mistake in all of those cases, so I figured I would clarify (or let myself be corrected, if I'm misunderstanding).
2TimFreeman9yI agree with your main point, but the thought experiment seems to be based on the false assumption that the risk of being raped or murdered is smaller than 1 in 10K if you stay at home. Wikipedia guesstimates that 1 in 6 women in the US are on the receiving end of attempted rape at some point [http://en.wikipedia.org/wiki/Rape_statistics], so someone who goes to a place with a 1 in 10K chance of being raped or murdered has probably improved their personal safety. To make a better thought experiment, I suppose you have to talk about the marginal increase in rape or murder rate when working in the poor country when compared to staying home, and perhaps you should stick to murder since the rape rate is so high.
6Nick_Tarleton10yRoko was not requested to delete his comment. See this parallel thread [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/3371?c=1]. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks) Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.
3waitingforgodel10yI wish I could upvote twice

Most people who actually work full-time for SIAI are too busy to read every comments thread on LW. In some cases, they barely read it at all. The wacky speculation here about SIAI is very odd -- a simple visit in most cases would eliminate the need for it. Surely more than a hundred people have visited our facilities in the last few years, so plenty of people know what we're really like in person. Not very insane or fanatical or controlling or whatever generates a good comic book narrative.

I am much more sympathetic to "keeping goatse off of site X" than "keeping people from seeing goatse," and so that's a reasonable policy. If your site is about posting pictures of cute kittens, then goatse is not a picture of a cute kitten.

However, it seems to me that suspected Langford basilisks are part of the material of LessWrong. Imagine someone posted in the discussion "hey guys, I really want to be an atheist but I can't stop worrying about whether or not the Rapture will happen, and if it does life will suck." It seems... (read more)

No, this is the "collective-action-problem" - where the end of the world arrives - despite a select band of decidedly amateurish messiahs arriving and failing to accomplish anything significant.

You are looking at those amateurs now.

Others are more sidelined than supporting a particular side.

I would prefer you not treat people avoiding a discussion as evidence that people don't differentially evaluate the assertions made in that discussion.

Doing so creates a perverse incentive whereby chiming in to say "me too!" starts to feel like a valuable service, which would likely chase me off the site altogether. (Similar concerns apply to upvoting comments I agree with but don't want to see more of.)

If you are seriously interested in data about how many people believe or disbelie... (read more)

Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.

"For the computer security community, the moral is obvious: if you are designing a system whose functions include providing evidence, it had better be able to withstand hostile review." - Ross Anderson, RISKS Digest vol 18 no 25

Until a clever new thing has had decent outside review, it just doesn't count as knowledge yet.

one wonders how something like that might have evolved, doesn't one? What happened to all the humans who came with the mutation that made them want to find out whether the sabre-toothed tiger was friendly?

I don't see how very unlikely events that people knew the probability of would have been part of the evolutionary environment at all.

In fact, I would posit that the bias is most likely due to having a very high floor for probability. In the evolutionary environment things with probability you knew to be <1% would be unlikely to ever be brought to yo... (read more)

one wonders how something like that might have evolved, doesn't one?

No, really, one doesn't wonder. It's pretty obvious. But if we've gotten to the point where "this bias paid off in the evolutionary environment!" is actually used as an argument, then we are off the rails of refining human rationality.

2Roko10yWhat's wrong with using "this bias paid off in the evolutionary environment!" as an argument? I think people who paid more attention to this might make fewer mistakes, especially in domains where there isn't a systematic, exploitable difference between EEA and now. The evolutionary environment contained entities capable of dishing out severe punishments, uncertainty, etc. If anything, I think that the heuristic that an idea "obviously" can't be dangerous is the problem, not the heuristic that one should take care around possibilities of strong penalties.
4timtyler10yIt is a fine argument for explaining the widespread occurrence of fear. However, today humans are in an environment where their primitive paranoia is frequently triggered by inappropriate stimuli. Dan Gardner goes into this in some detail in his book: Risk: The Science and Politics of Fear [http://www.amazon.com/Risk-Science-Politics-Dan-Gardner/dp/1905264151] Video of Dan discussing the topic: Author Daniel Gardner says Americans are the healthiest and safest humans in the world, but are irrationally plagued by fear. He talks with Maggie Rodriguez about his book 'The Science Of Fear.' [http://vspy.org/?v=Lx9tZ-g3H8g]

I'm willing to take Eliezer's word for it if he thinks it is, so I blanked it

I know why you did it. My intention is to register disagreement with your decision. I claim it would have sufficed to simply let Eliezer delete the comment, without you yourself taking additional action to further delete it, as it were.

Let it go.

I could do without this condescending phrase, which unnecessarily (and unjustifiably) imputes agitation to me.

2ata10ySorry, you're right. I didn't mean to imply condescension or agitation toward you; it was written in a state of mild frustration, but definitely not at or about your post in particular.

An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.

The down side is that you won't be able to network quite as effectively as if you were at a more prestigious university and the pay isn't quite as good.

2utilitymonster10yDon't forget about the ridiculous levels of teaching you're responsible for in that situation. Lots worse than at an elite institution.
2Jordan10yNot necessarily. I'm not referring to no-research universities, which do have much higher teaching loads (although still not ridiculous. Teaching 3 or 4 classes a semester is hardly strenuous). I'm referring to research universities that aren't in the top 100, but which still push out graduate students. My undergrad Alma Mater, Kansas University, for instance. Professors teach 1 or 2 classes a semester, with TA support (really, when you have TAs, teaching is not real work). They are still expected to do research, but the pressure is much less than at a top 50 school.

But for the most part the system seems to be set up so that you first spend a long time working for someone else and research their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to also spend time doing teaching, so that's another time sink.

This depends on the field, university, and maybe country. In many cases, doing your own research is the main focus f... (read more)

This is a politically reinforced heuristic that does not work for this problem.

Transparency is very important regarding people and organisations in powerful and unique positions. The way they act and what they claim in public is weak evidence in support of their honesty. To claim that they have to censor certain information in the name of the greater public good, and to fortify the decision based on their public reputation, bears no evidence about their true objectives. The only way to solve this issue is by means of transparency.

Surely transparenc... (read more)

I would choose that knowledge if there was the chance that it wouldn't find out about it. As far as I understand your knowledge of the dangerous truth, it just increases the likelihood of suffering, it doesn't make it guaranteed.

I don't understand your reasoning here -- bad events don't get a "flawless victory" badness bonus for being guaranteed. A 100% chance of something bad isn't much worse than a 90% chance.

I wonder what fraction of actual historical events a hostile observer taking similar liberties could summarize to also sound like some variety of "a fantasy story designed to manipulate".

There are thousands of truths I know that I don't want you to know about. (Or, to be more precise, that I want you to not know about.) Are you really most interested in those, out of all the truths I know?

I think I'd be disturbed by that if I thought it were true.

If you can think of something equally bad that targets SIAI specifically, (or anyone reading this can), email it to badforsiai.wfg@xoxy.net

It's not about being 'bad', dammit.

Ask yourself what you want, then ask yourself how to achieve it. Eliminate threats because they happen to be in your way, not out of spite.

FYI, this is an excellent example of contempt.

[-][anonymous]10y 6

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."

7JGWeissman10yIf you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence): Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.
2[anonymous]10yThat's definitely a fair objection, and I'll answer: I personally trust Eliezer's honesty, and he is obviously much smarter than myself. However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly. I agree. The above paragraph is my objection.

Desrtopa isn't affiliated with SIAI. You seem to be deliberately designing confusing comments, a la Glenn Beck's "I'm just asking questions" motif.

3David_Gerard10yIs calling someone here Glenn Beck equivalent to Godwination? wfg's post strikes me as almost entirely reasonable (except the last question, which is pointless to ask) and your response as excessively defensive. Also, you're saying this to someone who says he's a past donor and has not yet ruled out being a future donor. This is someone who could reasonably expect his questions to be taken seriously. (I have some experience of involvement in a charity [http://wikimediafoundation.org] that suffers a relentless barrage of blitheringly stupid questions from idiots, and my volunteer role is media handling - mostly I come up with good and effective soundbites. So I appreciate and empathise with your frustration, but I think I can state with some experience behind me that your response is actually terrible.)

Okay. Given your and the folks who downvoted my comment's perceptions, I'll revise my opinion on the matter. I'll also put that under "analogies not to use"; I was probably insufficiently familiar with the pop culture.

The thing I meant to say was just... Roko made a post, Nick suggested it gave bad impressions, Roko deleted it. wfg spent hours commenting again and again about how he had been asked to delete it, perhaps by someone "high up within SIAI", and how future censorship might be imminent, how the fact that Roko had had a basically unrelated conversation suggested that we might be lying to donors (a suggestion that he didn't make explicitly, but rather left to innuendo), etc. I feel tired of this conversation and want to go back to research and writing, but I'm kind of concerned that it'll leave a bad taste in readers' mouths not because of any evidence that's actually being advanced, but because innuendo and juxtapositions, taken out of context, leave impressions of badness.

I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn't leave me feeling as though I couldn't bow out after awhile and return to other projects.

9David_Gerard10yThe essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature. So it can happen here - because it did happen. It's no longer in the class "things that are unthinkable". This is itself a major credibility hit for LW. And when a Roko post disappears - well, it was one of his posts that was disappeared before. With this being the situation, assumptions of bad faith are going to happen. (And "stupidity" is actually the assumption of good faith.) Your problem now is to restore trust in LW's intellectual integrity, because SIAI broke it good and hard. Note that this is breaking an expectation, which is much worse than breaking a rule - if you break a rule you can say "we broke this rule for this reason", but if you break expectations, people feel the ground moving under their feet, and get very upset. There are lots of suggestions in this thread as to what people think might restore their trust in LW's intellectual integrity, SIAI needs to go through them and work out precisely what expectations they broke and how to come clean on this. I suspect you could at this point do with an upside to all this. Fortunately, there's an excellent one: no-one would bother making all this fuss if they didn't really care about LW. People here really care about LW and will do whatever they can to help you make it better. (And the downside is that this is separate from caring about SIAI, but oh well ;-) ) (and yes, this sort of discussion around WP/WMF has been perennial since it started.)
5Emile10yLike Airedale, I don't have that impression - my impression is that 1) Censorship by website's owner doesn't have the moral problems associated with censorship by governments (or corporations), and 2) in online communities, dictatorship can work quite well, as long as the dictator isn't a complete dick. I've seen quite functional communities where the moderators would delete posts without warning if they were too stupid, offensive, repetitive or immoral (such as bragging about vandalizing wikipedia). So personally, I don't see a need for "restoring trust". Of course, as your post attests, my experience doesn't seem to generalize to other posters.
5Airedale10yI’ve seen several variations of this expressed about this topic, and it’s interesting to me, because this sort of view is somewhat foreign to me. I wouldn’t say I’m pro-censorship, but as an attorney trained in U.S. law, I think I’ve very much internalized the idea that the most serious sorts of censorship actions are those taken by the government (i.e., this is what the First Amendment free speech right is about, and that makes sense because of the power of the government), and that there are various levels of seriousness/danger beyond that, with say, big corporate censorship also being somewhat serious because of corporate power, and censorship by the owner of a single blog (even a community one) not being very serious at all, because a blogowner is not very powerful compared to the government or a major corporation, and shutting down one outlet of communication on the Internet is comparatively not a big deal because it’s a big internet where there are lots of other places to express one’s views. If a siteowner exercises his or her right to delete something on a website, it's just not the sort of harm that I weigh very heavily. What I’m totally unsure of is where the average LW reader falls on the scale between you and me, and therefore, despite the talk about the Roko incident being such a public relations disaster and a “spectacular” deletion, I just don’t know how true that is and I’m curious what the answer would be. People who feel like me may just not feel the need to weigh in on the controversy, whereas people who are very strongly anti-censorship in this particular context do.
3[anonymous]10yThat's not really the crux of the issue (for me, at least, and probably not for others). As David Gerard put it, the banning of Roko's post was a blow to people's expectations, which was why it was so shocking. In other words, it was like discovering that LW wasn't what everyone thought it was (and not in a good way). Note: I personally wouldn't classify the incident as a "disaster," but was still very alarming.
3SilasBarta10yI wish you used a classification algorithm that more naturally identified the tension between "wanting low-politics conversation" and comparing someone to Glenn Beck as a means of criticism.
3AnnaSalamon10ySorry. This was probably simply a terrible mistake born of unusual ignorance of pop culture and current politics. I meant to invoke "using questions as a means to plant accusations" and honestly didn't understand that he was radically unpopular. I've never watched anything by him.
2SilasBarta10yWell, it's not that Beck is unpopular; it's that he's very popular with people of a particular political ideology. In fairness, though, he is sort of the canonical example for "I'm just asking questions, here!". (And I wasn't one of those voting you down on this.) I think referring to the phenomenon itself is enough to make one's point on the issue, and it's not necessary to identify a person who does it a lot.
2XiXiDu10yThis is about politics [http://en.wikipedia.org/wiki/Politics]. The censorship of an idea related to a future dictator implementing some policy is obviously about politics. You tell people to take friendly AI seriously. You tell people that we need friendly AI to marshal our future galactic civilisation. People take it seriously. Now the only organisation working on this is the SIAI. Therefore the SIAI is currently in direct causal control of our collective future. So why do you wonder that people care about censorship and transparency? People already care about what the U.S. is doing and demand transparency. Which is ludicrous in comparison to the power of a ruling superhuman artificial intelligence that implements what the SIAI came up with as the seed for its friendliness. If you really think that the SIAI has any importance and could possibly manage to influence or implement the safeguards for some AGI project, then everything the SIAI does is obviously very important to everyone concerned (everyone indeed).
2Vaniver10y-3 after less than 15 minutes suggests so!

If an organisation that is working on a binding procedure for an all-powerful dictator to implement on the scale of the observable universe tried to censor information that could directly affect me for the rest of time in the worst possible manner, I hold a very weak belief that their causal control is much more dangerous than the acausal control between me and their future dictator.

Caring "to censor the topic" doesn't make sense...

So you don't care if I post it everywhere and send it to everyone I can?

To generalise your answer: "the inferential distance is too great to show people why we're actually right." This does indeed suck, but is indeed not reasonably avoidable.

The approach I would personally try is furiously seeding memes that make the ideas that will help close the inferential distance more plausible. See selling ideas in this excellent post.

3TheOtherDave10yFor what it's worth, I gather from various comments he's made in earlier posts that EY sees the whole enterprise of LW as precisely this "furiously seeding memes" strategy. Or at least that this is how he saw it when he started; I realize that time has passed and people change their minds. That is, I think he believes/ed that understanding this particular issue depends on understanding FAI theory depends on understanding cognition (or at least on dissolving common misunderstandings about cognition) and rationality, and that this site (and the book he's working on) are the best way he knows of to spread the memes that lead to the first step on that chain. I don't claim here that he's right to see it that way, merely that I think he does. That is, I think he's trying to implement the approach you're suggesting, given his understanding of the problem.
3David_Gerard10yWell, yes. (I noted it as my approach, but I can't see another one to approach it with.) Which is why throwing LW's intellectual integrity under the trolley like this is itself remarkable.
2TheOtherDave10yWell, there's integrity, and then there's reputation, and they're different. For example, my own on-three-minutes-thought proposed approach [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33hj?c=1] is similar to Kaminsky's [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33o6?c=1?context=3] , though less urgent. (As is, I think, appropriate... more people are working on hacking internet security than on, um, whatever endeavor it is that would lead one to independently discover dangerous ideas about AI. To put it mildly.) I think that approach has integrity, but it won't address the issues of reputation: adopting that approach for a threat that most people consider absurd won't make me seem any less absurd to those people.

The concept of ethical injunctions is known in SIAI circles I think. Enduring personal harm for your cause and doing unethical things for your cause are therefore different. Consider Eliezer's speculation about whether a rationalist "confessor" should ever lie in this post, too. And these personal struggles with whether to ever lie about SIAI's work.

4waitingforgodel10yThat "confessor" link is terrific. If banning Roko's post would reasonably cause discussion of those ideas to move away from LessWrong, then by EY's own reasoning (the link you gave) it seems like a retarded move. Right?
6Bongo10yIf the idea is actually dangerous, it's way less dangerous to people who aren't familiar with pretty esoteric Lesswrongian ideas. They're prerequisites to being vulnerable to it. So getting conversation about the idea away from Lesswrong isn't an obviously retarded idea.

It's not just more clear, it allows for better credit assignment in cases where both good and bad points are made.

You are serious?

  • What qualifies as a 'Friendly' AI?
  • If someone is about to execute an AI running 'CEV' should I push a fat man on top of him and save five people from torture? What about an acausal fat man? :)
  • (How) can acausal trade be used to solve the cooperation problem inherent in funding FAI development? If I recall this topic was one that was explicitly deleted. Torture was mostly just a superficial detail.

... just from a few seconds brainstorming. These are the kinds of questions that can not be discussed without, at the very least, significant ... (read more)

As I understand it, if the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and that it is therefore more robust, then this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it may have been regarded previously.

The decision you refer to here... I'm assuming this is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.) If so, for your purposes you can safely consider 'TDT/CDT' ir... (read more)

2David_Gerard10yThat's the one, that being the one specific thing I've been talking about all the way through. Vladimir Nesov cited acausal decision theories as the reasoning here [http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_institutes_scary/2v9q?c=1] and here [http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_institutes_scary/2vdu?c=1] - if not TDT, then a similar local decision theory. If that is not the case, I'm sure he'll be along shortly to clarify. (I stress "local" to note that they suffer a lack of outside review or even notice. A lack of these things tends not to work out well in engineering or science either.)
3wedrifid10yGood, that had been my impression. Independently of anything that Vladimir may have written it is my observation that the 'TDT-like' stuff was mostly relevant to the question "is it dangerous for people to think X?" Once that has been established the rest of the decision making, what to do after already having reached that conclusion, was for the most part just standard unadorned human thinking. From what I have seen (including your references to reputation self sabotage by SIAI) you were more troubled by the latter parts than the former. Even if you do care about the more esoteric question "is it dangerous for people to think X?" I note that 'garbage in, garbage out' applies here as it does elsewhere. (I just don't like to see TDT unfairly maligned. Tarnished by association as it were.)

I haven't been saying I believed it was wrong to censor (although I do think that it's a bad idea in general). I have been saying I believe it was stupid and counterproductive to censor, and that this is not only clearly evident from the results, but should have been trivially predictable (certainly to anyone who'd been on the Internet for a few years) before the action was taken. And if the LW-homebrewed, lacking in outside review, Timeless Decision Theory was used to reach this bad decision, then TDT was disastrously inadequate (not just slightly inadequ... (read more)

Yes, the attempt to censor was botched and I regret the botchery. In retrospect I should have not commented or explained anything, just PM'd Roko and asked him to take down the post without explaining himself.

7David_Gerard10yThis is actually quite comforting to know. Thank you. (I still wonder WHAT ON EARTH WERE YOU THINKING at the time, but you'll answer as and when you think it's a good idea to, and that's fine.) (I was down the pub with ciphergoth just now and this topic came up ... I said the Very Bad Idea sounded silly as an idea, he said it wasn't as silly as it sounded to me with my knowledge. I can accept that. Then we tried to make sense of the idea of CEV as a practical and useful thing. I fear if I want a CEV process applicable by humans I'm going to have to invent it. Oh well.)
2Roko10yAnd I would have taken it down. My bad for not asking first most importantly.

Peter Singer's media-touted "position on infanticide" is an excellent example of why even philosophers might shy away from talking about hypotheticals in public. You appear to have just become Desrtopa's nightmare.

Please rephrase without using "selling your soul".

Are there any good ways of getting rich that don't involve a Faustian exchange with Lucifer himself?

3Alicorn10yPfft. No good ways.
2katydee10yWithout corrupting my value system, I suppose? I'm interested in getting money for reasons other than my own benefit. I am not fully confident in my ability to enter a field like finance without either that changing or me getting burned out by those around me.

I pointed out to Roko by PM that his comment couldn't be doing his cause any favors, but did not ask him to delete it, and would have discouraged him from doing so.

I'm sure someone else can explain this better than me, but: As I understand it, a util understood timelessly (rather than like money, which there are valid reasons to discount because it can be invested, lost, revalued, etc. over time) builds into how it's counted all preferences, including preferences that interact with time. If you get 10 utils, you get 10 utils, full stop. These aren't delivered to your door in a plain brown wrapper such that you can put them in an interest-bearing account. They're improvements in the four-dimensional state of the en... (read more)

Was it not clear that I do not assign particular credence to Eliezer when it comes to judging risks? I thought I expressed that with considerable emphasis.

I'm aware that you disagree with my conclusions - and perhaps even my premises - but I can assure you that I'm speaking directly to the topic.

Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: current best-guess would be that it would be a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't at all look like how I would design a superstimulus for fear.

2timtyler10yIt typically has the feature that you, all your relatives, friends and loved-ones die - probably enough for most people to seriously want to avoid it. Michael Vasser talks about "eliminating everything that we value in the universe [http://vspy.org/?v=S8NYfqnb04U]". Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up the apocalypse that they think would be the most scary one. Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.
2wedrifid10yYes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)

Mostly not - but then I am a human full of cognitive biases. Has anyone else in the field paid them any attention? Do they have any third-party notice at all? We're talking here about somewhere north of a million words of closely-reasoned philosophy with direct relevance to that field's big questions, for example. It's quite plausible that it could be good and have no notice, because there's not that much attention to go around; but if you want me to assume it's as good as it would be with decent third-party tyre-kicking, I think I can reasonably ask for m... (read more)

You do not understand what you are talking about.

The basilisk idea has no positive value. All it does is cause those who understand it to bear a very low probability of suffering incredible disutility at some point in the future. Explaining this idea to someone does them about as much good as slashing their tires.

4XiXiDu10yI understand that but do not see that the description applies to the idea in question, insofar as it is in my opinion no more probable than fiction and any likelihood is being outweighed by opposing ideas. There are however other well-founded ideas, free speech and transparency, that are being ignored. I also believe that people would benefit from talking about it and possibly overcome and ignore it subsequently. But I'm tired of discussing this topic and will do you the favor of shutting up about it. But remember that I haven't been the one who started this thread. It was Roko and whoever asked to delete Roko's comment.

One big disadvantage is that you won't be interacting with other researchers from whom you can learn.

Research seems to be an insiders' game. You only ever really see the current state of research in informal settings like seminars and lab visits. Conference papers and journal articles tend to give strange, skewed, out-of-context projections of what's really going on, and books summarise important findings long after the fact.

3Danny_Hintze10yAt the same time however, you might be able to interact with researchers more effectively. For example, you could spend some of those research weeks visiting selected labs and seminars and finding out what's up. It's true that this would force you to be conscientious about opportunities and networking, but that's not necessarily a bad thing. Networks formed with a very distinct purpose are probably going to outperform those that form more accidentally. You wouldn't be as tied down as other researchers, which could give you an edge in getting the ideas and experiences you need for your research, while simultaneously making you more valuable to others when necessary (For example, imagine if one of your important research contacts needs two weeks of solid help on something. You could oblige whereas others with less fluid obligations could not.).

Your post has been moved to the Discussion section, not deleted.

Eh, if the stuff hinted at really exist, you should release it anyway. I expect the stuff is not really that bad and you'll hurt SIAI more with innuendo than with the stuff.

From my point of view, and as I discussed in the post (this discussion got banned with the rest, although it's not exactly on that topic), the problem here is the notion of "blackmail". I don't know how to formally distinguish that from any other kind of bargaining, and the way in which Roko's post could be wrong that I remember required this distinction to be made (it could be wrong in other ways, but that I didn't notice at the time and don't care to revisit).

(The actual content edited out and posted as a top-level post.)

+5 is fine!

Y'know, one of the actual problems with LW is that I read it in my Internet as Television time, but there's a REALLY PROMINENT SCORE COUNTER at the top left. This does not help in not treating it as a winnable video game.

(That said, could the people mass-downvoting waitingforgodel please stop? It's tiresome. Please try to go by comment, not poster.)

2komponisto10ySo true! (Except it's at the top right. At least, the one I'm thinking of.)

YES IT IS. In case anyone missed it: it isn't Roko's post we're talking about right now.

5Roko10yThere is still a moral sense in which if, after careful thought, I decided that that material should not have been posted, then any posts which resulted solely from my post are in a sense a violation of my desire to not have posted it. Especially if said posts operate under the illusion that my original post was censored rather than retracted. But in reality such ideas tend to propagate like the imp of the perverse [http://en.wikipedia.org/wiki/The_Imp_of_the_Perverse]: a gnawing desire to know what the "censored" material is, even if everyone who knows what it is has subsequently decided that they wished they didn't! E.g. both me and Nesov have been persuaded (once fully filled in) that this is really nasty stuff and shouldn't be let out. (correct me if I am wrong). This "imp of the perverse" property is actually part of the reason why the original post is harmful. In a sense, this is an idea-virus which makes people who don't yet have it want to have it, but as soon as they have been exposed to it, they (belatedly) realize they really didn't want to know about it or spread it. Sigh.
8XiXiDu10yThe only people who seem to be filled in are you and Yudkowsky. I think Nesov just argues against it based on some very weak belief. As far as I can tell, I got all the material in question. The only possible reason I can see for why one wouldn't want to spread it is that its negative potential does outweigh its very-very-low probability (and that only if you accept a long chain of previous beliefs). It doesn't. It also isn't some genuine and brilliant idea that all this mystery mongering makes it seem to be. Everyone I sent it to just laughed about it. But maybe you can fill me in?
5Roko10yLook, you have three people all of whom think it is a bad idea to spread this. All are smart. Two initially thought it was OK to spread it. Furthermore, I would add that I wish I had never learned about any of these ideas. In fact, I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity; I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all premultiplied by a small probability. (not a very small one, mind you. More like 1/500 type numbers here) If this is not enough warning to make you stop wanting to know more, then you deserve what you get.

I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity;

I wish you'd talk to someone other than Yudkowsky about this. You don't need anyone to harm you; you already seem to harm yourself. You indulge yourself in self-inflicted psychological stress. As Seneca said, "there are more things that terrify us than there are that oppress us, and we suffer more often in opinion than in reality". You worry about, and pay interest on, debt that will likely never be incurred.

Look, you have three people all of whom think it is a bad idea to spread this. All are smart.

I have read about quite a few smart people who hold idiotic beliefs, so I consider this to be only marginal evidence.

Furthermore, I would add that I wish I had never learned about any of these ideas.

You'd rather be some ignorant pleasure maximizing device? For me truth is the most cherished good.

If this is not enough warning to make you stop wanting to know more, then you deserve what you get.

BS.

5Roko10yMore so than not opening yourself up to a small risk of severe consequences? E.g. if you found a diary that clearly belonged to some organized crime boss, would you open it up and read it? I see this situation as analogous.
3Manfred10yReally thought you were going to go with Tom Riddle on this one. Perfect line break for it :)
8Vaniver10yI see a lot more than three people here, most of whom are smart, and most of them think that Langford basilisks are fictional, and even if they aren't, censoring them is the wrong thing to do. You can't quarantine the internet, and so putting up warning signs makes more people fall into the pit.
3katydee10yI saw the original idea and the discussion around it, but I was (fortunately) under stress at the time and initially dismissed it as so implausible as to be unworthy of serious consideration. Given the reactions to it by Eliezer, Alicorn, and Roko, who seem very intelligent and know more about this topic than I do, I'm not so sure. I do know enough to say that, if the idea is something that should be taken seriously, it's really serious. I can tell you that I am quite happy that the original posts are no longer present, because if they were I am moderately confident that I would want to go back and see if I could make more sense out of the matter, and if Eliezer, Alicorn, and Roko are right about this, making sense out of the matter would be seriously detrimental to my health. Thankfully, either it's a threat but I don't understand it fully, in which case I'm safe, or it's not a threat, in which case I'm also safe. But I am sufficiently concerned about the possibility that it's a threat that I don't understand fully but might be able to realize independently given enough thought that I'm consciously avoiding extended thought about this matter. I will respond to posts that directly relate to this one but am otherwise done with this topic-- rest assured that, if you missed this one, you're really quite all right for it!
5Vaniver10yThis line of argument really bothers me. What does it mean for E, A, and R to seem very intelligent? As far as I can tell, the necessary conclusion is "I will believe a controversial statement of theirs without considering it." When you word it like that, the standards are a lot higher than "seem very intelligent", or at least narrower- you need to know their track record on decisions like this. (The controversial statement is "you don't want to know about X," not X itself, by the way.)
9katydee10yI am willing to accept the idea that (intelligent) specialists in a field may know more about their field than nonspecialists and are therefore more qualified to evaluate matters related to their field than I.
5Vaniver10yGood point, though I would point out that you need E, A, and R to be specialists when it comes to how people react to X, not just X, and I would say there's evidence that's not true.
4Desrtopa10yConsidering the extraordinary appeal that forbidden knowledge has even for the average person, let alone the exceptionally intellectually curious, I don't think this is a very effective way to warn a person off of seeking out the idea in question. Far from deserving what they get, such a person is behaving in a completely ordinary manner, to exceptionally severe consequence. Personally, I don't want to know about the idea (at least not if it's impossible without causing myself significant psychological distress to no benefit,) but I've also put significant effort into training myself out of responses such as automatically clicking links to shock sites that say "Don't click this link!"
5WrongBot10yIf the idea is dangerous in the first place (which is very unlikely), it is only dangerous to people who understand it, because understanding it makes you vulnerable. The better you understand it and the more you think about it, the more vulnerable you become. In hindsight, I would prefer to never have read about the idea in question. I don't think this is a big issue, considering the tiny probability that the scenario will ever occur, but I am glad that discussing it continues to be discouraged and would appreciate it if people stopped needlessly resurrecting it over and over again.
7Vaniver10yThis strikes me as tautological and/or confusing definitions. I'm happy to agree that the idea is dangerous to people who think it is dangerous, but I don't think it's dangerous and I think I understand it. To make an analogy, I understand the concept of hell but don't think it's dangerous, and so the concept of hell does not bother me. Does the fact that I do not have the born-again Christian's fear of hell mean that they understand it and I don't? I don't see why it should.
5WrongBot10yI can't figure out a way to explain this further without repropagating the idea, which I will not do. It is likely that there are one or more pieces of the idea which you are not familiar with or do not understand, and I envy your epistemological position.
2Vladimir_Nesov10yI wasn't "filled in", and I don't know whether my argument coincides with Eliezer's. I also don't understand why he won't explain his argument, if it's the same as mine, now that content is in the open (but it's consistent with, that is responds to the same reasons as, continuing to remove comments pertaining to the topic of the post, which makes it less of a mystery).
2Roko10yBut you think that it is not a good thing for this to propagate more?
3Vladimir_Nesov10yAs a decision on expected utility under logical uncertainty, but extremely low confidence, yes. I can argue that it most certainly won't be a bad thing (which I even attempted in comments to the post itself, my bad), the expectation of it being a bad thing derives from remaining possibility of those arguments failing. As Carl said [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33vx?c=1], "that estimate is unstable in the face of new info" (which refers to his own argument, not necessarily mine).

Are you joking? Do you have any idea what a retarded law can do to existential risks?

P(law will pass|it is retarded && its sole advocate publicly described it as retarded) << 10^-6

I'd also like to see His Dark Materials with rationalist!Lyra. The girl had an alethiometer. She should have kicked way more ass than she did as soon as she realized what she had.

However, discussion of the chain of reasoning is on-topic for LessWrong (discussing a spectacularly failed local chain of reasoning and how and why it failed), and continued removal of bits of the discussion does constitute throwing LessWrong's integrity in front of the trolley.

You're throwing around accusations of lying pretty lightly.

I am not one of the downvoters you are complaining about but the distinction is a temporal one, not one of differing judgement. I have since had the chance to add my downvote. That suggests my reasoning may have a slightly higher correlation at least. :)

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

I understand that status-grabbing phrasing can explain why downvotes were in fact

... (read more)

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Following is another analysis.

Consider a die that was tossed 20 times, and each time it fell even side up. It's not surprising merely because it's a low-probability event: you wouldn't be surprised if you observed most other combinations that are equally improbable under the hypothesis that the die is fair. You are surprised because the pattern you see suggests that there is an explanation for your observ... (read more)
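A minimal numeric sketch of this point (the "always lands even" alternative and the prior odds below are my own illustrative assumptions, not anything stated in the comment): the specific run of 20 evens is exactly as improbable as any other fixed sequence under the fair-die hypothesis, but it is vastly more probable under a biased-die hypothesis, so it shifts the odds toward that explanation.

```python
from fractions import Fraction

# Hypotheses: H_fair = fair die; H_biased = die that always lands even (assumed alternative).
p_data_given_fair = Fraction(1, 2) ** 20   # any particular even/odd pattern of length 20
p_data_given_biased = Fraction(1)          # 20 evens are certain under the biased hypothesis

bayes_factor = p_data_given_biased / p_data_given_fair  # 2^20, about a million to one
prior_odds = Fraction(1, 1000)             # assumed: biased dice are rare
posterior_odds = prior_odds * bayes_factor

print(float(bayes_factor))    # 1048576.0
print(float(posterior_odds))  # ~1048.6 : 1 in favour of the biased die
```

A mixed run with the same raw probability doesn't favour any simple alternative over the fair die, which is why it provokes no surprise.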

2waitingforgodel10ySince we're playing the condescension game, following is another analysis: You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.
2Vladimir_Nesov10yI'm not. Seriously. "Whenever convenient" is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem. Please unpack the references. I don't understand.
2waitingforgodel10ySorry, it looks like we're suffering from a bit of cultural crosstalk. Slogans, much like ontological arguments, are designed to make something of an illusion in the mind -- a lever to change the way you look at the world. "Whenever convenient" isn't there as a statement of belief, so much as a prod to get you thinking... "How much do I trust that EY knows what he's doing?" You may as well argue with Nike: "Well, I can hardly do everything..." (re: Just Do It) That said I am a rationalist... I just don't see any harm in communicating to the best of my ability. I linked you to this thread [http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2o3v?c=1], where I did display some biases, but also decent evidence for not having the ones you're describing... which I take to be roughly what you'd expect of a smart person off the street.
2Vladimir_Nesov10yI can't place this argument at all in relation to the thread above it. Looks like a collection of unrelated notes to me. Honest. (I'm open to any restatement; don't see what to add to the notes themselves as I understand them.)
3waitingforgodel10yThe whole post you're replying to comes from your request to "Please unpack the references". Here's the bit with references, for easy reference: The first part of the post you're replying to, "Sorry, it looks... best of my ability", maps to "You read a.. irrational" in the quote above, and this tries to explain the problem as I understand it: that you were responding to a slogan's words, not its meaning. Explained its meaning. Explained how "Whenever convenient" was a pointer to the "Do I trust EY?" thought. Gave a backup example via the Nike slogan. The last paragraph in the post you're replying to tried to unpack the "you focused... held to it" from the above quote.

This thread raises the question of how many biologists and medical researchers are on here. Due to our specific cluster I expect a strong leaning towards IT people. So AI research gets disproportionate recognition, while medical research, including direct life extension, falls by the wayside.

[-][anonymous]10y 5

An advice sheet for mathematicians considering becoming quants. It's not a path that interests me, but if it was I think I'd find this useful.

He's changed his mind since.

Or so you hope.

He has; this is made abundantly clear in the Metaethics sequence and particularly the "coming of age" sequence. That passage appears to be a reflection of the big embarrassing mistake he talked about, when he thought that he knew nothing about true morality (see "Could Anything Be Right?") and that a superintelligence with a sufficiently "unconstrained" goal system (or what he'd currently refer to as "a rock") would necessarily discover the ultimate true morality, so th... (read more)

Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.

Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there's simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.

But I think your pooh-poohing of such infantile and amateurish efforts as there are is silly when the reasoning is e... (read more)

people can be wrong, regardless of their previous reputation

Still, it's incorrect to argue from existence of examples. You have to argue from likelihood. You'd expect more correctness from a person with reputation for being right than from a person with reputation for being wrong.

People can also go crazy, regardless of their previous reputation, but it's improbable, and not an adequate argument for their craziness.

And you need to know what fact you are trying to convince people about, not just search for soldier-arguments pointing in the preferred dir... (read more)

Consider it an experiment.

All right. In that case: what would you consider meaningful experimental results, and what would they demonstrate?

Huh, so there was a change?

Far be it from me to be anything but an optimist. I'm going with 'exceptions'. :)

But generally most of all I want to know about truths that other agents don't want me to know about.

I'm not sure that's a very good heuristic - are you sure that truly describes the truths you care most about? It seems analogous to the fact that people are more motivated by a cause if they learn that some people oppose it, which is silly.

The compelling argument for me is that knowing about bad things is useful to the extent that you can do something about them, and it turns out that people who don't know anything (call them "non-cognoscenti") will probably free-ride their way to any benefits of action on the collective-action problem that is at issue here, whilst avoiding drawing any particular attention to themselves ==> avoiding the risks.

Vladimir Nesov doubts this prima facie, i.e. he asks "how do you know that the strategy of being a completely inert player is best?".

-- to which I answer, "if you want to be the first monkey shot into space, then good luck" ;D

But if you express it as a hypothetical choice between being a person who didn't know about any of this and had no way of finding out, versus what I have now, I choose the former.

I can't believe I'm hearing this from a person who wrote about Ugh fields. I can't believe I'm reading a plea for ignorance on a blog devoted to refining rationality. Ignorance is bliss, is that the new motto now?

Normal updating.

  • Original prior for basilisk-danger.
  • Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.
  • FormallyknownasRoko stares at basilisk, turns to stone (read: appears to truly wish e had never thought it). Revise pr(basilisk-danger) upwards.
  • Vladimir_Nesov stares at basilisk, turns to stone (read: engages idea, decides it is dangerous). Revise pr(basilisk-danger) upwards.
  • Vaniver stares at basilisk, is unharmed (read: engages idea, decides it is not dangerous). Revise pr(
... (read more)
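Roughly what the list above describes, as a sketch with invented numbers (the prior odds and the per-witness likelihood ratios here are assumptions for illustration; the comment supplies none):

```python
# Sequential odds updating: each observer's reaction multiplies the odds by
# P(reaction | idea dangerous) / P(reaction | idea not dangerous).
prior_odds = 0.01  # assumed prior odds that the idea is genuinely dangerous

likelihood_ratios = [
    ("Eliezer engages, decides to censor", 3.0),   # assumed 3x likelier if dangerous
    ("Roko wishes he had never thought it", 2.0),
    ("Nesov engages, decides it is dangerous", 2.0),
    ("Vaniver engages, is unharmed", 0.5),         # evidence pointing the other way
]

odds = prior_odds
for reaction, ratio in likelihood_ratios:
    odds *= ratio
    print(f"after '{reaction}': odds = {odds:.3f}")

print(f"posterior probability ~ {odds / (1 + odds):.3f}")  # ~0.057 with these numbers
```

How large the final update is obviously depends entirely on the assumed likelihood ratios, which is where the disagreement in the replies below lies.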
7Jack10yOkay, but more than four people have engaged with the idea. Should we take a poll? The problem of course is that majorities often believe stupid things. That is why a free marketplace of ideas free from censorship is a really good thing! The obvious thing to do is exchange information until agreement but we can't do that, at least not here. Also, the people who think it should be censored all seem to disagree about how dangerous the idea really is, suggesting it isn't clear how it is dangerous. It also seems plausible that some people have influenced the thinking of other people- for example it looks like Roko regretted posting after talking to Eliezer. While Roko's regret is evidence that Eliezer is right, it isn't the same as independent/blind confirmation that the idea is dangerous.
4Vaniver10yHow many of me would there have to be for that to work? Also, why is rationalism the risk factor for this basilisk? Maybe the basilisk only turns to stone people with brown eyes (or the appropriate mental analog).
2Vladimir_Nesov10yThis equivocates the intended meaning of turning to stone in the original discussion you replied to. Fail. (But I understand what you meant now.)
[-][anonymous]10y 4

Does this decree have a retrospective effect? And what about the private message system?

Perhaps the reason you are having trouble coming up with a satisfactory characterization of blackmail is that you want a definition with the consequence that it is rational to resist blackmail and therefore not rational to engage in blackmail.

Pleasant though this might be, I fear the universe is not so accommodating.

Elsewhere VN asks how to unpack the notion of a status-quo, and tries to characterize blackmail as a threat which forces the recipient to accept less utility than she would have received in the status quo. I don't see any reason in game theory ... (read more)

Another pointless flamewar.

On RW it's called Headless Chicken Mode, when the community appears to go nuts for a time. It generally resolves itself once people have the yelling out of their system.

The trick is not to make any decisions based on the fact that things have gone into headless chicken mode. It'll pass.

[The comment this is in reply to was innocently deleted by the poster, but not before I made this comment. However, I think I'm making a useful point here, so would prefer to keep this comment.]

FWIW, loads of my comments were apparently deleted by administrators at the time.

4wedrifid10yI was away for a couple of months while the incident took place and when I returned I actually used your user page to reconstruct most of the missing conversation (with blanks filled from other user pages and an alternate source). Yours was particularly useful because of how prolific you were with quoting those to whom you were replying. I still have ten pages of your user comments stored on my harddrive somewhere. :)
4timtyler10yYes, some weren't fully deleted, but - IIRC - others were. If I am remembering this right, the first deleted post (Roko's) left comments behind in people's profiles, but with the second deleted post the associated comments were rendered completely inaccessible to everyone. At the time, I figured that the management was getting better at nuking people's posts. After that - rather curiously - some of my subsequent "marked deleted" posts remained visible to me when logged in - so I wasn't even aware of what had been "marked deleted" to everyone else for most of the time - unless I logged out of the site.

Open source AGI is not a good thing. In fact, it would be a disastrously bad thing. Giving people the source code doesn't just let them inspect it for errors, it also lets them launch it themselves. If you get an AGI close to ready for launch, then sharing its source code means that instead of having one party to decide whether there are enough safety measures ready to launch, you have many parties individually deciding whether to launch it themselves, possibly modifying its utility function to suit their own whim, and the hastiest party's AGI wins.

Ideally... (read more)

I think that you don't realize just how bad the situation is.

I don't think that you realize how bad it is. I'd rather have the universe be paperclipped than support the SIAI if that means that I might be tortured for the rest of infinity!

To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.

Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.

Thank you.

4XiXiDu10yBut imagine there was a person a level above yours who went to create some safeguards for an AGI. That person would tell you that you can be sure that the safeguards s/he plans to implement will benefit everyone. Are you just going to believe that? Wouldn't you be worried and demand that their project be supervised? You are in a really powerful position because you are working for an organisation that might influence the future of the universe. Is it really weird to be skeptical and ask for reassurance of their objectives?
2[anonymous]10yCurrently, there are no entities in physical existence which, to my knowledge, have the ability to torture anyone for the rest of eternity. You intend to build an entity which would have that ability (or if not for infinity, for a googolplex of subjective years). You intend to give it a morality based on the massed wishes of humanity - and I have noticed that other people don't always have my best interests at heart. It is possible - though unlikely - that I might so irritate the rest of humanity that they wish me to be tortured forever. Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability. It may well be that you are also raising my expected reward enough for that to be more than counterbalanced, but that's not what you're saying - any support for SIAI will, unless I'm completely misunderstanding, raise the probability of infinite torture for some individuals.
4Eliezer Yudkowsky10ySee the "Last Judge" section of the CEV paper. As Vladimir observes, the alternative to SIAI doesn't involve nothing new happening.
3[anonymous]10yThat just pushes the problem along a step. IF the Last Judge can't be mistaken about the results of the AI running AND the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe. That's very far from saying there's a zero probability.
2ata10yIf the Last Judge peeks at the output and finds that it's going to decide to torture people, that doesn't imply abandoning FAI, it just requires fixing the bug and trying again.
2Vladimir_Nesov10yJust because AGIs have capability to inflict infinite torture, doesn't mean they have a motive. Also, status quo (with regard to SIAI's activity) doesn't involve nothing new happening.
5[anonymous]10yI explained that he is planning to supply one with a possible motive (namely that the CEV of humanity might hate me or people like me). It is precisely because of this that the problem arises. A paperclipper, or any other AGI whose utility function had nothing to do with humanity's wishes, would have far less motive to do this - it might kill me, but it really would have no motive to torture me.

but he won me back by answering anyway <3

and a similar "care aggressively" characteristic can probably be seen in any other discussion I engage in.

I don't dispute that, and this was part of what prompted the warning to XiXi. When a subject is political and your opponents are known to use aggressive argumentative styles it is important to take a lot of care with your words - give nothing that could potentially be used against you.

The situation is analogous to recent discussion of refraining to respond to the trolley problem. If there is the possibility that people may use your words agains... (read more)

Okay, you can leave it abstract. Here's what I was hoping to have explained: why were you discussing what people would really be prepared to sacrifice?

... and not just the surface level of "just for fun," but also considering how these "just for fun" games get started, and what they do to enforce cohesion in a group.

4David_Gerard10yBig +1. Every cause wants to be a cult [http://lesswrong.com/lw/lv/every_cause_wants_to_be_a_cult/]. Every individual (or, realistically, as many as possible) must know how to resist this for a group with big goals not to go off the rails.

Why would you even ask me that? Clearly I have considered the possibility (given that I am not a three year old) and equally clearly me answering you would not make much sense. :)

But the question of whether to trust people's nightmares is an interesting one. I tend to be of the mind that if someone has that much of an anxiety problem prompted by a simple abstract thought then it is best to see that they receive the appropriate medication and therapy. After that has been taken care of I may consider their advice.

to censor it as far as possible

As I noted, it's a trolley problem: you have the bad alternative of doing nothing, and then there's the alternative of doing something that may be better and may be worse. This case observably came out worse, and that should have been trivially predictable by anyone who'd been on the net a few years.

So the thinking involved in the decision, and the ongoing attempts at suppression, admits of investigation.

But yes, it could all be a plot to get as many people as possible thinking really hard about the "forbidden" i... (read more)

When someone says "look, here is this thing you did that led to these clear problems in reality" and the person they're talking to answers "ah, but what is reality?" then the first person may reasonably consider that dodging the question.

Care to share a more concrete context?

No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).

No, yielding and the lack thereof is not the indicator of zeal of which I speak. It is the sending out of your soldiers so universally that they reach even into the territory of others' preferences. That critical line between advocacy of policy and the presumption that others must justify their very thoughts (what topics interest them and how their thoughts are affected by the threat of public shaming and censorship) is crossed.

The lack of boundaries is a telling sign according to my model of social dynamics.

This doesn't seem like an interesting question

It wasn't Vladimir_Nesov's interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing. They are topics that are at least as relevant as such things as 'Sleeping Beauty' that people have merrily prattled on about for decades.

That you support censorship of certain ideas by no means requires you to exhaustively challenge every possible downside to said censorship. Even if the decision were wise and necessary, there are allowed to be disappointing consequences. That's just how things are sometimes.

The zeal here is troubling.

2Vladimir_Nesov10yWhat do you mean by "decide"? Whether they are interested in that isn't influenced by my decisions, and I can well think about whether they are, or whether they should be (i.e. whether there is any good to be derived from that interest). I opened this thread by asking, You answered this question, and then I said what I think about that kind of questions. It wasn't obvious [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33hs?c=1] to me that you didn't think of some other kind of questions that I find important, so I asked first, not just rhetorically. What you implied in this comment [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33do?c=1] seems very serious, and it was not my impression that something serious was taking place as a result of the banning incident, so of course I asked. My evaluation of whether the topics excluded (that you've named) are important is directly relevant to the reason your comment drew my attention.

We don't formally understand even the usual game theory, let alone acausal trade. It's far too early to discuss its applications.

You do see the irony there I hope...

3XiXiDu10yWould you have censored the information? If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum? Would you be interested to discuss it?
6wedrifid10yNo, for several reasons. I have made no secret of the fact that I don't think Eliezer processes perceived risks rationally and I think this applies in this instance. This is not a claim that censorship is always a bad idea - there are other obvious cases where it would be vital. Information is power, after all. Only if there is something interesting to say on the subject. Or any interesting conversations to be had on the various related subjects that the political bias would interfere with. But the mere fact that Eliezer forbids it doesn't make it more interesting to me. In fact, the parts of Roko's posts that were most interesting to me were not even the same parts that Eliezer threw a tantrum over. As far as I know Roko has been bullied out of engaging in such conversation even elsewhere and he would have been the person most worth talking to about that kind of counterfactual. Bear in mind that the topic has moved from the realm of abstract philosophy to politics. If you make any mistakes, demonstrate any ignorance or even say things that can be conceivably twisted to appear as such then expect that will be used against you here to undermine your credibility on the subject. People like Nesov and and jimrandom care, and care aggressively. Post away, if I have something to add then I'll jump in. But warily.
4XiXiDu10yI am not sure if I understand the issue and if it is as serious as some people obviously perceive it to be. Because if I indeed understand it, then it isn't as dangerous to talk about it in public as it is portrayed to be. But that would mean that there is something wrong with otherwise smart people, which is unlikely? So should I conclude that it is more likely that I simply do not understand it? What irritates me is that people like Nesov are saying [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33h3?c=1] that "we don't formally understand even the usual game theory, let alone acausal trade". Yet they care aggressively to censor the topic. I've been told before that it is due to people getting nightmares from it. If that is the reason then I do not think censorship is justified at all.
3wedrifid10yI wouldn't rule out the possibility that you do not fully understand it and they are still being silly. ;)

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

If you're simply objecting to them via rhetorical question, I've got nothing useful to add.

If it matters, I haven't downvoted anyone on this thread, though I reserve the right to do so later.

Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.

5Roko10yI can confirm that I was not asked to delete the comment but did so voluntarily.
6Vladimir_Nesov10yI think you are too trigger-happy.

You can't assume a bottom line and then rationalize your way towards it faster.

Not implied by grandparent.

PMed the message I sent.

Certainly not anything like standard hiring procedure.

5waitingforgodel10yThanks Nick. Please pardon my prying, but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc. I've seen evidence of fanaticism, but have always been confused about what the source is (did they start that way, or were they molded?). Basically, I would very much like to know what your experience has been as you've gotten closer to SIAI. I'm sure I'm not the only (past, perhaps future) donor who would appreciate the air being cleared about this.
9Nick_Tarleton10yNo problem, and I welcome more such questions. No; if anything, I see explicit advocacy, as Carl describes [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33et?c=1], against natural emergent fanaticism (see below), and people becoming less fanatical to the extent that they're influenced by group norms. I don't see emergent individual fanaticism generating significant unhealthy group dynamics like these. I do see understanding and advocacy of indirect utilitarianism [http://www.philosophyetc.net/2005/06/indirect-utilitarianism.html] as the proper way to 'shut up and multiply'. I would be surprised if I saw any of the specific things you mention clearly going on, unless non-manipulatively advising people on how to live up to ideals they've already endorsed counts. I and others have at times felt uncomfortable pressure to be more altruistic, but this is mostly pressure on oneself — having more to do with personal fanaticism and guilt than group dynamics, let alone deliberate manipulation — and creating a sense of pressure is generally recognized as harmful. I think the major source is that self-selection for taking the Singularity seriously, and for trying to do something about it, selects for bullet-biting [http://wiki.lesswrong.com/wiki/Bite_the_bullet] dispositions that predispose towards fanaticism, which is then enabled by having a cause and a group to identify with. I don't think this is qualitatively different from things that happen in other altruistic causes, just more common in SIAI due to much stronger selective pressure for bullet-biting. I also have the impression that Singularitarian fanaticism in online discussions is more common among non-affiliated well-wishers than people who have spent time with SIAI (but there are more of the former category, so it's not easy to tell).
4Larks10yI was there for a summer and don't think I was ever even asked to donate money.
4Roko10yVery little, if any.

my life needs to be generally pleasant otherwise, and the work needs to be at least somewhat meaningful. I've tried the "just grit your teeth and toil" mentality, and it doesn't work - maybe for someone else it does, but not for me.

The first part is the part I'm calling into question, not the second. Of course you need to be electrified by your work. It's hard to do great things when you're toiling instead of playing.

But your standards for general pleasantness are, as far as I can tell, the sieve for a lot of research fields. As an example, it... (read more)

Speaking as someone who is in grad school now, even with prior research, the formal track of grad school is very helpful. I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard. It may be that people have more flexibility in math than in other areas. Certainly my anecdotal impression is that people in some areas such as biology don't have this degree of freedom. I'm also learning more about how to research and how to present my results. Those seem to be the largest advantages. Incidentally, my impression is that for grad school, at least in many areas, taking a semester or two off if very stressed isn't treated that badly if one is otherwise doing productive research.

Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, so I could wo... (read more)

Alaska might be a reasonable Finland substitute, weather-wise, but the other issues will be difficult to resolve (if you're moving to the US to make a bunch of money, Alaska is not the best place to do it).

One of my favorite professors was Brazilian, who went to graduate school at the University of Rochester. Horrified (I used to visit my ex in upstate New York, and so was familiar with the horrible winters that take up 8 months of the year without the compensations that convince people to live in Scandinavia), I asked him how he liked the transition- and ... (read more)

Kaj, why don't you add the option of getting rich in your 20s by working in finance, then paying your way into research groups in your late 30s? PalmPilot guy, uh Jeff Hawkins essentially did this. Except he was an entrepreneur.

3Kaj_Sotala10yThat doesn't sound very easy.
6wedrifid10ySounds a heck of a lot easier than doing an equivalent amount of status grabbing within academic circles over the same time. Money is a lot easier to game and status easier to buy.
8David_Gerard10yThere is the minor detail that it really helps not to hate each and every individual second of your working life in the process. A goal will only pull you along to a certain degree. (Computer types know all the money is in the City. I did six months of it. I found the people I worked with and the people whose benefit I worked for to be excellent arguments for an unnecessarily bloody socialist revolution.)
2wedrifid10yFor many people that is about half way between the Masters and PhD degrees. ;) If only being in a university was a guarantee of an enjoyable working experience.
2Roko10yAgreed. Average Prof is a nobody at 40, average financier is a millionaire. shrugs
1sark10yThank you for this. This was a profound revelation for me.
4Manfred10yUpvoted for comedy.
3Roko10yAlso, you can get a PhD in a relevant mathy discipline first, thereby satisfying the condition of having done research. And the process of dealing with the real world enough to make money will hopefully leave you with better anti-akrasia tactics, better ability to achieve real-world goals, etc. You might even be able to hire others.

Living systems including humans also thrive in cold conditions. Most species on the planet today have persisted through multiple glaciation periods, but not through pre-Pleistocene level warmth or rapid warming events.

Plus, the history of the Pleistocene, in which our record of glaciation exists, contains no events of greenhouse gas release and warming comparable to the one we're in now; this is not business as usual on the track to reglaciation. Claiming that the history of the planet is very clear that we're headed for reglaciation is flat-out misleading. The last time the world had CO2 levels as high as they are now, it wasn't going through cyclical glaciation.

One important aspect of corporate reputation is what it's like to work there-- and this is important on the department level and smaller level.

Abusive work environments cause a tremendous amount of misery, and there's no reliable method of finding out whether a job is likely to land you in one.

This problem is made worse if leaving a job makes a potential employee seem less reliable.

Another aspect of a universal reputation system is that there needs to be some method of updating and verification. Credit agencies are especially notable for being sloppy.

I just have trouble understanding what you are saying. That might very well be my fault. I do not intent any hostile attack against you or the SIAI. I'm just curious, not worried at all. I do not demand anything. I'd like to learn more about you people, what you believe and how you arrived at your beliefs.

There is this particular case of the forbidden topic and I am throwing everything I got at it to see if the beliefs about it are consistent and hold water. That doesn't mean that I am against censorship or that I believe it is wrong. I believe it is righ... (read more)

4JGWeissman10yThe problem with that is that Eliezer and those who agree with him, including me, cannot speak freely about our reasoning on the issue, because we don't want to spread the idea, so we don't want to describe it and point to details about it as we describe our reasoning. If you imagine yourself in our position, believing the idea is dangerous, you could tell that you wouldn't want to spread the idea in the process of explaining its danger either. Under more normal circumstances, where the ideas we disagree about are not thought by anyone to be dangerous, we can have effective discussion by laying out our true reasons for our beliefs, and considering counter arguments that refer to the details of our arguments. Being cut off from our normal effective methods of discussion is stressful, at least for me. I have been trying to persuade people who don't know the details of the idea or don't agree that it is dangerous that we do in fact have good reasons for believing it to be dangerous, or at least that this is likely enough that they should let it go. This is a slow process, as I think of ways to express my thoughts without revealing details of the dangerous idea, or explaining them to people who know but don't understand those details. And this ends up involving talking to people who, because they don't think the idea is dangerous and don't take it seriously, express themselves faster and less carefully, and who have conflicting goals like learning or spreading the idea, or opposing censorship in general, or having judged for themselves the merits of censorship (from others just like them) in this case. This is also stressful. I engage in this stressful topic, because I think it is important, both that people do not get hurt from learning about this idea, and that SIAI/Eliezer do not get dragged through mud for doing the right thing. Sorry, but I am not here to help you get the full understanding you need to judge if the beliefs are consistent and hold water. As I ha
3Vladimir_Nesov10yNote that this shouldn't be possible other than through arguments from authority. (I've just now formed a better intuitive picture of the reasons for danger of the idea, and saw some of the comments previously made unnecessarily revealing, where the additional detail didn't actually serve the purpose of convincing people I communicated with, who lacked some of the prerequisites for being able to use that detail to understand the argument for danger, but would potentially gain (better) understanding of the idea. It does still sound silly to me, but maybe the lack of inferential stability of this conclusion should actually be felt this way - I expect that the idea will stop being dangerous in the following decades due to better understanding of decision theory.)
4timtyler10yYou are just going to piss off the management. IMO, it isn't that interesting. Yudkowsky apparently agrees that squashing it was handled badly [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33uv?c=1]. Anyway, now Roko is out of self-imposed exile, I figure it is about time to let it drop.

When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.

I already had Anna Salamon telling me something about politics. You sound just as incomprehensible to me. Sorry, not meant as an attack.

Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.

I stated several times in the past that I am completely in favor of censorship, I have no idea why you are telling me this.

3jimrandomh10yOur rules and intuitions about free speech and censorship are based on the types of censorship we usually see in practice. Ordinarily, if someone is trying to censor a piece of information, then that information falls into one of two categories: either it's information that would weaken them politically, by making others less likely to support them and more likely to support their opponents, or it's information that would enable people to do something that they don't want done. People often try to censor information that makes people less likely to support them, and more likely to support their opponents. For example, many governments try to censor embarrassing facts ("the Purple Party takes bribes and kicks puppies!"), the fact that opposition exists ("the Pink Party will stop the puppy-kicking!") and its strength ("you can join the Pink Party, there are 10^4 of us already!"), and organization of opposition ("the Pink Party rally is tomorrow!"). This is most obvious with political parties, but it happens anywhere people feel like there are "sides" - with religions (censorship of "blasphemy") and with public policies (censoring climate change studies, reports from the Iraq and Afghan wars). Allowing censorship in this category is bad because it enables corruption, and leaves less-worthy groups in charge. The second common instance of censorship is encouragement and instructions for doing things that certain people don't want done. Examples include cryptography, how to break DRM, pornography, and bomb-making recipes. Banning these is bad if the capability is suppressed for a bad reason (cryptography enables dissent), if it's entangled with other things (general-purpose chemistry applies to explosives), or if it requires infrastructure that can also be used for the first type of censorship (porn filters have been caught blocking politicians' campaign sites). These two cases cover 99.99% of the things we call "censorship", and within these two categories, censorship
6Jack10yEven if this is right the censorship extends to perhaps true conversations about why the post is false. Moreover, I don't see what truth has to do with it. There are plenty of false claims made on this site that nonetheless should be public because understanding why they're false and how someone might come to think that they are true are worthwhile endeavors. The question here is rather straight forward: does the harm of the censorship outweigh the harm of letting people talk about the post. I can understand how you might initially think those who disagree with you are just responding to knee-jerk anti-censorship instincts that aren't necessarily valid here. But from where I stand the arguments made by those who disagree with you do not fit this pattern. I think XiXi has been clear in the past about why the transparency concern does apply to SIAI. We've also seen arguments for why censorship in this particular case is a bad idea.
3Vaniver10yThere are clearly more than two options here. There seem to be two points under contention: It is/is not (1/2) reasonable to agree with the forbidden post. It is/is not (3/4) desirable to know the contents of the forbidden post. You seem to be restricting us to either 2+3 or 1+4. It seems that 1+3 is plausible (should we keep children from ever knowing about death because it'll upset them?), and 2+4 seems like a good argument for restriction of knowledge (the idea is costly until you work through it, and the benefits gained from reaching the other side are lower than the costs). But I personally suspect 2+3 is the best description, and that doesn't explain why people trying to spread it are doing a bad thing. Should we delete posts on Pascal's Wager because someone might believe it?
3David_Gerard10yExcluded middle, of course: incorrect criterion. (Was this intended as a test?) It would not deserve protection if it were useless (like spam), not "if it were false." The reason I consider sufficient to keep it off LessWrong is that it actually hurt actual people. That's pretty convincing to me. I wouldn't expunge it from the Internet (though I might put a warning label on it), but from LW? Appropriate. Reposting it here? Rude. Unfortunately, that's also an argument as to why it needs serious thought applied to it, because if the results of decompartmentalised thinking can lead there, humans need to be able to handle them [http://lesswrong.com/lw/39l/how_to_lose_100_karma_in_6_hours_what_just/34lc?c=1] . As Vaniver pointed out [http://lesswrong.com/lw/38u/best_career_models_for_doing_research/34m2?c=1], there are previous historical texts that have had similar effects. Rationalists need to be able to cope with such things, as they have learnt to cope with previous conceptual basilisks. So it's legitimate LessWrong material at the same time as being inappropriate for here. Tricky one. (To the ends of that "compartmentalisation" link, by the way, I'm interested in past examples of basilisks and other motifs of harmful sensation in idea form. Yes, I have the deleted Wikipedia article.) Note that I personally found the idea itself silly at best.

What kind of evidence is it?

Better evidence than I've ever seen in support of the censored idea. I have these well-founded principles, free speech and transparency, and weigh them against the evidence I have in favor of censoring the idea. That evidence is merely 1.) Yudkowsky's past achievements, 2.) his output and 3.) intelligence. That intelligent people have been and are wrong about certain ideas while still being productive and right about many other ideas is evidence to weaken #3. That people lie and deceive to get what they want is evidence agai... (read more)

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus...

It doesn't work. Jehovah's Witnesses don't even believe in a hell, and they are gaining a lot of members each year and donations are on the rise. Donations are not even mandatory; you are just asked to donate if possible. The only incentive they use is positive incentive.

People will do anything for their country, even give their lives if it asks them to. Suicide bombers also do not blow them... (read more)

I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.

I'm not sure how to evaluate this statement. What do you mean by "keeping people from seeing Goatse"? Banning? Voluntarily choosing not to spread it? A filter like the one proposed in Australia that checks every request to the outside world?

That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.

That's pretty much a circular argument. How's the third-party verifiable evidence look?

Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for.

I'm for. I believe Tim Tyler is for.

Aumann agreement works in the case of hidden information - all you need are posteriors and common knowledge of the event alone.

Humans have this unfortunate feature of not being logically omniscient. In such cases where people don't see all the logical implications of an argument, we can treat those implications as hidden information. If this wasn't the case then the censorshi... (read more)

2Roko10yI had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl. Also, when I posted the material, I hadn't thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.

I haven't read fluffy (I have named it fluffy), but I'd guess it's an equivalent of a virus in a monoculture: every mode of thought has its blind spots, and so to trick respectable people on LW, you only need an idea that sits in the right blind spots. No need for general properties like "only infectious to stupid people."

Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention, but it's still entirely possible. This also explains the apparent resistance of outsiders, without need for Eliezer to be lying when he says he thinks fluffy was wrong.

Someone's been reading Terry Pratchett.

He always held that panic was the best means of survival. Back in the old days, his theory went, people faced with hungry sabre-toothed tigers could be divided into those who panicked and those who stood there saying, "What a magnificent brute!" or "Here pussy".

But still, there are some real places where danger is real, like the Bronx or Scientology or organized crime or walking across a freeway.

So, this may be a good way to approach the issue: loss to individual humans is, roughly speaking, finite. Thus, the correct approach to fear is to gauge risks by their chance of loss, and then discount if it's not fatal.

So, we should be much less worried by a 1e-6 risk than a 1e-4 risk, and a 1e-4 risk than a 1e-2 risk. If you are more scared by a 1e-6 risk than a 1e-2 risk, you're reasoning fallaciously.

Now, one mig... (read more)
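As a toy illustration of the "finite loss, then discount for non-fatality" rule sketched above (the loss values are placeholders I picked; only the comparison matters):

```python
# Expected loss = probability * loss, with loss normalized so that death = 1.0.
risks = [
    ("1e-6 chance of a fatal outcome", 1e-6, 1.0),
    ("1e-4 chance of a fatal outcome", 1e-4, 1.0),
    ("1e-2 chance of a bad but survivable outcome", 1e-2, 0.3),  # assumed discount
]

for name, p, loss in risks:
    print(f"{name}: expected loss = {p * loss:.1e}")
# Even after discounting for survivability, the 1e-2 risk dominates the others.
```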

But no more skeptical than is warranted by your prior probability.

Let's say that if aliens exist, a reliable Tim has a 99% probability of saying they do. If they don't, he has a 1% probability of saying they do.

An unreliable Tim has a 50/50 shot in either situation.

My prior was 50/50 reliable/unreliable, and 1,000,000:1 don't exist vs. exist, so prior weights:

  • reliable, exist: 1
  • unreliable, exist: 1
  • reliable, don't exist: 1,000,000
  • unreliable, don't exist: 1,000,000

Updates after he says they do:

  • reliable, exist: .99
  • unreliable, exist: .5
  • reliable, don't exist: 10,00... (read more)
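For what it's worth, the (truncated) update above can be carried through directly from the stated numbers; the sketch below is just that arithmetic (the hypothesis labels are mine), nothing beyond what the comment sets up:

```python
from fractions import Fraction

# Prior weights: 50/50 reliable vs. unreliable Tim, 1,000,000:1 against aliens existing.
prior = {
    ("reliable", "exist"): Fraction(1),
    ("unreliable", "exist"): Fraction(1),
    ("reliable", "dont_exist"): Fraction(1_000_000),
    ("unreliable", "dont_exist"): Fraction(1_000_000),
}

# P(Tim says "aliens exist" | hypothesis): 99%/1% if reliable, 50/50 if unreliable.
likelihood = {
    ("reliable", "exist"): Fraction(99, 100),
    ("unreliable", "exist"): Fraction(1, 2),
    ("reliable", "dont_exist"): Fraction(1, 100),
    ("unreliable", "dont_exist"): Fraction(1, 2),
}

weights = {h: prior[h] * likelihood[h] for h in prior}
total = sum(weights.values())

p_exist = (weights[("reliable", "exist")] + weights[("unreliable", "exist")]) / total
p_reliable = (weights[("reliable", "exist")] + weights[("reliable", "dont_exist")]) / total

print(float(p_exist))     # ~2.9e-06: aliens still almost certainly don't exist
print(float(p_reliable))  # ~0.02: Tim's claim mostly downgrades his reliability
```

So the testimony barely moves the belief about aliens, but it sharply downgrades the belief that Tim is reliable, which seems to be the point.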

That could only apply to your original post, not subsequent stuff.

4Roko10yRight. Bottle. Genie.

I disagree with most of that analysis. I assume machine intelligence will catalyse its own creation. I fully expect that some organisations will stick with secret source code. How could the probability of that possibly be as low as 0.8!?!

I figure that use of open source software is more likely to lead to a more even balance of power - and less likely to lead to a corrupt organisation in charge of the planet's most advanced machine intelligence efforts. That assessment is mostly based on the software industry to date - where many of the worst abuses app... (read more)

3jimrandomh10yThere are two problems with this reasoning. First, you have the causality backwards: makers of open-source software are less abusive than makers of closed-source software not because open-source is such a good safeguard, but because the sorts of organizations that would be abusive don't open source in the first place. And second, if there is an unethical AI running somewhere, then forking the code will not save humanity. Forking is a defense against not having good software to use yourself; it is not a defense against other people running software that does bad things to you.
[-][anonymous]10y 3

The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it.

That's definitely the root of the problem. In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong.

If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of

... (read more)
3JGWeissman10yI have a response to this that I don't actually want to say, because it could make the idea more dangerous to those who have heard about it but are currently safe due to not fully understanding it. I find that predicting that this sort of thing will happen makes me reluctant to discuss this issue, which may explain why, of those who are talking about it, most seem to think the banning was wrong. Given that there has been one banned post, I think that his mistakes are much less of a problem than overwrought concern about his mistakes.
3Vladimir_Nesov10yWhy do you believe that? FAI is full of potential for dangerous ideas. In its full development, it's an idea with the power to rewrite 100 billion galaxies. That's gotta be dangerous.
9[anonymous]10yLet me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.
9Eliezer Yudkowsky10yPretty much correct in this case. Roko's original post was, in fact, wrong; correctly programmed FAIs should not be a threat.

(FAIs shouldn't be a threat, but a theory to create a FAI will obviously have at least potential to be used to create uFAIs. FAI theory will have plenty of dangerous ideas.)

5XiXiDu10yI want to highlight at this point how you think about similar scenarios [http://lesswrong.com/lw/kn/torture_vs_dust_specks/uf7?c=1]: That isn't very reassuring. I believe that if you had the choice of either letting a Paperclip maximizer burn the cosmic commons or torturing 100 people, you'd choose to torture 100 people. Wouldn't you? They are always a threat to some beings - for example, beings who oppose CEV or other AIs. Any FAI that would run a human version of CEV would be a potential existential risk to any alien civilisation. If you accept all this possible oppression in the name of what is subjectively friendly, how can I be sure that you don't favor torture for some humans that support CEV, in order to ensure it? After all, you already allow for the possibility that many beings are being oppressed or possibly killed.
3wedrifid10yThis seems to be true and obviously so.

Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about "anthropics", e.g. that if your fishing net has holes 2 inches big, don't expect to catch fish smaller than 2 inches wide.)

2timtyler10yI don't think you can really avoid anthropic ideas - or the universe stops making sense. Some anthropic ideas can be challenging - but I think we have got to try. Anyway, you did the critique - but didn't go for a supporting argument. I can't think of very much that you could say. We don't have very much idea yet about what's out there - and claims to know such things just seem over-confident.

So the thing to do in this situation is to ask them: "excuse me wtf are you doin?" And this has been done.

So far there's been no explanation, nor even acknowledgement of how profoundly stupid this looks. This does nothing to make them look smarter.

Of course, as I noted, a truly amazing Xanatos retcon is indeed not impossible.

[-][anonymous]10y 3

However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They'll make whatever use of the 80 billion galaxies that they can--will they be wasting them?

If Earth wins by a hair, or by a lot, we'll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.

I don't believe that it's the cause. I'm generally bad at guessing what people mean; I often need to be told explicitly. I don't believe it's the case with David Gerard's comments in this thread, though (do you disagree?). I believe it was more the case with waitingforgodel's comments today.

Much less so with David. David also expressed himself more clearly - or perhaps instead in a more compatible idiom.

I wish there were a reproducible way of inducing such emotional experiences, to experiment more with those states of mind.

While such things are never going to be perfectly tailored for the desired effect, MDMA invokes a related state. :)

Are you aware of any other deletions?

Here...

I'd like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bioweapons. Would you stay away from censoring such information in favor of free speech?

The subject matter here has a somewhat different nature, one that rather fits a more-people, more-probable pattern. The question is whether it is better to discuss it, so as to possibly resolve it, or to censor it... (read more)

4TheOtherDave10yStep 1. Write down the clearest non-dangerous articulation of the boundaries of the dangerous idea that I could. If necessary, make this two articulations: one that is easy to understand (in the sense of answering "is what I'm about to say a problem?") even if it's way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test. Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous that doesn't itself violate those boundaries.
Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that's more than just me, run my boundary articulation(s) past the group and edit as appropriate.
Step 3. Create a mechanism whereby people can request to be added to dangerous-idea (e.g., sending dangerous-idea-request).
Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).
Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.
That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn't.
4David_Gerard10yA real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky [http://www.wired.com/threatlevel/2008/07/details-of-dns/] - he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it. And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage. Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren't the only person in the whole world smart enough to find the flaw.

the disconnect between that and the real world.

Out of curiosity: what evidence would convince you that the fate of the entire universe does hang in the balance?

2Manfred10yNo human-comparable aliens, for one. Which seems awfully unlikely, the more we learn about solar systems.

The reasons weren't vague.

Of course this is just your assertion against mine since we're not going to actually discuss the reasons here.

5waitingforgodel10yThanks. Direct feedback is always appreciated. No need for you to tiptoe.

Do you have any reason to believe that it's more likely that a future dictator, or anyone else, will nuke the planet if you don't send a donation to Greenpeace than if you do?

It's evident you really need to read the post. He couldn't get people to answer hypotheticals in almost any circumstances and thought this was a defect in the people. Approximately everyone responded by pointing out that in the real world, the main use of hypotheticals is to use them against people politically. This would be precisely what happened with the factoid about Singer.

As well ask if there are hundred-dollar bills lying on sidewalks.

EDIT: 2 days after I wrote this, I was walking down the main staircase in the library, and lying on the central landing, highly contrasted against the floor, in completely clear view of 4 or 5 people who walked past it, was a dollar bill. I paused for a moment, reflecting on the irony that sometimes there are free lunches - and picked it up.

I was once chastised by a senior SingInst member for not being prepared to be tortured or raped for the cause.

Forget 'the cause' nonsense entirely. How far would you go just to avoid personally getting killed? How much torture would you accept per unit of chance that your personal contribution at the margin will prevent your near-term death?

Correct. However, the method I proposed does not involve redefining one's utility function, as it leaves terminal values unchanged. It simply recognizes that certain methods of achieving one's pre-existing terminal values are better than others, which leaves the utility function unaffected (it only alters instrumental values).

The method I proposed is similar to pre-commitment for a causal decision theorist on a Newcomb-like problem. For such an agent, "locking out" future decisions can improve expected utility without altering terminal values.... (read more)
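As a hedged illustration of that point, here is a small numerical sketch of a Newcomb-like setup; the predictor accuracy and the payoffs are assumed for the example and are not taken from the comment.

```python
# Illustrative only: pre-commitment on a Newcomb-like problem.
# Assumed numbers: predictor accuracy 0.9, $1,000,000 big box, $1,000 small box.

PREDICTOR_ACCURACY = 0.9
BIG_BOX = 1_000_000   # filled only if the predictor expects one-boxing
SMALL_BOX = 1_000     # always available to a two-boxer

def expected_payoff(one_box: bool, accuracy: float = PREDICTOR_ACCURACY) -> float:
    """Expected dollars, given that the predictor has already predicted the
    agent's disposition with the stated accuracy."""
    p_big_box_full = accuracy if one_box else 1 - accuracy
    return p_big_box_full * BIG_BOX + (0 if one_box else SMALL_BOX)

print(expected_payoff(one_box=False))  # default two-boxer: 101,000
print(expected_payoff(one_box=True))   # precommitted one-boxer: 900,000

# Locking in the one-boxing disposition before the prediction is made changes
# which instrumental policy gets executed, not what the agent terminally values.
```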

Depending on what you're planning to research, lack of access to university facilities could also be a major obstacle. If you have a reputation for credible research, you might be able to collaborate with people within the university system, but I suspect that making the initial break-in would be pretty difficult.

How hard is it to live off the dole in Finland? Also, consider non-academic research positions in think tanks and the like (including, of course, SIAI).

5Kaj_Sotala10yNot very hard in principle, but I gather it tends to be rather stressful, with payments occasionally not arriving when they're supposed to. Also, I couldn't avoid the feeling of being a leech, justified or not. Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of any besides the SIAI, and their resources are limited.
3[anonymous]10yMany people would steal food to save the lives of the starving, and that's illegal. Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.
2Eugine_Nier10yGiven the current economic situation in Europe, I'm not sure that's a good long-term strategy. Also, I suspect spending too long on the dole may cause you to develop habits that'll make it harder to work a paying job.

Your comment here killed the hostage.

It doesn't really address the question. In the A* algorithm the heuristic estimates of the objective function are supposed to be upper bounds on utility, not lower bounds. Furthermore, they are supposed to actually estimate the result of the complete computation - not to represent a partial computation exactly.

1timtyler10yReality check: a tree of possible futures is pruned at points before the future is completely calculated. Of course it would be nice to apply an evaluation function which represents the results of considering all possible future branches from that point on. However, getting one of those that produces results in a reasonable time would be a major miracle. If you look at things like chess algorithms, they do some things to get a more accurate utility valuation when pruning - such as checking for quiescence. However, they basically just employ a standard evaluation at that point - or sometimes a faster, cheaper approximation. If it is sufficiently bad, the tree gets pruned.
1Perplexed10yWe are living in the same reality. But the heuristic evaluation function still needs to be an estimate of the complete computation, rather than being something else entirely. If you want to estimate your own accumulation of pleasure over a lifetime, you cannot get an estimate of that by simply calculating the accumulation of pleasure over a shorter period - otherwise no one would undertake the pain of schooling motivated by the anticipated pleasure of high future income. The question which divides us is whether an extra 10 utils now is better or worse than an additional 11 utils 20 years from now. You claim that it is worse. Period. I claim that it may well be better, depending on the discount rate.
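A quick worked version of that comparison, with illustrative discount rates (only the 10-vs-11-in-20-years numbers come from the comment):

```python
# Present value of "11 utils in 20 years" under exponential discounting at
# annual rate r, compared against 10 utils now. Rates are illustrative.

def present_value(utils: float, years: float, r: float) -> float:
    return utils / (1 + r) ** years

for r in (0.0, 0.004, 0.005, 0.05):
    print(f"r={r:.3%}: 11 utils in 20y is worth {present_value(11, 20, r):.3f} now (vs 10)")

# Break-even rate: (11/10) ** (1/20) - 1, roughly 0.478% per year.
# Below it the later 11 utils win; above it the 10 utils now win.
```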
1timtyler10yThe point is that resource limitation makes these estimates bad estimates - and you can't do better by replacing them with better estimates because of ... resource limitation! To see how resource limitation leads to temporal discounting, consider computer chess. Powerful computers play reasonable games - but heavily resource limited ones fall for sacrifice plays, and fail to make successful sacrifice gambits. They often behave as though they are valuing short-term gain over long term results. A peek under the hood quickly reveals why. They only bother looking at a tiny section of the game tree near to the current position! More powerful programs can afford to exhaustively search that space - and then move on to positions further out. Also the limited programs employ "cheap" evaluation functions that fail to fully compensate for their short-term foresight - since they must be able to be executed rapidly. The result is short-sighted chess programs. That resource limitation leads to temporal discounting is a fairly simple and general principle which applies to all kinds of agents.
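A small sketch of the effect being described, using a hypothetical game tree and a plain depth-limited look-ahead (not full adversarial minimax); the tree, the static evaluations, and the payoffs are mine, not from the comment. The shallow searcher never sees the delayed payoff of the sacrifice line, so it behaves as if it discounts that payoff to nothing.

```python
# Illustrative only: depth-limited search with a cheap static evaluation
# effectively "discounts" payoffs that lie beyond its horizon.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Node:
    static_eval: float                       # cheap evaluation if search stops here
    children: Dict[str, "Node"] = field(default_factory=dict)

def search(node: Node, depth: int) -> float:
    """Best value reachable within `depth` plies of look-ahead."""
    if depth == 0 or not node.children:
        return node.static_eval
    return max(search(child, depth - 1) for child in node.children.values())

# "Safe" line: a small immediate gain and nothing more afterwards.
safe = Node(1.0)

# "Sacrifice" line: the static evaluation says we are down material (-3);
# the payoff (+10) only appears three plies further in.
sacrifice = Node(-3.0, {
    "opponent takes": Node(-3.0, {
        "quiet move": Node(-3.0, {
            "winning attack": Node(10.0),
        }),
    }),
})

root = Node(0.0, {"keep material": safe, "sacrifice": sacrifice})

for depth in (1, 2, 4):
    best = max(root.children, key=lambda move: search(root.children[move], depth - 1))
    print(f"depth {depth}: plays '{best}'")
# Depths 1 and 2 keep material; only the 4-ply search finds the sacrifice.
```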
3Perplexed10yWhy do you keep trying to argue against discounting using an example where discounting is inappropriate by definition? The objective in chess is to win. It doesn't matter whether you win in 5 moves or 50 moves. There is no discounting. Looking at this example tells us nothing about whether we should discount future increments of utility in creating a utility function. Instead, you need to look at questions like this: An agent plays go in a coffee shop. He has the choice of playing slowly, in which case the games each take an hour and he wins 70% of them. Or, he can play quickly, in which case the games each take 20 minutes, but he only wins 60% of them. As soon as one game finishes, another begins. The agent plans to keep playing go forever. He gains 1 util each time he wins and loses 1 util each time he loses. The main decision he faces is whether he maximizes utility by playing slowly or quickly. Of course, he has infinite expected utility however he plays. You can redefine the objective to be maximizing utility flow per hour and still get a 'rational' solution. But this trick isn't enough for the following extended problem: The local professional offers go lessons. Lessons require a week of time away from the coffee-shop and a 50 util payment. But each week of lessons turns 1% of your losses into victories. Now the question is: Is it worth it to take lessons? How many weeks of lessons are optimal? The difficulty here is that we need to compare the values of a one-shot (50 utils plus a week not playing go) with the value of an eternal continuous flow (the extra fraction of games per hour which are victories rather than losses). But that is an infinite utility payoff from the lessons, and only a finite cost, right? Obviously, the right decision is to take a week of lessons. And then another week after that. And so on. Forever. Discounting of future utility flows is the standard and obvious way of avoiding this kind of problem and paradox. But now let us see wh
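Working the coffee-shop numbers through (the figures are from the comment above; the code itself is just my illustration):

```python
# Utility flow per hour for the two playing speeds in the coffee-shop example:
# +1 util per win, -1 per loss, games played back to back.

def utils_per_hour(p_win: float, game_minutes: float) -> float:
    games_per_hour = 60 / game_minutes
    return games_per_hour * (p_win * 1 + (1 - p_win) * -1)

print(utils_per_hour(0.7, 60))  # slow play: 0.4 utils/hour
print(utils_per_hour(0.6, 20))  # fast play: 0.6 utils/hour

# Without discounting, a week of lessons costs a finite amount (50 utils plus a
# week of forgone play) but buys a permanent increase in the flow - an infinite
# benefit - so "take one more week of lessons" always wins, forever. Discounting
# the future flow at some rate r makes each week's benefit finite, so an optimal
# number of weeks exists.
```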
1timtyler10yTemporal discounting is about valuing something happening today more than the same thing happening tomorrow. Chess computers do, in fact, discount. That is why they prefer to mate you in twenty moves rather than a hundred. The values of a chess computer do not just tell it to win. In fact, they are complex - e.g. Deep Blue had an evaluation function that was split into 8,000 parts [http://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29]. Operation consists of maximising the utility function, after foresight and tree pruning. Events that take place in branches after tree pruning has truncated them typically don't get valued at all - since they are not foreseen. Resource-limited chess computers can find themselves preferring to promote a pawn sooner rather than later. They do so since they fail to see the benefit of sequences leading to promotion later.
1Alicorn10yCorrect me if I'm missing an important nuance, but isn't this just about whether one's utils are timeless?

He's changed his mind since. That makes it far, far less scary.

He has changed his mind about one technical point in meta-ethics. He now realizes that super-human intelligence does not automatically lead to super-human morality. He is now (IMHO) less wrong. But he retains a host of other (mis)conceptions about meta-ethics which make his intentions abhorrent to people with different (mis)conceptions. And he retains the arrogance that would make him dangerous to those he disagrees with, if he were powerful.

"... far, far less scary"? You ar... (read more)

Wow. That is scary. Do you have an estimated date on that bizarre declaration? Pre 2004 I assume?

There is no shortage of people with smoke coming out of their ears about the issue either.

Look, I don't just mention fires and floods. I mention sea-level rise, coral reef damage, desertification, heatstroke, and a number of other disadvantages. However, the disadvantages of GW are not the main focus of my article - you can find them on a million other web sites.

It's true that there are people with an unrealistic view of the dangers that global warming poses, and hyperbolic reactions may stand to hurt the cause of getting people to take it seriously, bu... (read more)

What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that - but they are not very likely to hit us soon.

What (dis)advantages does this have compared to the traditional model?

I think this thread perfectly illustrates one disadvantage of doing research in an unstructured environment. It is so easy to be drawn away from the original question by irrelevant but bright and shiny distractions. Having a good academic adviser cracking the whip helps to keep you on track.

855 comments so far, with no sign of slowing down!

"Fairly quickly"? What if we don't? Do you expect reglaciation to occur within the next 100 years, 200 years? If not we can wait until we have the knowledge to pull off climate control safely. (And if we do get hit by an asteroid, the last thing we probably want is runaway climate change started when we didn't know what we were doing either.)

Frankly, reviewing the content of your site strongly leads me to suspect that your position is not credible; you consistently fail to accurately present what scientists consider to be the reasons for concern. When you claim that global warming is mostly fluff, I already have stronger reason than usual to suspect that you haven't come by your conclusion from an unbiased review of the data.

I would care much less about bothering to convince you if you were not hosting a website for the purpose of convincing others to support furthering global warming, and de... (read more)

WAKE UP!

While I can't speak for everyone, I strongly suspect presenting things like this makes your case less persuasive.

As I already noted, our best calculations indicate that we have already overshot the goal of preventing the next glaciation period. Moving away from the danger zone at a reasonably safe pace would mean a major reduction in greenhouse gas emissions.

3timtyler10yWe don't know that. The science of this isn't settled. The Milankovitch hypothesis of glaciation is more band-aid than theory. See: http://en.wikipedia.org/wiki/Milankovitch_cycles#Problems CO2 apparently helps - but even that is uncertain. I would want to see a very convincing case that we are far enough from the edge for the risk of reglaciation to be over before advocating hanging around on the reglaciation cliff-edge. Short of eliminating the ice caps, it is difficult to imagine what would be convincing. Those ice caps are potentially major bad news for life on the planet - and some industrial CO2 is little reassurance - since that could relatively quickly become trapped inside plants and then buried.
2Desrtopa10yThe global ice caps have been around for millions of years now. Life on earth is adapted to climates that sustain them. They do not constitute "major bad news for life on this planet." Reglaciation would pose problems for human civilization, but the onset of glaciation occurs at a much slower rate than the warming we're already subjecting the planet to, and as such even if raising CO2 levels above what they've been since before the glaciations began in the Pleistocene were not enough to prevent the next round, it would still be a less pressing issue. On a geological time scale, the amount of CO2 we've released could quickly be trapped in plants and buried, but with the state of human civilization as it is, how do you suppose that would actually happen quickly enough to be meaningful for the purposes of this discussion?
2NancyLebovitz10yIf reglaciation starts, could it be stopped by sprinkling coal dust on some of the ice?

A few things:

* I'm confused. On the one hand, you say knowing the popularity of various positions is important to you in deciding your own beliefs about something potentially dangerous to you and others. On the other hand, you say it's not worth seeking more information about and was just a throwaway line in an argument. I am having a hard time reconciling those two claims... you seem to be trying to have it both ways. I suspect I've misunderstood something important.

* I didn't think you were arguing for censorship. Or against it. Actually, I have long s... (read more)

5wedrifid10ySometimes that isn't a bad state to be in. Not having an agenda to serve frees up the mind somewhat! :)

For those curious: we do agree, but he went to quite a bit more effort in showing that than I did (and is similarly more convincing).

Tim on global warming: http://timtyler.org/end_the_ice_age/

1-line summary - I am not too worried about that either.

This is a pretty optimistic way of looking at it, but unfortunately it's quite unfounded. Current scientific consensus is that we've already released more than enough greenhouse gases to avert the next glacial period. Melting the ice sheets and thus ending the ice age entirely is an extremely bad idea if we do it too quickly for global ecosystems to adapt.

The Taleb quote doesn't qualify. (I won't comment on the others.)

I should have made it clearer that it is not my intention to indicate that I believe that those people, or crazy ideas in general, are wrong. But there are a lot of smart people out there who'll advocate opposing ideas. Relying on their reputation for being highly intelligent in order to follow through on their ideas is, in my opinion, not a very good idea in itself. I could just believe Freeman Dyson that existing simulation models of climate contain too much error to reliably predict future trends. I could bel... (read more)

Is Goatse supposed to be a big deal? Someone showed it to me and I literally said "who cares?"

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium, then the immune system would be the risk factor.

Generally, if your immune system is fighting something, you're already sick. Most pathogens are benign or don't have the keys to your locks. This might be a similar situation - the idea is only troubling if your lock fits it - and it seems like there would then be rational methods to erode that fear (just as the immune system mobs an infection).

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium, then the immune system would be the risk factor.

Er, are you describing rationalism (I note you say that and not "rationality") as susceptible to autoimmune disorders? More so than in this post?

Honestly? Doesn't like to argue about quantum mechanics. That I've seen :D Your posts seem to be about noticing where things fit into narratives, or introspection, or things other than esoteric decision theory speculations. If I had to come up with an idea that would trick Eliezer and Vladimir N into thinking it was dangerous, it would probably be barely plausible decision theory with a dash of many worlds.

Comments only leave a stub if they have replies that aren't deleted.

The argument does seem to function, but you shouldn't have used the term in a sense conflicting with the intended one.

You are a truth seeker? Really?

Yes, I'd choose to eat from the tree of the knowledge of good and evil and tell God to fuck off.

8timtyler10ySo, as a gift: 63,174,774 + 6,761,374,774 = 6,824,549,548. Or - if you don't like that particular truth - care to say which truths you do like?