As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

4) If your question refers to something available online, provide a link.

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

Suggestions

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

It's okay to attempt humor (but good luck, it's a tough crowd).

If a discussion breaks out about a question (f.ex. to ask for clarifications) and the original poster decides to modify the question, the top level comment should be updated with the modified question (make it easy to find your question, don't have the latest version buried in a long thread).

Update: Eliezer's video answers to 30 questions from this thread can be found here.


What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc.)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex. reading certain magazines or books, watching films, etc.) to focus on what is more important? etc.

0Liron
Ditto regarding your food diet?

During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

[-][anonymous]150

Somewhat related, AGI is such an enormously difficult topic, requiring intimate familiarity with so many different fields, that the vast majority of people (and I count myself among them) simply aren't able to contribute effectively to it.

I'd be interested to know if he thinks there are any singularity-related issues that are important to be worked on, but somewhat more accessible, that are more in need of contributions of man-hours rather than genius-level intellect. Is the only way a person of more modest talents can contribute through donations?

7MichaelVassar
Depends on what you mean by 'modest'. Probably 60% of Americans could contribute donations without serious lifestyle consequences and 20% of Americans could contribute over a quarter of their incomes without serious consequences. By contrast, only 10% have the reasoning abilities to identify the best large category of causes and only 1% have the reasoning abilities to identify the very best cause without a large luck element being involved. By working hard, most of that 1% could also become part of the affluent 20% of Americans who could make large donations. A similar fraction might be able to help via fund-raising efforts and by aggressively networking and sharing the contacts that they are able to build with us. A smaller but only modestly smaller fraction might be able to meaningfully contribute to SIAI's effort via seriously committed offers of volunteer effort, but definitely not via volunteer efforts uncoupled to serious commitment. Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

SIAI keeps supporting this attitude, yet I don't believe it, at least not in the way it's presented. A good mathematician who gets to understand the problem statement and succeeds in weeding out the standard misunderstandings can contribute as well as anyone at this stage, where we have no field. Creating a programme that would allow people to reliably get to work on the problem requires material to build upon, and there is still none, of any quality. Systematizing the connections with existing science, and trying to locate the place of the FAI project within it, requires only expertise in that science and an understanding of the FAI problem statement. At the very least, a dozen steps in, we'll have a useful curriculum to get folks up to speed in the right direction.

5MichaelVassar
We have some experience with this, but if you want to discuss the details more with myself or some other SIAI people we will be happy to do so, and probably to have you come visit some time and get some experience with what we do. You may have ways of contributing substantially, theoretically or managerially. We'll have to see.
0Curiouskid
Sorry for the bump, but... Perhaps what we should do is take all our funds and create a school for AI researchers. This would be an investment for the long haul. You and I may not be super geniuses, but I sure think I could raise some super geniuses. Also, I feel like this topic deserves more than one comment thread.
9John_Maxwell
I don't know about Eliezer, but I would be able to sacrifice quite a lot; perhaps all of art. If humanity spreads through the galaxy there will be way more than enough time for all that. It might. But their expected contribution would be much greater if they looked at the problem to see how they could contribute most effectively. No one's saying that you're not allowed to do something. Just that it's suboptimal under their utility function, and perhaps yours. My guess is that you overestimate how much of an altruist you are. Consider that lives can be saved using traditional methods for well under $1000. That means every time you spend $1000 on other things, your revealed preference is that having that stuff is more important to you than saving the life of another human being. If you're upset upon hearing this fact, then you're suffering from cognitive dissonance. If you're a true altruist, you'll be happy after hearing this fact, because you'll realize that you can be scoring much better on your utility function than you are currently. (Assuming for the moment that happiness corresponds with opportunities to better satisfy your utility function, which seems to be fairly common in humans.) Regardless of whether you're a true altruist, it makes sense to spend a chunk of your time on entertainment and relaxation to spend the rest more effectively. By the way, I would be interested to hear Eliezer address this topic in his video.
7MichaelVassar
It's a free country. You are allowed to do a lot, but it can only be optimal to do one thing.
4komponisto
Not necessarily; the maximum value of a function may be attained at more than one point of its domain. (Also, my use of the word "allowed" is clearly rhetorical/figurative. Obviously it's not illegal to work on things other than AI, and I don't interpret you folks as saying it should be.)
3MichaelVassar
Point taken. Also, of course, given a variety of human personalities and situations, the optimal activity for a given person can vary quite a bit. I never advocate asceticism.
1gwern
String theorists are at least somewhat plausible, but Michelangelo and Beethoven? Do you have any evidence that they actually helped the sciences progress? I've asked the same question in the past, and have not been able to adduce any evidence worth a damn. (Science fiction, at least, can try to justify itself as good propaganda.)
0komponisto
No, and I didn't claim they did. It was intended to be a separate question ("...string theorists? And [then, on another note], what of Michelangelo, Beethoven,....?").
0[anonymous]
Aren't these the same thing?

Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):

http://yudkowsky.net/obsolete/bookshelf.html

Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).

3alyssavance
See the Singularity Institute Reading List for some ideas.
[-][anonymous]320

What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?

Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive-enhancing drugs or brain fitness programs, are you neurotypical, and why didn't you attend school?

0roland
Great question and I think it ties in well with my one about autodidacticism: http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/1942

Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who agree with Robin's part (new future growth mode, get frozen...) agree with ours as well.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

5timtyler
Which claims? The SIAI collectively seems to think some pretty strange things to me. Many are to do with the scale of the risk facing the world. Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding. We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.
7MichaelVassar
You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding. Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?
-1timtyler
Some DOOM peddlers manage to get by. Probably most of them are currently in Hollywood, the finance world, or ecology. Machine intelligence is only barely on the radar at the moment - but that doesn't mean it will stay that way. I don't necessarily mean to suggest that these people are all motivated by money. Some of them may really want to SAVE THE WORLD. However, that usually means spreading the word - and convincing others that the DOOM is real and imminent - since the world must first be at risk in order for there to be SALVATION. Look at Wayne Bent (aka Michael Travesser), for example: "The End of The World Cult Pt.1" - http://www.youtube.com/watch?v=CvytVhqiO6E - THE END OF THE WORLD, but it seems to have more to do with sex than money.
1Zack_M_Davis
Any practical advice on how to overcome this failure mode, if and only if it is in fact a failure mode?
8Eliezer Yudkowsky
Who are we talking about besides you?
4RobinHanson
I'd consider important overlapping academic fields to be AI and long term economic growth; I base my claim about academic expert opinion on my informal sampling of such folks. I would of course welcome a more formal sampling.
9Eliezer Yudkowsky
Who's considered my main arguments besides you?
2RobinHanson
I'm not comfortable publicly naming names based on informal conversations. These folks vary of course in how much of the details of your arguments they understand, and of course you could always set your bar high enough to get any particular number of folks who have understood "enough."
5Eliezer Yudkowsky
Okay. I don't know any academic besides you who's even tried to consider the arguments. And Nick Bostrom et al., of course, but AFAIK Bostrom doesn't particularly disagree with me. I cannot refute what I have not encountered, I do set my bar high, and I have no particular reason to believe that any other academics are in the game. I could try to explain why you disagree with me and Bostrom doesn't.
5RobinHanson
Surely some on the recent AAAI Presidential Panel on Long-Term AI Futures considered your arguments to at least some degree. You could discuss why these folks disagree with you.
4Eliezer Yudkowsky
Haven't particularly looked at that - I think some other SIAI people have. I expect they'd have told me if there was any analysis that counts as serious by our standards, or anything new by our standards. If someone hasn't read my arguments specifically, then I feel very little need to explain why they might disagree with me. I find myself hardly inclined to suspect that they have reinvented the same arguments. I could talk about that, I suppose - "Why don't other people in your field invent the same arguments you do?"

You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they, as world class AI experts, disagree with your conclusions?

6Eliezer Yudkowsky
I'm sorry, but I don't really have a proper lesson plan laid out - although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue. If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn't matter if they'd done it on their own or by reading my stuff. E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff... with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn't clear from the presentation. Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:

  • What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?

    • What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of "implement a proxy intelligence"?

      • What fraction of them have thought carefully about when there might be future practical AI architectures that could do this?
      • What fraction use a process for answering questions about the category distinctions that will be known in the future, which uses as an unconscious default the category distinctions known in the present?
  • What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?

    • How much do those decision rule

... (read more)
3RobinHanson
From that AAAI panel's interim report: Given this description it is hard to imagine they haven't imagined the prospect of the rate of intelligence growth depending on the level of system intelligence.
9Eliezer Yudkowsky
I don't see any arguments listed, though. I know there's at least some smart people on that panel (e.g. Horvitz) so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on. Why are you so optimistic about this sort of thing, Robin? You're usually more cynical about what would happen when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don't know will be so unlike the cases we do know?
3RobinHanson
The point is that this is a subtle and central issue to engage, so I was suggesting that you consider describing your analysis more explicitly. Is there never any point in listening to academics on "silly" topics? Is there never any point in listening to academics who haven't explicitly told you how they've broken a problem down into modular parts, no matter how distinguished they are on related topics? Are people who have a modular parts analysis always a more reliable source than people who don't, no matter what their other features? And so on.

I confess, it doesn't seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn't high enough to do work, than actually working. If someone shows up with amazing analyses I haven't considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven't seen, when the prior is so much in favor of them having made a snap judgment, and it's not clear why if they've got a deep analysis they wouldn't just present it?

I think that on a purely pragmatic level there's a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn't seem like what ideal Bayesians would do.

3RobinHanson
You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?
5Eliezer Yudkowsky
...and I've held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/ I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior. Well (and I'm pretty sure this matches what I've been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn't mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I'm not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor I'd spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I've never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.) No such condition is remotely approached by disagreeing with the AAAI panel, so I don't think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
2RobinHanson
Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are "rational." Guess I should elaborate my view in a separate post.

There's certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another's meta-rationality). As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.

I don't actually spend time obsessing about that sort of thing except when you're asking me those sorts of questions - putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn't considered.

I'll say again: I think there's much to be said for the Traditional Rationalist ideal of - once you're at least inside a science and have enough expertise to eva... (read more)

You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have had to "show up" for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: Since you think you are better than them, you won't ask them for their reasons, and you won't make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won't, since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your "traditional" (non-Bayesian) rationality standard to declare you have no need to consider their opinions.

7Eliezer Yudkowsky
You're being slightly silly. I simply don't expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I'd immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it. FYI, I've talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem - I don't think we discussed hard takeoffs much per se. I certainly wouldn't have brushed him off if he'd started asking!

"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."

Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in a concise form in a published academic paper. So why not spend the time? Unless you think you are that much of an expert in the field as to not need the academic community. If that be the case where are your publications and where are your credentials, where is the proof of this expertise (expert being a term that is applied based on actual knowledge and accomplishments)?

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."

Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community.

I am sorry, I prefer to be blunt... that way there is no mistaking meanings...

2Alicorn
No.
0wedrifid
That 'probably not even then' part is significant. Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too.
3mormon2
"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it. Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. ..." "That 'probably not even then' part is significant." My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you're speaking from an organization like SIAI which does not have the people or money to get the job done. In fact SIAI doesn't have enough money to pay for the computing hardware to make human level AI. "Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too." If he doesn't agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI or not having enough time in the day to solve the problems that will be associated with constructing the AI. Not even mentioning the fact that when you close yourself to outside influence that much you often end up with ideas that are riddled with problems, that if someone on the outside had looked at the idea they would have pointed the problems out. If you have never taken an idea from idea to product this can be hard to understand.
9Eliezer Yudkowsky
And so the utter difference of working assumptions is revealed.
0CannibalSmith
Back of a napkin math:
10^4 neurons per supercomputer
10^11 neurons per brain
10^7 supercomputers per brain
1.3*10^6 dollars per supercomputer
1.3*10^13 dollars per brain
Edit: Disclaimer: Edit: NOT!
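A quick sanity check of that arithmetic in code (a minimal sketch; the per-supercomputer figures are just the claims quoted above, not independently sourced numbers):

```python
# Back-of-napkin check of the figures above. All inputs are the parent
# comment's assumptions, not independently verified numbers.
neurons_per_supercomputer = 1e4     # claimed neurons simulable per supercomputer
neurons_per_brain = 1e11            # rough neuron count of a human brain
dollars_per_supercomputer = 1.3e6   # assumed cost of one such supercomputer

supercomputers_per_brain = neurons_per_brain / neurons_per_supercomputer
dollars_per_brain = supercomputers_per_brain * dollars_per_supercomputer

print(f"{supercomputers_per_brain:.0e} supercomputers per brain")  # 1e+07
print(f"{dollars_per_brain:.1e} dollars per brain")                # 1.3e+13
```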
3wedrifid
Another difference in working assumptions.
0CannibalSmith
It's a fact stated by the guy in the video, not an assumption.
0wedrifid
No need to disclaim, your figures are sound enough and I took them as a demonstration of another rather significant difference between the assumptions of Eliezer and mormon2 (or mormon2's sources).
0wedrifid
I have. I've also failed to take other ideas to products and so agree with that part of your position, just not the argument as it relates to context.
-3timtyler
If there is a status pissing contest, they started it! ;-) "On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population." Agree with them that there is much scaremongering going on in the field - but disagree with them about there not being much chance of an intelligence explosion.
0timtyler
I wondered why these folk got so much press. My guess is that the media probably thought the "AAAI Presidential Panel on Long-Term AI Futures" had something to do with a report commissioned indirectly for the country's president. In fact it just refers to the president of their organisation. A media-savvy move - though it probably represents deliberately misleading information.
2RobinHanson
Almost surely world class academic AI experts do "know something you do not" about the future possibilities of AI. To declare that topic to be your field and them to be "outside" it seems hubris of the first order.
4wedrifid
This conversation seems to be following what appears to me to be a trend in Robin and Eliezer's (observable by me) disagreements. This is one reason I would be fascinated if Eliezer did cover Robin's initial question, informed somewhat by Eliezer's interpretation. I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory, my googling failed me.) This is a difference that extends far beyond just R & E, and Eliezer has hinted at insights that intrigue me.
2Eliezer Yudkowsky
Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um... there are various complicated ways I could put this... but, well, so what? (No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)
3RobinHanson
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as "snap" because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality of disagreement literature as irrelevant because you don't think those you disagree with are rational, but they probably don't think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.

How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not?

You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI, produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability - as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.

but they clearly disagree with that assessment of how much consideration your arguments deserve.

They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.

Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians

Science only works when you use it; scientific authority derives from science. If you've got Lord Kelvin running around saying that you can't have flying m... (read more)

8RobinHanson
This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view - what uninformed neutral but intelligent and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outside viewer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.

Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it's hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they've simply never given it much consideration, either because they're entirely unaware of it or assume it's some kind of sci-fi cult practice, and they don't take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.

Similarly, I think most of the apparent "disagreement" about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI's work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.

3RobinHanson
Cryonics is also a good case to analyze what an outsider should think, given what they can see. But of course "they laughed at Galileo too" is hardly a strong argument for contrarian views. Yes sometimes contrarians are right - the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.
3komponisto
Outsiders can tell when contrarians are right by assessing their arguments, once they've decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom's arguments were worth listening to, so were his. (This is more or less my own story; and I started reading Overcoming Bias because it appeared on Scott Aaronson's blogroll.) Hence it seems that Yudkowsky's affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware -- and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar rather than Yudkowsky.
3RobinHanson
No for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is very surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.
5komponisto
Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-"traditional". As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don't think Eliezer currently has enough status to expect rebuttals.
-5timtyler
2timtyler
Re: "They have no idea I exist." Are you sure? You may be underestimating your own fame in this instance.
0Thomas
Say that "Yudkowsky has no real clue" and that those "AI academics are right"? Just another crackpot among many "well educated", no big thing. Not worth to mention, almost. Say, that this crackpot is of the Edisonian kind! In that case it is something well worth to mention. Important enough to at least discuss with him ON THE TOPICS, and not on some meta level. Meta level discussion is sometimes (as here IMHO), just a waste of time.
2Mike Bishop
I'm not sure what you mean by your first few sentences. But I disagree with your last two. It is good for me to see this debate.
-4Thomas
You get zilch in the case that Hanson (and the Academia) is right. Zero in the informative sense. You get quite a bit if Yudkowsky is right. Verifying Hanson (& the so-called Academia) means no new information.
2Vladimir_Nesov
You get not needing to run around trying to save the world and a pony if Hanson is right. It's not useful to be deluded.
-5Thomas
2timtyler
From that AAAI document: "The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes". "Radical outcomes" seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.
-1timtyler
The AAAI interim report is really too vague to bother much with - but I suspect they are making another error. Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario - but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool - and then we will want to become more like them.
1timtyler
Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains - and because the capability to satisfy those selection pressures has finally arrived. The process has already resulted in enormous data-centres, the size of factories. As I have said: http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
1timtyler
Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine - rather than gradually arising out of the growth of today's already self-improving economies and industries.
0Thomas
I think, both ways are still open. The intelligence explosion from a self-improving economy and the intelligence explosion from a fringe of this process.
-2timtyler
Did you take a look at my "The Intelligence Explosion Is Happening Now"? The point is surely a matter of history - not futurism.
0Thomas
Yes and you are right.
0timtyler
Great - thanks for your effort and input.
1timtyler
Re: "overall skepticism about the prospect of an intelligence explosion"...? My guess would be that they are unfamiliar with the issues or haven't thought things through very much. Or maybe they don't have a good understanding of what that concept refers to (see link to my explanation - hopefully above). They present no useful analysis of the point - so it is hard to know why they think what they think. The AAAI seem to have publicly come to these issues later than many in the community - and it seems to be playing catch-up.
0timtyler
It looks as though we will be hearing more from these folk soon: "Futurists' report reviews dangers of smart robots" http://www.pittsburghlive.com/x/pittsburghtrib/news/pittsburgh/s_651056.html It doesn't sound much better than the first time around.
1JoshuaFox
It must be possible to engage at least some of these people in some sort of conversation to understand their positions, whether a public dialog as with Scott Aaronson or in private.
1timtyler
Chalmers reached some odd conclusions. Probably not as odd as his material about zombies and consciousness, though.
2timtyler
I have a theory about why there is disagreement with the AAAI panel: The DOOM peddlers gather funding from hapless innocents - who hope to SAVE THE WORLD - while the academics see them as bringing their field into disrepute, by unjustifiably linking their field to existential risk, with their irresponsible scaremongering about THE END OF THE WORLD AS WE KNOW IT. Naturally, the academics sense a threat to their funding - and so write papers to reassure the public that spending money on this stuff is Really Not As Bad As All That.
0[anonymous]
They do?
4Eliezer Yudkowsky
Actually, on further recollection, Steve Omohundro and Peter Cheeseman would probably count as academics who know the arguments. Mostly I've talked to them about FAI stuff, so I'm actually having trouble recalling whether they have any particular disagreement with me about hard takeoff. I think that w/r/t Cheeseman, I had to talk to Cheeseman for a while before he started to appreciate the potential speed of a FOOM, as opposed to just the FOOM itself which he considered obvious. I think I tried to describe your position to Cheeseman and Cheeseman thought it was pretty implausible, but of course that could just be the fact that I was describing it from outside - that counts for nothing in my view until you talk to Cheeseman, otherwise he's not familiar enough with your arguments. (See, the part about setting the bar high works both ways - I can be just as fast to write off the fact of someone else's disagreement with you, if they're insufficiently familiar with your arguments.) I'm not sure I can recall what Omohundro thinks - he might be intermediate between yourself and myself...? I'm not sure how much I've talked hard takeoff per se with Omohundro, but he's certainly in the game.
2MichaelVassar
I think Steve Omohundro disagrees about the degree to which takeoff is likely to be centralized, due to what I think are the libertarian impulses I mentioned earlier.
0[anonymous]
Does Robin Hanson fall under 'et al.'? I remember on OB that he attempted to link the 2 fields on at least 1 or 2 occasions, and those were some of the most striking examples of disagreements between you two.
1StefanPernar
Me - whether I qualify as an academic expert is another matter entirely, of course.
2ChrisHibbert
Do you disagree with Eliezer substantively? If so, can you summarize how much of his arguments you've analyzed, and where you reach different conclusions?
0StefanPernar
Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search for my blog for a detailed analysis and a summary of core similarities and differences in our premises and conclusions.
6AdeleneDawner
Assuming I have the correct blog, these two are the only entries that mention Eliezer by name. Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.
7Furcas
From the second blog entry linked above: Heh.
8RobinZ
This quotation accurately summarizes the post as I understand it. (It's a short post.) I think I speak for many people when I say that assumption A requires some evidence. It may be perfectly obvious, but a lot of perfectly obvious things aren't true, and it is only reasonable to ask for some justification.
7AdeleneDawner
... o.O Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general. The probable source of the confusion is discussed in the comments - Stefan's only talking about minds that've been subjected to the kind of evolutionary pressure that tends to produce compassion. He even says himself, "The argument is valid in a “soft takeoff” scenario, where there is a large pool of AIs interacting over an extended period of time. In a “hard takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer."
4RobinZ
Ah - that's interesting. I hadn't read the comments. That changes the picture, but by making the result somewhat less relevant. (Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)
2AdeleneDawner
Ah. That's not how I usually see the word used.
1RobinZ
It's my descriptivist side playing up - my (I must admit) intuition is that when people say that some thesis is "obvious", they mean that they reached this bottom line by ... well, system 1 thinking. I don't assume it means that the obvious thesis is actually correct, or even universally obvious. (For example, it's obvious to me that human beings are evolved, but that's because it's a cached thought I have confidence in through system 2 thinking.) Actually, come to think: I know you've made a habit of reinterpreting pronouncements of "good" and "evil" in some contexts - do you have some gut feeling for "obvious" that contradicts my read?
4AdeleneDawner
I generally take 'obvious' to mean 'follows from readily-available evidence or intuition, with little to no readily available evidence to contradict the idea'. The idea that compassion is universal fails on the second part of that. The definitions are close in practice, though, in that most peoples' intuitions tend to take readily available contradictions into account... I think. ETA: Oh, and 'obviously false' seems to me to be a bit of a different concept, or at least differently relevant, given that it's easier to disprove something than to prove it. If someone says that something is obviously true, there's room for non-obvious proofs that it's not, but if something is obviously false (as 'compassion is universal' is), that's generally a firm conclusion.
2RobinZ
Yes, that makes sense - even if mine is a better description of usage, from the standpoint of someone categorizing beliefs, I imagine yours would be the better metric. ETA: I'm not sure the caveat is required for "obviously false", for two reasons. 1. Any substantive thesis (a category which includes most theses that are rejected as obviously false) requires less evidence to be roundly disconfirmed than it does to be confirmed. 2. As Yvain demonstrated in Talking Snakes, well-confirmed theories can be "obviously false", by either of our definitions. It's true that it usually takes less effort to disabuse someone of an obviously-true falsity than to convince them of an obviously-false truth, but I don't think you need a special theory to support that pattern.
2AdeleneDawner
I've been thinking about the obviously true/obviously false distinction some more, and I think I've figured out why they feel like two different concepts. 'Obviously', as I use it, is very close to 'observably'. It's obviously true that the sky is blue where I am right now, and obviously false that it's orange, because I can see it. It's obviously true that the sky is usually either blue, white, or grey during the day (post-sunrise, pre-sunset), because I've observed the sky many times during the day and seen those colors, and no others. 'Apparently', as I use it, is very similar to 'obviously', but refers to information inferred from observed facts. The sky is apparently never orange during the day, because I've personally observed the sky many times during the day and never seen it be that color. I understand that it can also be inferred from certain facts about the world (composition of the atmosphere and certain facts about how light behaves, I believe) that the sky will always appear blue on cloudless days, so that's also apparently true. 'Obviously false' covers situations where the theory makes a prediction that is observably inaccurate, as this one did. 'Apparently false' covers situations where the theory makes a prediction that appears to be inaccurate given all the available information, but some of the information that's available is questionable (I consider inferences questionable by default - if nothing else, it's possible for some relevant state to have been overlooked; what if the composition of the atmosphere were to change for some reason?) or otherwise doesn't completely rule out the possibility that the theory is true. Important caveat: I do use those words interchangeably in conversation, partly because of the convention of avoiding repeating words too frequently and partly because it's just easier - if I were to try to be that accurate every time I communicated, I'd run out of spoons(pdf) and not be able to communicate at all. Also, having
2AdeleneDawner
It also has the advantage of making it clear that the chance that the statement is accurate is dependent on the competence of the person making the statement - people who are more intelligent and/or have more experience in the relevant domain will consider more, and more accurate, evidence to be readily available, and may have better intuitions, even if they are sticking to system 1 thought. I suppose they don't need different wordings, but they do feel like different concepts to me. *shrug* (As I've mentioned elsewhere, I don't think in words. This is not an uncommon side-effect of that.)
0StefanPernar
From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis". I feel the other way around at the moment. Namely "some people, observing the statement, may evaluate it as false without performing any complex analysis"
-9StefanPernar
-1StefanPernar
Perfectly reasonable. But the argument - the evidence if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there none the less.
-1RobinZ
I think I'd probably agree with Kaj Sotala's remarks if I had read the passages she^H^H^H^H xe had, and judging by your response in the linked comment, I think I would still come to the same conclusion as she^H^H^H^H xe. I don't think your argument actually cuts with the grain of reality, and I am sure it's not sufficient to eliminate concern about UFAI. Edit: I hasten to add that I would agree with assumption A in a sufficiently slow-takeoff scenario (such as, say, the evolution of human beings, or even wolves). I don't find that sufficiently reassuring when it comes to actually making AI, though. Edit 2: Correcting gender of pronouns.
1StefanPernar
Full discussion with Kaj at her http://xuenay.livejournal.com/325292.html?view=1229740 live journal with further clarifications by me.
3Cyan
Kaj is male (or something else).
3AdeleneDawner
I was going to be nice and not say anything, but, yeah.
-2StefanPernar
Since when are 'heh' and 'but, yeah' considered proper arguments guys? Where is the logical fallacy in the presented arguments beyond you not understanding the points that are being made? Follow the links, understand where I am coming from and formulate a response that goes beyond a three or four letter vocalization :-)
4wedrifid
The claim "[Compassion is a universal value] = true. (as we have every reason to believe)" was rejected, both implicitly and explicitly by various commenters. This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief. To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.
-1StefanPernar
"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief." But the reasons to change ones view are provided on the site, yet rejected without consideration. How about you read the paper linked under B and should that convince you, maybe you have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links in the about page of my blog. Deal?
5wedrifid
I have read B. It isn't bad. The main problem I have with it is that the language used blurs the line between "AIs will inevitably tend to" and "it is important that the AI you create will". This leaves plenty of scope for confusion. I've read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute 'Rational' morality. This passage was a good illustration: You see, I plan to eat my cake but don't expect to be able to keep it. My set of values are utterly whimsical (in the sense that they are arbitrary and not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.
-6StefanPernar
5timtyler
Re: "Assumption A: Human (meta)morals are not universal/rational. Assumption B: Human (meta)morals are universal/rational. Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were." (source: http://rationalmorality.info/?p=112) I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals. I had a look some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position. For example, consider this: "A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents." http://rationalmorality.info/?p=8
1StefanPernar
"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals." Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique. I realize that one will have to drill deep into my arguments to understand and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source were a particular point is made and certain assumptions are being defended. If you have a particular problem with any of the core assumptions and conclusions I prefer you voice them not as a blatant rejection of an out of context comment here or there but based on the fundamentals. Reading my blogs in sequence will certainly help although I understand that some may consider that an unreasonable amount of time investment for what seems like superficial nonsense on the surface. Where is your argument against my points Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I only got an ad hominem and an argument from personal incredulity.
0timtyler
This isn't my favourite topic - whereas you have a whole blog about it - so you are probably prepared to discuss things for far longer than I am likely to stay interested. Anyway, it seems that I do have some things to say - and we are rather off topic here. So, for my response, see: http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl
0[anonymous]
I had a look over some of the other material too. It left me with the urge to hunt down these weakling Moral Rational Agents and tear them apart. Perhaps because I can create more paperclips out of their raw materials than out of their compassionate compromises, but perhaps because spite is a universal value (as we have every reason to believe). From a slightly different topic on the same blog, I must assert that "Don’t start to cuddle if she likes it rough." is not a tautological statement.
haig260

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend to arrive at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

1timtyler
See Eli on video, 50 seconds in: http://www.youtube.com/watch?v=0A9pGhwQbS0

What's your advice for Less Wrong readers who want to help save the human race?

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

0anonym
If this is addressed, I'd like to know why the change of mind from the stated position. Was there a flaw in the original argument, did you get new evidence that caused updating of probabilities that changed the original conclusion, etc.?

Why do you have a strong interest in anime, and how has it affected your thinking?

-1[anonymous]
I think the answer to the first part of the question is the same simple one it is for most bright/abnormal people: it offers subcultural values.

Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?

3[anonymous]
Earlier today, I pondered whether this infinite set atheism thing is something Eliezer merely claims to believe as some sort of test of basic rationality. It's a belief that, as far as I can tell, makes no prediction. But here's what I predict that I would say if I had Eliezer's opinions and my mathematical knowledge: I'm a fan of thinking of ZFC as being its countably infinite model, in which the class of all sets is enumerable, and every set has a finite representation. Of course, things like the axiom of infinity and Cantor's diagonal argument still apply; it's just that "uncountably infinite set" simply means "set for which no bijection with the natural numbers is contained in the model". (Yes, ZFC has a countable model, assuming it's consistent. I would call this weird, but I hate admitting that any aspect of math is counterintuitive.)

ZFC's countable model isn't that weird.

Imagine a computer programmer, watching a mathematician working at a blackboard. Imagine asking the computer programmer how many bytes it would take to represent the entities that the mathematician is manipulating, in a form that can support those manipulations.

The computer programmer will do a back of the envelope calculation, something like: "The set of all natural numbers" is 30 characters, and essentially all of the special symbols are already in Unicode and/or TeX, so probably hundreds, maybe thousands of bytes per blackboard, depending. That is, the computer programmer will answer "syntactically".

Of course, the mathematician might claim that the "entities" that they're manipulating are more than just the syntax, and are actually much bigger. That is, they might answer "semantically". Mathematicians are trained to see past the syntax to various mental images. They are trained to answer questions like "how big is it?" in terms of those mental images. A math professor asking "How big is it?" might accept answers like "it's a subset of the integers" or "It's a superse... (read more)

0Tyrrell_McAllister
Models are semantics. The whole point of models is to give semantic meaning to syntactic strings. I haven't studied the proof of the Löwenheim–Skolem theorem, but I would be surprised if it were as trivial as the observation that there are only countably many sentences in ZFC. It's not at all clear to me that you can convert the language in which ZFC is expressed into a model for ZFC in a way that would establish the Löwenheim–Skolem theorem.
5Johnicholas
I have studied the proof of the (downward) Löwenheim–Skolem theorem - as an undergraduate, so you should take my conclusions with a grain of salt - but my understanding was exactly that the proof builds a model out of the syntax of the first-order theory in question. I'm not saying that the proof is trivial - what I'm saying is that holding Gödel-numberability and the possibility of a strict formalist interpretation of mathematics in your mind provides a helpful intuition for the result.
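(For anyone who wants the statement being gestured at - this is a paraphrase from memory, not a careful treatment: one common form of the downward Löwenheim–Skolem theorem says that if T is a consistent first-order theory in a countable language, then T has a model of cardinality at most aleph-zero. In the Henkin-style proof, the elements of that model are essentially equivalence classes of the theory's own closed terms, which is why "the model is built out of the syntax" is a fair intuition - and why "ZFC has a countable model, if ZFC is consistent" follows.)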
6Eliezer Yudkowsky
I've said this before in many places, but I simply don't do that sort of thing. If I want to say something flawed just to see how my readers react to it, I put it into the mouth of a character in a fictional story; I don't say it in my own voice.
0[anonymous]
I swear I meant to say that, knowing you, you probably wouldn't do such a thing.
0komponisto
Uh-oh; if he takes this up, I may finally have to write that post I promised back in June!
0Nominull
Well obviously if a set is finite, no amount of taking its power set is going to change that fact.
1Psy-Kosh
I meant "the set of all natural numbers", IIRC, he's explicitly said he's not an ultrafinitist, so either he considers that as an acceptable infinite set, or he considers the natural numbers to exist, but not the set of them, or something. I meant "if you accept countable infinities, where and how do you consider the whole cantor hierarchy to break down?"
1DanArmak
What would it even mean for the natural numbers (the entire infinity of them) to "exist"? What makes a set acceptable or not?
2cousin_it
This question sounds weird to me. I find it best not to speak about "existence", but speak instead of logical models that work. For example, we don't know if our concept of integers is consistent, but we have evolved a set of tools for reasoning about it that have been quite useful so far. Now we try to add new reasoning tools, new concepts, without breaking the system. For example, if we imagine "the set of all sets" and apply some common reasoning to it, we reach Russell's paradox; but we can't feed this paradox back into the integers to demonstrate their inconsistency, so we just throw the problematic concept away with no harm done.
1DanArmak
It sounds weird to me too, which is why I asked it - because Psy-Kosh said EY said something about integers, or the set of integers, existing or not.
0Douglas_Knight
secondary sources? bah! LMGTFY (or the full experience)

What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?

2dclayh
Excellent; I was going to ask that myself. Clearly Eliezer wanted an example to support his oft-repeated contention that the future, like the past, will be filled with people whose values seem abhorrent to us. But why he chose that particular example I'd like to know. Was it the most horrific(-sounding) thing he could come up with some kind of reasonable(-sounding) justification for?
1Alicorn
It's not at all clear to me that coming up with a reasonable-sounding justification was part of the project. One isn't provided in the story, one wasn't presented as part of an answer to an earlier question of mine, etc. etc.
6AdeleneDawner
here
0timtyler
Perhaps see: http://en.wikipedia.org/wiki/Traumatic_insemination
-1Alicorn
This isn't an explanation at all.
6AdeleneDawner
The purpose was to test the waters for another story he was developing; there probably wasn't an in-story purpose to it beyond the obvious one of making it clear that the younger people had a very different worldview than the one we have now. He's been unwilling to give more detail because the reaction to the concept's insertion in that story was too negative to allow him to safely (without reputational consequence, I assume) share the apparently much more questionable other story, or, seemingly, any details about it. I did upvote your question, by the way. I want to hear more about that other story.
2SilasBarta
I don't see it doing much good to his reputation to stay silent either, given the inflammatory nature of the remark. Sure, people will be able to quote that part to trash Eliezer either way, but that's a lot worse when there is no reasonable clarification that someone could link in his defense. Yes, I voted Alicorn's question up. I want to know too.
4AdeleneDawner
Actually, there's a very good clarification of his views on rape in the context of our current society later in that same comment thread that could be linked to. It didn't seem to be relevant to this conversation, though.
-1SilasBarta
That's certainly an explanation. "Very good" and "clarifying" are judgment calls here...
1AdeleneDawner
How could it be better? What parts still need clarifying?
1SilasBarta
Okay, after reading the thread and more of Eliezer's comments on the issue, it makes more sense. If I understand it correctly, in the story world, women normally initiate sex, and so men would view female-initiated sex as the norm and - understandably - not see what's wrong with non-consensual sex, since they wouldn't even think of the possibility of male-initiated sex. Akon, then, is speaking from the perspective of someone who wouldn't understand why men would have a problem with sex being forced on them, and who doesn't consider rape of women as a possibility at all. Is that about right? ETA: I still can't make sense of all the business about redrawing of boundaries of consent. ETA2: I also can't see how human nature could change so that women normally initiate sex, AND men continue to have the same permissive attitude toward sex being forced upon them. It seems that the severity of being raped is part and parcel of being the gender that's choosier about who they have sex with.
2AdeleneDawner
Regarding the first part, I don't think we were given enough information, either in the story or in the explanation, to determine how exactly the 3WC society differs from ours in that respect - and the point wasn't how it's different so much as that it's different, so I don't consider that a problem. I could be wrong, though, about having enough information - I'm apparently wired especially oddly in ways that are relevant to understanding this aspect of the story, so there's a reasonable chance that I'm personally missing one or more pieces of information that Eliezer assumed that the readers would be bringing to the story to make sense of it. Regarding 'boundaries of consent', I'm working on an explanation of how I understood Eliezer's explanation. This is a tricky area, though, and my explanation necessarily involves some personal information that I want to present carefully, so it may be another few hours. (I've been out for the last four, or it would have been posted already.)
5Blueberry
My understanding was that any society has things that are considered consented to by default, and things that need explicit permission. For instance, among the upper class in England in the last century, it was considered improper to start a conversation with someone unless you had been formally introduced. In modern-day America, it's appropriate to start a conversation with someone you see in public, or tap someone on the shoulder, but not to grope their sexual organs, for instance. I think this is what EY meant by "boundaries of consent": for instance, imagine a society where initiating sex was the equivalent of asking the time. You could decline to answer, but it would seem odd. Even so, there's a difference between changing the default for consent, and actually allowing non-consensual behavior. For instance, if someone specifically tells me not to tap her shoulder (say she's an Orthodox Jew) it would then not be acceptable for me to do so, and in fact would legally be assault. But if a young child doesn't want to leave a toy store, it's acceptable for his parent to forcibly remove him. So there's actually two different ideas: changing the boundaries of what's acceptable, and changing the rules for when people are allowed to proceed in the face of an explicit "no".
0LauraABJ
It's also possible that people in that society have a fetish about being taken regardless of anything they do to try and stop it... Like maybe it's one of the only aspects of their lives they don't have any control over, and they like it that way. Of course, I think your explanation is more likely, but either could work.
0AdeleneDawner
I'm still working on my explanation, but I'm going to wait and see if this comment does the job before I post it.
-1SilasBarta
It seems you're still about as confused as I am. Why do you think the linked comment clarified anything?
5AdeleneDawner
I'm not confused at Eliezer's linked comments; I'm confused at your confusion. I think the linked comments clarified things because I learned relevant information from them, the following points in particular:

1. The rape comment was not intended to be a plot point, or even major worldbuilding, for 3WC. The fact that we don't have enough in-story context to understand the remark may have been purposeful (though the purpose was not 3WC-related if so), and whether it was purposeful or not, 3WC is intended to be able to work without such an explanation.

2. Eliezer believes that he understands the psychology behind rape well enough to construct a plausible alternative way for a society to handle the issue. He attempted to support the assertion that he does by explaining how our society handles the issue. I found his explanation coherent and useful - it actually helped solve a related problem I'd been working on - so I believe that he does understand it.

I understand that you didn't find his explanation coherent and/or useful, but I don't know why, so I don't know if it's an issue of you not having some piece of information that Eliezer and I have and take for granted, or you noticing a problem with the explanation that Eliezer and I both missed, or perhaps some other issue. My method of solving this kind of problem is to give more information, which generally either solves the problem directly or leads the other person to be able to pinpoint the problem they've found in my (or in this case, Eliezer's) logic, but on such a touchy subject I'm choosing to do that carefully.

Here's my attempt at explaining Eliezer's explanation. It's based heavily on my experiences as someone who's apparently quite atypical in a relevant way. This may require a few rounds of back-and-forth to be useful - I have more information about the common kind of experience (which I assume you share) than you have about mine, but I don't know if I have enough information about it to pinpoint all the interesting differences. Note that this information is on the border of what I'm comfortable sharing in a public area, and may be outside some peoples' comfort zones even to read about: If anyone reading is easily squicked by sexuality talk, they may want to leave the thread now.

I'm asexual. I've had sex, and experienced orgasms (anhedonically, though I'm not anhedonic in general), but I have little to no interest in either. However, I don't object to sex on principle - it's about as emotionally relevant as any other social interaction, which can range from very welcome to very unwelcome depending on the circumstances and the individual(s) with whom I'm socializing*. Sex tends to fall on the 'less welcome' end of that scale because of how other people react to it - I'm aware that othe... (read more)

6RHollerith
One of the adverse effects of pain pills is to temporarily take away the ability of the person's emotions to inform decision-making, particularly the avoidance of harms. According to neuroscientist Antonio Damasio, for most people, the person's ability to avoid making harmful decisions depends on the ability of the person to have an emotional reaction to the consequences of a decision -- particularly an emotional reaction to imagined or anticipated consequences -- that is, a reaction that occurs before the decision is made. When on pain pills, a person tends not to have (or not to heed) these emotional reactions to consequences of decisions that have not been made yet, if I understand correctly. The reason I mention this is that you might want to wait till you are off the pain pills to continue this really, really interesting discussion of your sexuality. I do not mean to imply that your decision to comment will harm you -- I just thought a warning about pain pills might be useful to you.
5AdeleneDawner
I noticed this issue myself, last night - I'd been nervous about posting the information in the second and third paragraphs before I took the meds, and wasn't, afterwards, which was unusual enough to be slightly alarming. (I did write both paragraphs before my visit to the dentist, and didn't edit them significantly afterwards.) The warning is appreciated, though. I've spent enough time thinking about this kind of thing, though, that I'm confident I can rely on cached judgments of what is and isn't wise to share, even in my slightly impaired state. I'll wait on answering anything questionable, but I suspect that that's unlikely to be an issue - I am really very open about this kind of thing in general, when I'm not worrying about making others uncomfortable with my oddness. It's a side-effect of not having a sexual self to defend.
0CronoDAS
I assume that by "pain pills" you mean opioids and other narcotics? I suspect that asprin and other non-narcotic painkillers wouldn't impair emotional reactions...
1AdeleneDawner
I'm taking an opioid, but I suspect that the effect would be seen with anything that affects sensory impressions, since it'll also affect your ability to sense your emotions.
3[anonymous]
Bit of a repeat warning: if you don't want to read about sex stuff, don't read this. You know, given my own experiences, reading this post makes me wonder if sexual anhedonia and rationality are correlated for some reason. (Note, if you wish, that I'm a 17-year-old male, and I've never had a sexual partner. I do know what orgasm is.)
1wedrifid
I would be shocked if they weren't. The most powerful biases are driven by hard-wired sexual signalling mechanisms.
0[anonymous]
This makes me wonder how I would be different if I weren't apparently anhedonic. Note that I don't remember whether I first found out about that or stumbled upon Eliezer Yudkowsky; it's possible that my rationality-stuff came before my knowledge. Thinking again, I have been a religious skeptic all my life (and a victim of Pascal's wager for a short period, during which I managed to read some of the Pentateuch), I've never taken a stand on abortion, and I've been mostly apolitical, though I did have a mild libertarian period after learning how the free market works, and I never figured out what was wrong with homosexuality. I don't know whether I, before puberty, was rational or just apathetic.
2Kaj_Sotala
This was a fascinating comment; thank you. By the way, the Bering at Mind blog over at Scientific American had a recent, rather lengthy post discussing asexual people.
2RobinZ
That is really, really interesting - thanks! (P.S. I do think that this is a fair elaboration on Eliezer's comment, insofar as I understood either.)
3AdeleneDawner
You're welcome. :)
1gwern
FWIW, I think people don't find it implausible because they know, even if only vaguely, that there are people out there with fetishes for everything, and I have the impression that in heavily Islamic countries with full-on burkha usage and purdah going on, things like ankles are supposed to be erotic and often are.
3AdeleneDawner
That interpretation sounds odd to me, so I checked Wikipedia, which says: 'Conventional' seems to be the sticking point. Ankles are conventionally considered sexual in that culture, so it's not a fetish, in that context; it's a cultural difference. It seems to make the most sense to think of it as a kind of communication - letting someone see your ankle, in that culture, is a communication about your thoughts regarding that person (though what exactly it communicates, I don't know enough to guess on), and the content of that communication is the turn-on. In our culture, the same thing might be communicated by, say, kissing, with similar emotional results. In either case, it's not the form of the communication that seems to matter, but the meaning, whereas in the case of a fetish, the form does matter, and what the action means to the other party (if there's another person involved) doesn't appear to. (Yes, I have some experience in this area. The fetish in question wasn't actually very interesting, and I don't think talking about it specifically will add to the conversation.)
2gwern
I'm... not quite following. I gave 2 examples of why an educated modern person would not be surprised at Victorian ankles and their reception: that fetishes are known to be arbitrary and to cover just about everything, and that contemporary cultures are close or identical to the Victorians. These were 2 entirely separate examples. I wasn't suggesting that your random Saudi Arabian (or whatever) had a fetish for ankles or something, but that such a person had a genuine erotic response regardless of whether the ankle was exposed deliberately or not. A Western teenage boy might get a boner at bare breasts in porn (deliberate but not really communicating), his girlfriend undressing for him (deliberate & communicative), or - in classic high school anime fashion - a bra/swimsuit getting snagged (both not deliberate & not communicative).
1AdeleneDawner
It seems like we're using the word 'fetish' differently, and I'm worried that that might lead to confusion. My original point was about how the cultural meanings of various things can change over time - including but not limited to what would or would not be considered a fetish (i.e. 'unusual to be aroused by'). If nearly everyone in a given culture is aroused by a certain thing, then it's not unusual in that culture, and it's not a fetish for people in that culture to be aroused by that thing, at least given how I'm using the word. (Otherwise, any arousing trait would be considered a fetish if at least one culture doesn't or didn't share our opinion of it, and I suspect that idea wouldn't sit well with most people.) I propose that the useful dividing line between a fetish and an aspect of a given person's culture is whether or not the arousing thing is universal enough in that culture that it can be used communicatively - that appears to be a good indication that people in that culture are socialized to be aroused by that thing when they wouldn't naturally be aroused by it without the socialization. I also suspect that that socialization is accomplished by teaching people to see the relevant things as communication, automatically, as a deep heuristic - so that that flash of ankle or breast is taken as a signal that the flasher is sexually receptive, without any thought involved on the flashee's part. It makes much more sense to me that thinking that someone was sexually receptive would be arousing than that somehow nearly everyone in a given culture somehow wound up with an attraction to ankles for their own sake, for no apparent reason, and without other cultures experiencing the same thing. There may be another explanation, though - were you considering some other theory?
1gwern
This seems true to me. No American male would deny that he is attracted to at least one of the big three (breasts, buttocks, face), and attracted for their own sake, and for no apparent reason. (Who instructed them to like those?) Yet National Geographic is famous for all its bare-breasted photos of women who seem to neither notice nor care, and ditto for the men. The simplest explanation to me is just that cultures have regions of sexiness, with weak ties to biological facts like childbirth, and fetishes are any assessment of sexiness below a certain level of prevalence. Much simpler than all your communication.
1AdeleneDawner
It seems I was trying to answer a question that you weren't asking, then; sorry about that.
2Blueberry
Well, the awareness that there are people who have a fetish for X in this culture might make it less surprising that there is a whole culture that finds X sexy. You're at least partly right about the communication theory. One big turn on for most people is that someone is sexually interested in them, as communicated by revealing normally hidden body parts. Supposedly in Victorian times legs were typically hidden, so revealing them would be communicative. Another part of this is that the idea of a taboo is itself sexy, whether or not there is communicative intent. Just the idea of seeing something normally secret or forbidden is arousing to many people. I'm curious about your example that came up in your life, if you're willing to share.
3AdeleneDawner
I suppose that's true, though it's not obvious to me that something would have to start as a fetish to wind up considered sexual by a culture. This appears to be true - I've heard it before, anyway - but it doesn't make sense, to me, at least as a sexual thing. Except, as I'm thinking of it now, it does seem to make sense in the context of communicating. Sharing some risky (in the sense that if it were made public knowledge, you'd take a social-status hit) bit of information is a hard-to-fake signal that you're serious about the relationship, and doing something risky together is a natural way of reciprocating with each other regarding that. It seems like it'd serve more of a pair-bonding purpose than strictly a sexual one, but the two are so intertwined in humans that it's not really surprising that it'd do both. My first boyfriend had a thing for walking through puddles while wearing tennis shoes without socks. Pretty boring, as fetishes go.
0Blueberry
It wouldn't. That's not what I meant: I meant that someone considering Victorian culture, say, where it was allegedly commonplace to find ankles sexy, might not find it too surprising if he knew about people with an ankle fetish in this culture. As in "I know someone who finds ankles sexy in this culture, so it's not that weird for ankles to be considered sexy in a completely different culture." Communicating risky information is more of a pair-bonding thing than a sexual one. I was thinking about seeing something taboo or hidden as sexual. Say it's in a picture or it's unintentional, so there's no communicative intent. A lot of sexuals find it exciting just because it's "forbidden". You might be able to relate if you've ever been told you can't do something and that just made you want it more.
2AdeleneDawner
That sounds bizarre. I understand assuming that something a higher-ranking person is allowed to have, and you're not, is a good thing to try to get. It sounds like the cause and effect in what you described is backwards from the way that makes sense to me: 'this is good because it's not allowed', not 'this is not allowed because it's good and in limited supply'. What could being wired that way possibly accomplish besides causing you grief? ETA: I have heard of that particular mental quirk before, and probably even seen it in action. I'm not saying that it's unusual to have it, just that it seems incomprehensible and potentially harmful, to me.
1Blueberry
Well, you're really asking two questions: why is it useful, and how to comprehend it. As far as comprehending it... well, I had thought it was a human universal to be drawn to forbidden things. Have you really never felt the urge to do something forbidden, or the desire to break rules? Maybe it's just because I tend to be a thrill-seeker and a risk-taker. I think you might be misunderstanding. I don't make a logical deduction that something is a good thing because it's not allowed. I do feel emotionally drawn towards things that are forbidden. It's got nothing to do with "higher-ranking" people. It's a pretty natural human urge to go exploring and messing around in forbidden areas. It's useful because it's what helps topple dictatorships, encourages scientific inquiry, and stirs up revolutions.
2AdeleneDawner
I don't think I've ever felt the need to break a rule just for the sake of doing so. I vaguely remember being curious enough about the supposed draw of doing forbidden things to try it in some minor way, out of curiosity, as a teenager, but it's pretty obvious how that worked out. (My memory of my teenage years is horrible, so I don't have details, and could actually be incorrect altogether.) My reaction to rules in general is fairly neutral: I tend to assume that they have (or at least, were intended to have) good reasons behind them, but have no objection to breaking rules whose reasons don't seem relevant to the issue at hand. I did understand that you were talking about something different, but that different thing doesn't make sense.
1Alicorn
I am typically only drawn to forbidden things when I do not know why they are forbidden, or know that they are forbidden for stupid reasons and find the forbidden thing a desideratum for other reasons. In the first case, it's a matter of curiosity - why has someone troubled to forbid me this thing? In the second, it's just that the thing is already a desideratum and the forbiddance provides no successfully countervailing reason to avoid seeking it.
1wedrifid
Like the 'prestige' metric that has been discussed recently, 'things that the powerful want to stop me from doing' is a strong indicator of potential value to someone, even though it is intrinsically meaningless. Obviously, having this generalised wiring leads them to desire irrelevant or even detrimental things sometimes.
0AdeleneDawner
I haven't been reading that. I'll go check it out. Maybe it'll help.
1CronoDAS
This is interesting to know and read about. Are you a-romantic as well as asexual?
4AdeleneDawner
It depends how you define 'romantic'. I have a lot of trouble with the concept of monogamy, too, so if you're asking if I pair-bond, no. I do have deeply meaningful personal relationships that involve most of the same kinds of caring-about, though. On the other hand, I don't see a strong disconnect between that kind of relationship and a friendship - the difference in degree of closeness definitely changes how things work, but it's a continuum, not different categories, and people do wind up in spots on that continuum that don't map easily to 'friends' or 'romantic partners'. (I do have names for different parts of that continuum, to make it easier to discuss the resulting issues, but they don't seem to work the same as most peoples' categories.)
0CronoDAS
Well, I was mostly referring to this feeling: http://en.wikipedia.org/wiki/Limerence

From your response, I'd have to guess that, no, you don't "fall in love" either. My personal experience is that there's a sharp, obvious difference between the emotions involved in romantic relationships and in friendships, although the girls I've had crushes on have never felt similarly about me.
2AdeleneDawner
Yep, limerence is foreign to me, though not as incomprehensible as some emotions. The Wikipedia entry on love styles may be useful. I'm very familiar with storge, and familiar with agape. Ludus and pragma make sense as mental states (pragma more so than ludus), but it's unclear to me why they're considered types of love. I can recognize mania, but doubt that there's any situation in which I'd experience it, so I consider it foreign. Eros is simply incomprehensible - I don't even recognize when others are experiencing it. That said, it seems completely accurate to me to describe myself as being in love with the people I'm closest with - the strength and closeness and emotional attachment of those relationships seems to be at least comparable with relationships established through more traditional patterns, once the traditional-pattern relationships are out of the initial infatuation stage.
2wedrifid
Thanks for the link. This part was fascinating:
1SilasBarta
Okay, sounds plausible. Now, I ask that you do a check. Compare the length of your explanation to the length of the confusion-generating passage in 3WC. Call this the "backpedal ratio". Now, compare this backpedal ratio to that of, say, typical reinterpretations of the Old Testament that claim it doesn't really have anything against homosexuals. If yours is about the same or higher, that's a good reason to write off your explanation with "Well, you could pretty much read anything into the text, couldn't you?"
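(To pin the idea down with made-up numbers: a 50-word passage defended by roughly 1,000 words of genuinely new interpretive machinery gives a backpedal ratio of about 20, while a one-sentence gloss of a one-sentence remark gives a ratio of about 1. The figures are purely illustrative, not measurements of anything in this thread.)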
4AdeleneDawner
I don't think the length in words is a good thing to measure by, especially given the proportion of words I used offering metaphors to assist people in understanding the presented concepts or reinforcing that I'm not actually dangerous vs. actually presenting new concepts. I also think that the strength (rationality, coherency) of the explanation is more important than the number of concepts used, but it's your heuristic.
0SilasBarta
Fine. Don't count excess metaphors or disclaimers toward your explanation, and then compute the backpedal ratio. Would that be a fair metric? Even with this favorable counting, it still doesn't look good.
1AdeleneDawner
I don't think that evaluating the length of the explanation - or the number of new concepts used - is a useful heuristic at all, as I mentioned. I can go into more detail than I have regarding why, but that explanation would also be long, so I assume you'd disregard it, therefore I don't see much point in taking the time to do so. (Unless someone else wants me to, or something.)
4SilasBarta
Given unlimited space, I can always outline plausible-sounding scenarios where someone's outlandish remarks were actually benign. This is an actual cottage industry among people who want to show adherence to the Bible while assuring others they don't actually want to murder homosexuals. For this reason, the fact that you can produce a plausible scenario where Eliezer meant something benign is weak evidence he actually meant that. And it is the power of elaborate scenarios that implies we should be suspicious of high backpedal ratios. To the extent that you find length a bad measure, you have given scenarios where length doesn't actually correlate with backpedaling. It's a fair point, so I suggested you clip out such false positives for purposes of calculating the ratios, yet you still claim you have a good reason to ignore the backpedal ratio. That I don't get. More generally, I am still confused in that I don't see a clean, simple reason why someone in the future would be confused as to why lots of rape would be a bad thing back in the 20th century, given that he'd have historical knowledge of what that society was like.
4AdeleneDawner
I wasn't trying to explain how Eliezer's world works - I upvoted the original comment specifically because I don't know how it works, and I'm curious. If you were taking my explanation as an attempt to provide that information, I'm sure it came across as a poor attempt, because I was in fact specifically avoiding speculating about the world Eliezer created. What I was attempting to do was show - from an outsider's perspective, since that's the one I have, and it's obviously more useful than an insider's perspective in this case - the aspects of how humans determine selfhood and boundaries that make such a change possible (yes, just 'possible'), and also that Eliezer had shown understanding of the existence of those aspects. If I had been trying to add more information to the story - writing fanfiction, or speculating on facts about the world itself - applying your backpedal-ratio heuristic would make some sense (though I'd still object to your use of length-in-words as a measurement, and there are details of using new-concepts as a measurement that I'm not sure you've noticed), but I wasn't. I was observing facts about the real world, specifically about humans and how dramatically different socialization can affect us. As to why the character didn't understand why people from our time react so strongly to rape, the obvious (to me) answer is a simple lack of explanation by us. There's a very strong assumption in this society that everyone shares the aspects of selfhood that make rape bad (to the point where I often have to hide the fact that I don't share them, or suffer social repercussions), and very little motivation even to consider why it's considered bad, much less leave a record of such thoughts. Even living in this society, with every advantage toward understanding why people react that way except having the relevant trait, I haven't found an explanation that really makes sense of the issue, only one that does a coherent job of organizing the reactions that I've o
1Blueberry
So does your lack of a sexual self make it so you can't see rape as bad at all, or "only" as bad as beating someone up? Presumably someone without a sexual self could still see assault as bad, and rape includes assault and violence.
9AdeleneDawner
Disregarding the extra physical and social risks of the rape (STDs, pregnancy, etc.), I expect that I wouldn't find assault-plus-unwelcome-sex more traumatic than an equivalent assault without the sex. I do agree that assault is traumatic, and I understand that most people don't agree with me about how traumatic assault-with-rape is compared to regular assault. A note, for my own personal safety: The fact that I wouldn't find it as traumatic means I'm much more likely to report it, and to be able to give a coherent report, if I do wind up being raped. It's not something I'd just let pass, traumatic or no; people who are unwilling to respect others' preferences are dangerous and should be dealt with as such.
6Blueberry
Assault by itself is pretty traumatic. Not just the physical pain, but the stress, fear, and feeling of loss of control. I was mugged at knifepoint once, and though I wasn't physically hurt at all, the worst part was just feeling totally powerless and at the mercy of someone else. I was so scared I couldn't move or speak. I don't think your views on rape are as far from the norm as you seem to think. They make sense to me.
4AdeleneDawner
Rape can happen without assault, though - I know someone to whom such a rape happened, and she found it very traumatic, to the point where it still affects her life decades later. There are also apparently other things that can evoke the same kind of traumatized reaction without involving physical contact at all; Eliezer gave 'having nude photos posted online against your will' as an example. (I mentioned that example in a discussion with the aforementioned friend, and she agreed with Eliezer that it'd be similarly traumatic, in both type and degree, for whatever one data-point might be worth.)
2Blueberry
You seem confused about several things here. Unlike Biblical exegesis, in this conversation we are trying to elaborate and discuss possibilities for the cultural features of a world that was only loosely sketched out. You realize this is a fictional world we're discussing, not a statement of morality, or a manifesto that would require "backpedaling"? The point of introducing socially acceptable non-consensual sex was to demonstrate huge cultural differences. Neither EY nor anyone else is claiming this would be a good thing, or "benign": it's just a demonstration of cultural change over time. Someone in the future, unless he was a historian, might not be familiar with history books discussing 20th century life. He might (incorrectly) think that lots of rape in the 20th century would have been good, because non-consensual sex is a good thing by his cultural standards. He'd be wrong, but he wouldn't realize it. Your question is analogous to "I don't see why someone now couldn't see that slavery was a good thing back in the 17th century, given that he'd have historical knowledge of what that society was like." Well, yes, slavery was seen (by some people) as a good thing back then, but it's not now. In the story, non-consensual sex is seen (incorrectly) as a good thing in the future, so people in the future interpret the past through those biases.
2Eliezer Yudkowsky
Maybe it's just my experience with Orthodox Judaism, but the backpedal exegesis ratio - if, perhaps, computed as a sense of mental weight, more than a number of words - seems to me like a pretty important quantity when explaining others.
0AdeleneDawner
I could see it being important in some situations, definitely, if I'm understanding the purpose of the measurement correctly. My understanding is that it's actually intended to measure how much the new interpretation is changing the meaning of the original passage from the meaning it was originally intended to have. That's difficult to measure, in most cases, because the original intended meaning is generally at least somewhat questionable in cases where people attempt to reinterpret a passage at all. In this case, I'm trying not to change your stated meaning (which doesn't seem ambiguous to me: You're indicating that far-future societies are likely to have changed dramatically from our own, including changing in ways that we would find offensive, and that they can function as societies after having done so) at all, just to explain why your original meaning is more plausible than it seems at first glance. If I've succeeded - and if my understanding of your meaning and my understanding of the function of the form of measurement are correct - then the ratio should reflect that.
1arundelo
Does this mean you've experienced orgasms without enjoying them, or experienced orgasms without setting out to do so for pleasure, or something else?
4AdeleneDawner
The former. It actually took some research for me to determine that I was experiencing them at all, because most descriptions focus so heavily on the pleasure aspect.
0DanArmak
Evolutionarily, it would seem that the severity of women being raped is due to the possibility of involuntary impregnation. Do we have good data on truly inborn gender differences in the severity of rape, without cultural interference?
4Tyrrell_McAllister
I don't see the need for more than this: I just figured that these humans have been biologically altered to have a different attitude towards sex. Perhaps, for them, initiating sex with someone is analogous to initiating a conversation. Sure, you wish that some people wouldn't talk to you, but you wouldn't want to live in a world where everyone needed your permission before initiating a conversation. Think of all the interesting conversations you'd miss!
2Alicorn
And if that's what's going on, that would constitute a (skeezy) answer to my question, but I'd like to hear it from the story's author. Goodness knows it would annoy me if people started drawing inaccurate conclusions about my constructed worlds when they could have just asked me and I would have explained.
1Technologos
Alicorn: On the topic of your constructed worlds, I would be fascinated to read how your background in world-building (which, iirc, was one focus of your education?) might contribute to our understanding of this one.
1Alicorn
Yes, worldbuilding was my second major (three cheers for my super-cool undergrad institution!). My initial impression of Eliezer's skills in this regard from his fiction overall is not good, but that could be because he tends not to provide very much detail. It's not impossible that the gaps could be filled in with perfectly reasonable content, so the fact that these gaps are so prevalent, distracting, and difficult to fill in might be a matter of storytelling prowess or taste rather than worldbuilding abilities. (It's certainly possible to create a great world and then do a bad job of showcasing it.) I should be able to weigh in on this one in more detail if and when I get an answer to the above question, which is a particularly good example of a distracting and difficult-to-fill-in gap.
3Johnicholas
If I understand EY's philosophy of predicting the future correctly, the gaps in the world are intentional. Suppose that you are a futurist, and you know how hard it is to predict the future, but you're convinced that the future will be large, complicated, weird, and hard to connect directly to the present. How can you provide the reader with the sensation of a large, complicated, weird, and hard-to-connect-to-the-present future? Note that, as a futurist, you find the conjunction fallacy (more complete predictions are less likely to be correct) extremely salient in your thinking. You put deliberate gaps into your stories, any resolution of which would require a large complicated explanation - that way the reader has the desired (distracting and difficult-to-fill-in) sensation, without committing the author to any particular resolution.
3Eliezer Yudkowsky
The author still has to know what's inside the gaps. Also, the gaps have to look coherent - they can't appear to the reader as noise, or it simply won't create the right impression, no matter what. You may be overanalyzing here. I've never published anything that I would've considered sending in to a science fiction magazine - maybe I'm holding myself to too-high standards, but still, it's not like I'm outlining the plot and building character sheets. My goal in writing online fiction is to write it quickly so it doesn't suck up too much time (and I quite failed at this w/r/t Three Worlds Collide, but I never had the spare days to work only on the novella, which apparently comes with a really large productivity penalty).
0Kutta
I think Alicorn is certainly not overanalyzing, in the sense that fiction is always fiction and the usual methods of analysis apply regardless of the author's proclaimed intentions or the amount of resources spent on writing. On the other hand, I think Eliezer's fiction is perfectly good enough for its purpose, and while the flaws pointed out by Alicorn are certainly there, I think it's unreasonable to expect Eliezer to write like a professional fiction writer.
1Alicorn
Maybe he's a good futurist. That does not make him a good worldbuilder, even if he's worldbuilding about the future. Does it come as any surprise that the skills needed to write good fiction in well-thought-out settings aren't the exact same skills needed to make people confused about large, complicated, weird, disconnected things?
1Johnicholas
Taking your question as rhetorical, with the presumed answer "no", I agree with you - of course the skills are different. However, I hear an implication (and correct me if I'm wrong) that good fiction requires a well-thought-out setting. Surely you can think of good writers who write in badly-constructed or deeply incomplete worlds.
1Alicorn
Good fiction does not strictly require a well-built setting. A lot of fiction takes place in a setting so very like reality that the skill necessary to provide a good backdrop isn't worldbuilding, but research. Some fiction that isn't set in "the real world" still works with little to no sense of place, history, culture, or context, although this works mostly in stories that are very simple, very short, or (more typically) both. Eliezer writes speculative fiction (eliminating the first excuse), and his stories typically depend heavily on backdrop elements (eliminating the second excuse, except when he's writing fanfiction and can rely on prior reading of others' works to do this job for him).
1Johnicholas
I agree with you regarding the quality of his writing, but your generalizations regarding worldbuilding's relationship to quality may be overbroad or overstrong. Worldbuilding is fun and interesting and I like it in my books, but lack of worldbuilding, or deep difficult holes in the world, are not killing flaws. Almost anything can be rescued by sufficient quality in other areas. Consider Madeleine L'Engle's A Wrinkle in Time, Gene Wolfe's Book of the New Sun, Stanislaw Lem's Cyberiad.
2Alicorn
The only one of the books you mention that I've read is Wrinkle in Time, so I'll address that one. It isn't world-driven! It's a strongly character-driven story. The planets she invents, the species she imagines, the settings she dreams up - these do not supply the thrust of the story. The people populating the book do that, and pretty, emotionally-charged prose does most of the rest. Further, L'Engle's worldbuilding isn't awful, and moreover, its weaknesses aren't distracting. It has an element of whimsy to it and it's colored by her background values, but there's nothing much in there that is outrageous and important and unexplained.

Eliezer's stories, meanwhile - I'd have to dislike them even more if I were interpreting them as being character-driven. His characters tend to be ciphers with flat voices, clothed in cliché and propped up by premise. And it's often okay to populate your stories with such characters if they aren't the point - if the point is world or premise/conceit or plot or even just raw beautiful writing. I actually think that Eliezer's fiction tends to be premise/conceit driven, not setting driven, but he backs up his premises with setting, and his settings do not appear to be up to the task.

So to summarize: A bad story element (such as setting, characterization, plot, or writing quality) may be forgivable, and not preclude the work it's found in from being good, if:

* The bad element is not the point of the story
* The bad element isn't indispensable to help support whatever element is the point of the story (for instance, you might get away with bad writing in a character-driven story only if you don't depend on your character's written voice to convey their personality)
* And it is not so bad as to distract from the point of the story.

Eliezer's subpar worldbuilding slips by according to the first criterion. I don't think his stories are truly setting-driven. But it fails the second two. His settings are indispensably necessary to back
-1MatthewB
Wouldn't cannibalism be an equally horrific thing to come up with? I can envision a future world in which cannibalism is just as accepted as non-consensual sex is in Three Worlds Collide. I mean, with the recent invention of pork grown on a petri dish, it would be just as easy to grow human meat on a petri dish. Or, if, as in the movies Surrogates and Avatar, we have bodies that we drive from a distance, and these bodies can be replaced by just growing or building a new one, might not some people kill and eat others as a type of social comment, in the same way that some sub-cultures engage in questionable behavior in order to make a social comment? The Goth/Vampire thing, for instance, or some shock artists like Karen Finley or GG Allin. (I had to leave the GG Allin show(s) I attended as a kid, but sat in rapt fascination through Karen's song Yam Jam, and was really interested in just how she was going to remove those yams afterward, or what might have driven her to explore that particular mode of expression to begin with.)
4tut
I would not have any problems with eating human-on-a-petri-dish, as long as it never had any neurons. The problem with cannibalism is eating a person, not some cells with the wrong DNA. And cells in a petri dish are not a person.
0Cyan
The other problem with cannibalism is that you can get diseases that way far more easily than you can from eating non-human meat.
0MatthewB
I know that, and you know that... But, what would the general population say about eating meat that was the product of Human DNA? It seems to me that some of the general population would be horribly incensed about it.
3DanArmak
Most of the general population is incensed about most things, most of the time. I've stopped caring. Why don't you?
3byrnema
If a group of people donated their bodies to cannibalism when they die, for a group of cannibals to then consume them, I would have no problem with that. (I submit myself as an example of someone with moderate rather than extremely liberal views.) I think the moral repugnance only comes in when people might be killed for food: the value of life and person-hood is so much greater than the value of an immediate meal. Someone speculated earlier about a civilization of humans that had nothing to consume but other humans. Has it been mentioned yet that this population would shrink exponentially, because humans are heterotrophs, and there's something like only 10% efficiency from one step in the food chain to the next? That's what was disappointing about The Matrix. If the machines wanted to generate energy, there would have been more efficient ways to do so (say, one which actually generated more energy than it required). I pretend the machines were just farming CPU cycles and the director got it wrong.
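(Back-of-the-envelope, using the commonly cited ~10% trophic-efficiency figure rather than any precise number: in a closed loop where humans eat only humans, each feeding cycle can sustain roughly a tenth of the previous biomass, so after n cycles only about 0.1^n of the original remains - around 0.1% after just three cycles. The population crashes within a handful of generations no matter what exact efficiency you assume.)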
1DanArmak
We already have moral repugnance towards the act of killing itself. I suspect that any feelings towards already-dead bodies exist independently of this. They may be rooted in feelings of disgust which evolved in part to protect from contamination (recently dead bodies can spread disease and also provide breeding ground for flies and parasites).
0byrnema
I don't locate feelings of disgust. Perhaps we are just genetically or culturally different with respect to this sensitivity? I recall that when my parakeet died, I felt a sense of awe while holding the body, and a moral obligation to be respectful and careful with it. I suppose I wouldn't have enjoyed eating him, but only because I identified him as more of a person than food. If I thought he would have wanted me to eat him, I would. Except then I would worry about parasites, so I would have to weigh my wishes to make a symbolic gesture versus my wishes to stay healthy.
-1MatthewB
That was me who discussed a civilization that had nothing to consume but other humans. Thanks for bringing it up, but I had already dealt with that in the stories as soon as someone pointed out the problem when I was much younger (it turned out to be easier to fix than I thought). Telling what the solution is would give away too much, though, and I might actually be able to get these published now that cannibalism is not nearly so taboo as it was back in the 80s when I first tried to submit them (zombie movies were not nearly so prevalent then as now). Once the solution is revealed, it makes for another grim and, some have said, twisted surprise.

I too have wondered about the whole Matrix thing. There are some very good arguments against it, which I tend to give more weight than the arguments in favor. The arguments in favor did not take into account the waste generated by the humans being used to generate power, nor any possible superconducting tech the AIs may have had. I can't recall whether any of them took into account that the AIs were not using every farmed human as a battery, but were using many of the farmed humans as food for the living humans. There is also some evidence from the games that the AIs were using algae as a supplement for the human batteries.

Also, on the point about people donating their bodies to cannibals when they die: I have often thought it would be a horrible joke for some cranky rich old guy to play on his heirs to make them eat him if they wished to inherit any of his fortune.
3Paul Crowley
Sod that, start a religion in which people have to symbolically eat your body and drink your blood once a week. Better yet, tell them that when they do it, it magically becomes the real thing!
0MatthewB
I would love to stop caring; it is indeed a wonderful suggestion. However, many of the people who would be offended by such things also get offended by much less offensive things, things which can cause a loss of liberty to others... and they vote. Still, I do think it would behoove me to turn up my apathy just a bit, as my near-term future will have a lot more to say about my survival and ultimate value than worrying about a bunch of human cattle who like to get all bothered about things as trivial as the shape of the moon (absurd example).
1Vladimir_Nesov
Does your worrying about and discussing what other people believe contribute more to changing the outcome of their voting, or to other things, like the personal payoff of social interaction from having discussions about people of lower status on this metric? Overestimating the importance of personally discussing politics for policy formation is a classic mistake. See also: Dunbar's Function
3MatthewB
I see that I may be caught up in this mistake a bit. Some of my discussing is simply to gather information about what a typical person of a given demographic might believe; mostly it confirms what I've read in a poll or seen in data from a website. Sometimes the discussion gets to the point where I try to change an attitude, and I keep tripping over myself when I do, since few people will change their political and/or religious attitudes without some emotional connection to the reason for changing. This is partly why I am here: I want to stop using my valuable brain time to convince people of things I have no hope of changing, and do something else that may contribute to the good of society in a more direct way. I am a mess of paranoid contradictions gathered from a misspent youth, and I wish to untangle some of that irrationality, as it is an intellectual drag on my progress.

Your approach to AI seems to involve solving every issue perfectly (or very close to perfectly). Do you see any future for more approximate, rough-and-ready approaches, or are these dangerous?

[-]roland210

Autodidacticism

Eliezer, first, congratulations on having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you; did you have any tutor or mentor? Or did you just read and learn what was interesting and keep going for more, one field of knowledge opening pathways to the next, etc.?

EDIT: Of course I would be interested in the details, like which books you read when, and what further interests they sparked, etc... Tell us a little story. ;)

How did you win any of the AI-in-the-box challenges?

http://news.ycombinator.com/item?id=195959

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."

0bogdanb
I know that part. I was hoping for a bit more...
8Unnamed
Here's an alternative question if you don't want to answer bogdanb's: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?
4Yorick_Newsome
Something tells me he won't answer this one. But I support the question! I'm awfully curious as well.
2CronoDAS
Perhaps this would be a more appropriate version of the above: What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?
2SilasBarta
Voted down. Eliezer Yudkowsky has made clear he's not answering that, and it seems like an important issue for him.
3wedrifid
Voted back up. He will not answer, but there's no harm in asking. In fact, asking serves to raise awareness both of the surprising (to me at least) result and of the importance Eliezer places on the topic.
-1SilasBarta
Yes, there is harm in asking. Provoking people to break contractual agreements they've made with others and have made clear they regard as vital, generally counts as Not. Cool.
4Jordan
In this case, though, it's clear that Eliezer wants people to get something out of knowing about the AI-box experiments. That's my extrapolated Eliezer volition, at least. Since I and many others can't get anything out of the experiments without knowing what happened, I feel justified in questioning Eliezer where we see a contradiction between his stated wishes and our extrapolation of his volition. In most situations I would agree that it's not cool to push.
1wedrifid
As the OP said, Eliezer hasn't been subpoenaed. The questions here are merely stimuli to which he can respond with whichever insights or signals he desires to convey. For what little it is worth, my 1.58 bits is 'up'. (At least, if it is granted that a given person has read the post and that his voting decision is made actively, then I think I would count it as 1.58 bits. It's a little blurry.)
1[anonymous]
It depends on the probability distribution of comments.
0wedrifid
Good point. Probability distribution of comments relative to those doing the evaluation.
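(To unpack where the 1.58 comes from, a minimal sketch assuming the three options are upvote, downvote, and abstain: if all three were equally likely, a single vote would carry

$$\log_2 3 \approx 1.58 \text{ bits}.$$

In general, though, a particular vote $v$ conveys $-\log_2 P(v)$ bits, and the average vote conveys the entropy $H = -\sum_i p_i \log_2 p_i \le \log_2 3$, which is why the figure depends on the underlying probability distribution, as noted above.)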
0bogdanb
IIRC* the agreement was to not disclose the contents of a contest without the agreement of both participants. My hope was not that Eliezer might break his word, but that evidence of continued interest in the matter might persuade him to obtain permission from at least one of his former opponents. (And to agree himself, as the case may be.) (*: and my question was based on that supposition)

If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?

[-]evtujo210

How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?

7taa21
Just out of curiosity, why are you asking this? And why is Yudkowsky's opinion on this matter relevant?
2spriteless
This sort of thing should have its own thread; it deserves some brainstorming. You could start with the choice of fairy tales. You can also make the games available to play be ones that reward understanding probabilities and logic over luck and quick reflexes. My dad got us puzzle games and reading tutors for the NES and C64 when I was a kid (Lode Runner, LoLo, Reader Rabbit).
[-]anonym200

What progress have you made on FAI in the last five years and in the last year?

How do you characterize the success of your attempt to create rationalists?

[-]anon190

For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?

Humans have built-in adaptations for lie detection, but betting a decision like this on the chance of my Sense Motive roll beating the Bluff roll of a person with both higher INT and CHA than myself seems quite risky.

Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AIBox problem: You're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.

I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.

If the a... (read more)

I guess my main answers would be, in order:

1) You'll have to make do with the base probability of a highly intelligent human being a sociopath.

2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.

3) There are all sorts of nasty things we could be doing, and could probably get away with doing if we had exclusively sociopathic core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.

4) Why are you asking me that? Shouldn't you be asking, like, anyone else?

0anon
Re. 4, not for the way I asked the question. Obviously asking for a probability, or any empirical evidence I would have to take your word on, would have been silly. But there might have been excellent public evidence against the Evil hypothesis I just wasn't aware of (I couldn't think of any likely candidates, but that might have been a failure of my imagination); in that case, you would likely be aware of such evidence, and would have a significant incentive to present it. It was a long shot.
2anonymousss
I looked into the issue from a statistical point of view. I would have to go with a much higher than baseline probability of them being sociopaths, on the basis of Bayesian reasoning: start with the baseline probability (about 1%) as a prior, then update on criteria that sociopaths cannot easily fake (such as having previously invented something that works).

Ultimately, the easy way to spot a sociopath is to look for a massive imbalance of observable signals towards those that sociopaths can easily fake. You don't need to be smarter than the sociopath to identify the sociopath. A spam filter is pretty good at filtering out advance-fee fraud while letting business correspondence through. You just need to act like a statistical prediction rule over a set of criteria, without allowing for verbal excuses of any kind, no matter how logical they sound. For instance, the leaders of genuine research institutions are not HS dropouts; the leaders of cults often are; you can find the ratio and build an evidential Bayesian rule, with which you can use the 'is a HS dropout' evidence to adjust your probabilities.

The beauty of this method is that it is too expensive for sociopaths to fake honest signals - such as, for example, having spent years making and perfecting some invention that has improved people's lives; you can't send this signal without doing an immense amount of work - and so even when they are aware of this method there is literally nothing they can do about it. Nor do they want to do anything about it, since there are enough people who pay no attention to the ratio of honest signals to fakeable signals (gullible people), whom sociopaths can target instead for a better reward-to-work ratio.

Ultimately, it boils down to the fact that a genuine world-saving leader is rather unlikely to have never before invented anything that demonstrably benefited mankind, while a sociopath is pretty likely (close to 1) to have never before invented anything that demonstrably benefited mankind.
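(For concreteness, here is a minimal sketch of the kind of "statistical prediction rule" described above, written in Python. The ~1% prior is the base rate mentioned in the comment; the two cues and their likelihood ratios are invented placeholders for illustration, not measured data.)

```python
# Illustrative only: a tiny naive-Bayes-style statistical prediction rule.
# The 1% prior is the base rate cited above; the likelihood ratios are
# made-up placeholder numbers, not estimates from real data.

def posterior_probability(prior, likelihood_ratios):
    """Update prior odds by each cue's likelihood ratio, return a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical cues, scored as P(cue | sociopath) / P(cue | not sociopath):
cues = {
    "signals skewed heavily toward the easily faked": 5.0,
    "no prior hard-to-fake accomplishment (e.g. a working invention)": 3.0,
}

p = posterior_probability(0.01, cues.values())
print(f"posterior: {p:.1%}")  # roughly 13% with these placeholder numbers
```

(The point of such a rule, as above, is that it ignores verbal excuses entirely: only the presence or absence of the pre-chosen cues moves the number.)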
0timtyler
What would be the best way of producing such evidence? Presumably organisational transparency - though that could, in principle, be faked. I'm not sure they will go for that, citing the same reasons previously given for not planning to open-source everything.

What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.

0roland
Does meta-thinking include the gathering of more information for solving the problem? I think it should.

Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...)

2MichaelVassar
Mine would be slightly less than 10% by 2030, and slightly greater than 85% by 2080, conditional on a general continuity of our civilization between now and 2080. The most likely method of origination depends on how far out we look: more brain-inspired methods tend to come later and to be more likely in absolute terms.
2alyssavance
We at SIAI have been working on building a mathematical model of this since summer 2008. See Michael Anissimov's blog post at http://www.acceleratingfuture.com/michael/blog/2009/02/the-uncertain-future-simple-ai-self-improvement-models/. You (or anyone else reading this) can contact us at uncertainfuture@intelligence.org if you're interested in helping us test the model.

Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?

Updating top level with expanded question:

I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?

So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched if it ends up taking a while (usual caveats; existential risks, etc).

It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc.), while more dollars for the SIAI don't seem like they would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).

If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization saves lives.

Regarding what SIAI could do with a marginal $1000, the one-sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allow us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projec... (read more)

Please post a copy of this comment as a top-level post on the SIAI blog.

[-]Rain100

You can donate to FHI too? Dang, now I'm conflicted.

Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.

Crisis averted by tiny obstacles.

[-]Kutta100

at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1,500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total failure mode and a total win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
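(Unpacking the arithmetic, taking the quoted figure of 8 expected lives per dollar at face value, and noting these are just the comment's own numbers rearranged rather than independent estimates:

$$\$150 \times 8 \approx 1{,}200 \text{ lives}, \qquad \frac{8 \text{ lives per \$}}{(1/187.5) \text{ lives per \$}} \approx 1{,}500,$$

i.e. $150 would buy about as many expected lives as Schindler is credited with saving, and the "at least 1,500 times" comparison implies a best conventional charity saving roughly one life per $190 donated.)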

6Wei Dai
Someone should update SIAI's recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:

* Machine Ethics and Superintelligence
* Which Consequentialism? Machine Ethics and Moral Divergence
2Pablo
Those interested in the cost-effectiveness of donations to the SIAI may also want to check Alan Dawrst's donation recommendation. (Dawrst is "Utilitarian", the donor that Anna mentions above.)
1StefanPernar
Thanks for that, Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them online and pointing me to where I can access them?
0MichaelGR
Thank you very much, Anna. This will help me decide, and I'm sure that it will help others too. I second Katja's idea; a version of this should be posted on the SIAI blog.
2Kaj_Sotala
Kaj's. :P
0MichaelGR
I'm sorry, for some reason I thought you were Katja Grace. My mistake.
6Eliezer Yudkowsky
I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other's positive publicity. For this reason I've usually tended to avoid this kind of elevator pitch! Pass to Michael Vassar: Should I answer this?
3MichaelGR
[I've moved what was here to the top level comment]
2Eliezer Yudkowsky
I'll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.

If Omega materialized and told you Robin was correct and you were wrong, what would you do for the next week? The next decade?

1Peter_de_Blanc
About what? Everything?
4gwern
Given the context of Eliezer's life-mission and the general agreement of Robin & Eliezer: FAI, AI's timing, and its general character.
1retired_phlebotomist
Right. Robin doesn't buy the "AI go foom" model or that formulating and instilling a foolproof morality/utility function will be necessary to save humanity. I do miss the interplay between the two at OB.

What is the probability that this is the ultimate base layer of reality?

0MichaelHoward
And then... Really? What would be a fair estimate if you were someone not especially likely to be simulated, living in a not particularly critical time, and there was only, say, a trillionth as much potential computronium lying around?
0Mike Bishop
Could you explain a bit more what this means?
2jimmy
I think he means "as opposed to living in a simulation (possibly in another simulation, and so on)". This seems to be one of those questions that seem like they should have answers, but actually don't. If there's at least one copy of you in "a simulation" and at least one in "base-level reality", then you're going to run into the same problems as Sleeping Beauty, the absent-minded driver, etc. when you deal with 'indexical probabilities'. There are decision theory answers, but the ones that work don't mention indexical probabilities. This does make the situation a bit harder than, say, the Sleeping Beauty problem, since you have to figure out how to weight your utility function over multiple universes.
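(A minimal sketch of why such indexical questions come out underdetermined, using the Sleeping Beauty setup mentioned above: a fair coin is flipped; heads means one awakening, tails means two indistinguishable awakenings. Counting coin outcomes gives $P(\text{heads}) = 1/2$; counting awakenings equally gives $P(\text{heads} \mid \text{awake}) = 1/3$. Both bookkeepings are internally consistent, and which is "right" depends on what you decide to count, which is why the decision-theoretic treatments sidestep the probability and just pick the policy that maximizes expected payoff summed over awakenings.)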

Are the book(s) based on your series of posts on OB/LW still happening? Any details on their progress (title? release date? e-book or physical book? approached publishers yet? only technical books, or a popular book too?), or on why they've been put on hold?

http://lesswrong.com/lw/jf/why_im_blooking/

8Eliezer Yudkowsky
Yes, that is my current project.