Should I believe what the SIAI claims?
by XiXiDu · 12th Aug 2010 · 4 min read · 22 karma · 633 comments
Machine Intelligence Research Institute (MIRI) · Personal Blog
633 comments, sorted by top scoring
Some comments are truncated due to high volume.
[-]Rain15y520

(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)

Although this may not answer your questions, here are my reasons for supporting SIAI:

  • I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.

  • It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"

  • No one else cares about the big

... (read more)
Reply
3multifoliaterose15y
Good, informative comment.
2XiXiDu15y
Yeah, that's why I'm donating as well. Sure, but why the SIAI? I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point. So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion. I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored. I want to believe. Beware of those who agree with you? Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats. I can accept that. But I'm unable to follow the process of elimination yet.
7Rain15y
Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes? I used something I developed which I call Point-In-Time Utility to guide my thinking on this matter. It basically boils down to, 'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires. I don't think it has to be an explosion at all, just smarter-than-human. I'm willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, if even retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view alternatives as failure modes. I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn't change my thinking on the matter; I'd heard arguments like it before. Hi. I'm human. At least, last I checked. I didn't say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there's still a lot I disagree with regarding SIAI's positions. The latter is what I'm worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I'm hoping that Friendly AI beats them. I haven't seen you name any other organization you're donating to or who might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don't seem to be working on these problems. Even
4XiXiDu15y
That there are no others does not mean we shouldn't be keen to create them, to establish competition. Or do it at all at this point. I'm not sure about this. I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips. You are right, never mind what I said. Yeah, and how is their combined probability less worrying than that of AI? That doesn't speak against the effectiveness of donating all to the SIAI of course. Creating your own God to fix the problems the imagined one can't is indeed a promising and appealing idea, given it is feasible. I'm mainly concerned about my own well-being. If I was threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings. As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.
8Rain15y
Absolutely agreed. Though I'm barely motivated enough to click on a PayPal link, so there isn't much hope of my contributing to that effort. And I'd hope they'd be created in such a way as to expand total funding, rather than cannibalizing SIAI's efforts. Certainly there are other ways to look at value / utility / whatever and how to measure it. That's why I mentioned I had a particular theory I was applying. I wouldn't expect you to come to the same conclusions, since I haven't fully outlined how it works. Sorry. I'm not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that's the theory. There may be no way to save us. AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I've read, so does Eliezer, which is why he's working on that problem instead of, say, nanotech. I've mentioned before that I'm somewhat depressed, so I consider my philanthropy to be a good portion 'lack of caring about self' more than 'being concerned about the well-being of all beings'. Again, a subtractive process. Thanks! I think that's probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technical minded anti-Summit, without all the useless politics of the IEET and the like.
-3XiXiDu15y
It's more likely that the Klingon warbird can overpower the USS Enterprise. Why? Because EY told you? I'm not trying to make snide remarks here but how people arrived at this conclusion was what I have been inquiring about in the first place. Me too, but I was the only one around willing to start one at this point. That's the sorry state of critical examination.
4Rain15y
To pick my own metaphor, it's more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we're going to create wonderful, cunning, incredibly powerful technology, and I think we're going to misuse it to destroy ourselves. Because intelligent beings are the most awesome and scary things I've ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can't see us holding back from trying to tweak intelligence itself. I view it as inevitable. I'm hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.
2XiXiDu15y
What I was trying to show you by the Star Trek metaphor is that you are making estimations within a framework of ideas of which I'm not convinced to be based on firm ground.
1Rain15y
I'm not a very good convincer. I'd suggest reading the original material.
0HughRistik15y
Can we get some links up in here? I'm not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.
0Rain15y
This thread has Eliezer's request for specific links, which appear in replies.
[-]Eliezer Yudkowsky15y240

I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.

Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for th... (read more)

Reply
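A minimal numerical sketch of the allocation rule described above — discount each option's payoff by the probability its claims are true, then put the whole budget on the highest marginal expected utility per dollar. The charities and numbers are hypothetical, purely for illustration:

```python
# Toy model: each option is (probability its core claims are true, utility per dollar if true).
options = {
    "charity_A": (0.01, 1000.0),  # low probability, huge payoff if right
    "charity_B": (0.50, 10.0),    # likely, modest payoff
    "charity_C": (0.90, 3.0),     # near-certain, small payoff
}

# Expected utility per marginal dollar = probability * utility-if-true.
expected = {name: p * u for name, (p, u) in options.items()}

# As long as the donation is small enough that marginal utility stays roughly constant,
# the whole budget goes to the single best option rather than being split.
best = max(expected, key=expected.get)
for name, eu in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {eu:.2f} expected utils per dollar")
print("Donate entire budget to:", best)
```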
[-]Eliezer Yudkowsky15y160

An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimations on.

Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.

What I'm trying to argue here is that if the cornerstone of your argumentation, if one of your basic tenets is the likelihood of exponentially evolving superhuman AI, although a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence

... (read more)
Reply
[-]JGWeissman15y190

Quantum Mechanics Sequence

Pluralistic Ignorance

Bystander Apathy

Scope Insensitivity

Reply
7Cyan15y
No bystander apathy here!
8thomblake15y
The relevant fallacy in 'Aristotelian' logic is probably false dilemma, though there are a few others in the neighborhood.
4NancyLebovitz15y
Probably black and white thinking.
3Jonathan_Graehl15y
I haven't done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of "cosmologists and quantum field theorists" think MWI is true. Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)
3wedrifid15y
Thanks for taking the time to give a direct answer. I enjoyed reading this and these replies will likely serve as useful comments to when people ask similar questions in the future.
5XiXiDu15y
Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who's already convinced, preferably of someone within the SIAI? That is part of what I call transparency and a foundational and reproducible corroboration of one's first principles. Awesome, I never came across this until now. It's not widely mentioned? Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risks posed by AI, is still missing. All this might be featured in the debate, hopefully with reference to substantial third-party research papers, I don't know yet. The whole point of the grey goo example was to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is possible at all without advanced nanotechnology. This is an open question and I'm inquiring about how exactly the uncertainties regarding these problems are accounted for in your probability estimations of the dangers posed by AI. What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means of development by which a quick transcendence might happen is circumstantial to the meaning of my question. Where are your probability estimations that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimations to balance the risks of a hard rapture with a somewhat controllable development? You misinterpreted my question. What I me
8wedrifid15y
Um... yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar. Evidence based? By which you seem to mean 'some sort of experiment'? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to 'reference to historical experimental outcomes'. You actually will need to look at 'consistent internal logic'... just make sure the consistent internal logic is well grounded on known physics. And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately now you are actually going to have to do the work and read them.
0XiXiDu14y
Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about "explosive" recursive self-improvement?
0timtyler14y
Some still seem sceptical - and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.
-1wedrifid14y
Not only is there evidence that smarter than human intelligence is possible; it is something that should be trivial given a vaguely sane reductionist model. Moreover, you specifically have been given evidence on previous occasions when you have asked similar questions. What you have not been given and what are not available are empirical observations of smarter than human intelligences existing now. That is evidence to which you would not be entitled.
3[anonymous]14y
Please provide a link to this effect? (Going off topic, I would suggest that a "show all threads with one or more comments by users X, Y and Z" or "show conversations between users X and Y" feature on LW might be useful.) (First reply below)
0[anonymous]14y
Please provide such a link. (Going off-topic, I additionally suggest that a "show all conversations between user X and user Y" feature on Less Wrong might be useful.)
-1wedrifid14y
It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere and my android does not seem to have keys for brackets or greater than symbols. workarounds welcome.
2jimrandomh14y
The solution varies by model, but on mine, alt-shift-letter physical key combinations do special characters that aren't labelled. You can also use the on-screen keyboard, and there are more onscreen keyboards available for download if the one you're currently using is badly broken.
-1wedrifid14y
SwiftKey x beta Brilliant!
0[anonymous]14y
OK, can I have my quote(s) now? It might just be hidden somewhere in the comments to this very article.
0[anonymous]14y
Can you copy and paste characters?
0XiXiDu15y
Uhm...yes? It's just something I would expect to be integrated into any probability estimates of suspected risks. More here. Check the point that you said is a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to make detailed references and summaries about their decision procedures and probability estimations transparently available, not hidden beneath thousands of posts and comments. "It's somewhere in there, line 10020035, +/- a million lines...." is not transparency! That is, an organisation that's concerned with something taking over the universe and asks for your money. An organisation, I'm told, of which some members get nightmares just reading about evil AI...
7Rain15y
I think you just want a brochure. We keep telling you to read archived articles explaining many of the positions and you only read the comment where we gave the pointers, pretending as if that's all that's contained in our answers. It'd be more like him saying, "I have a bunch of good arguments right over there," and then you ignore the second half of the sentence.
3XiXiDu15y
I'm not asking for arguments. I know them. I donate. I'm asking for more now. I'm using the same kind of anti-argumentation that academics would use against your arguments. Which I've encountered myself a few times while trying to convince them to take a look at the inscrutable archives of posts and comment that is LW. What do they say? "I skimmed over it, but there were no references besides some sound argumentation, an internal logic.", "You make strong claims, mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for."
4wedrifid15y
Pardon my bluntness, but I don't believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness. For example if you already understood the arguments for, or basic explanation of why 'putting all your eggs in one basket' is often the rational thing to do despite intuitions to the contrary then why on earth would you act like you didn't?
3XiXiDu15y
Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it to all of the hundred babies but feed the strongest 10. Otherwise you'd end up having a hundred dead babies in which case you could as well have eaten the food yourself before wasting it like that. It's obvious, I don't see how someone wouldn't get this. I used that idiom to illustrate that given my preferences and current state of evidence I could as well eat all the food myself rather than wasting it on something I don't care to save or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real. I asked, are the babies real babies that need food and is the expected utility payoff of feeding them higher than eating the food myself right now? I'm starting to doubt that anyone actually read my OP...
2wedrifid15y
I know this is just a tangent... but that isn't actually the reason. Just to be clear, I'm not objecting to this. That's a reasonable point.
2XiXiDu15y
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I've missed the reason then. Seriously, I'd love to read up on it now. Here is an example of what I want:
1wedrifid15y
Good question. If not, there should be. It is just basic maths when handling expected utilities but it crops up often enough. Eliezer gave you a partial answer: ... but unfortunately only asked for a link for the 'scope insensitivity' part, not a link to a 'marginal utility' tutorial. I've had a look and I actually can't find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and biases in charity. Section 1.1.3 Diversification covers the issue well.
-1XiXiDu15y
That's another point. As I asked, what are the variables, where do I find the data? How can I calculate this probability based on arguments to be found on LW? This IS NOT sufficient to scare people up to the point of having nightmares and to ask them for most of their money.
5wedrifid15y
I'm not trying to be a nuisance here, but it is the only point I'm making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
2XiXiDu15y
I'm sorry, I perceived your comment to be mainly about decision making regarding charities. Which is completely marginal since the SIAI is the only charity concerned with the risk I'm inquiring about. Is the risk in question even real and does its likelihood justify the consequences and arguments for action? I inquired about the decisions making regarding charities because you claimed that what I stated about egg allocation is not the point being made. But I do not particularly care about that question as it is secondary.
4wedrifid15y
Leave aside SIAI specific claims here. The point Eliezer was making, was about 'all your eggs in one basket' claims in general. In situations like this (your contribution doesn't drastically change the payoff at the margin, etc) putting all your eggs in best basket is the right thing to do. You can understand that insight completely independently of your position on existential risk mitigation.
3Nick_Tarleton15y
Er, there's a post by that title.
3XiXiDu15y
Questionable. Is smarter-than-human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end. Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions), but how is it going to make use of the things it orders? I believe that self-optimization is prone to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to speak. Sound argumentation that gives no justification to extrapolate it to an extent that you could apply it to the shaky idea of a superhuman intellect coming up with something better than science and applying it again to come up... All those ideas about possible advantages of being an entity that can reflect upon itself to the extent of being able to pinpoint its own shortcomings are, again, highly speculative. This could be a disadvantage. Much of the rest is about the plateau argument: once you've got a firework you can go to the moon. Well yes, I've been aware of that argument. But that's weak; the claim that there are many hidden mysteries about reality that we have completely missed is itself highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and it's already based on the assumption that superhuman intelligence, not just faster intelligence, is possible. Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions. Assumption. Nice idea, but recursion does not imply performance improvement. How can he make any assumptions then about the possibility of improving them recursively, given this insight, to an extent that they empower an AI to transcend into superhuman realms? Did he just attribute intention
[-]gwern15y190

Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.

What would you accept as evidence?

Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with high-dimensional data?

Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?

Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak as Kasparov was beyond a new grandmaster. And it's not like Rybka or the other chess AIs will weaken with age.

Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?

Reply
7soreff15y
I think it at least possible that much-smarter-than human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturate quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: The number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans - not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point. One (admittedly weak) piece of evidence: a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system, and, if there is a knob to allow more effort being used on the optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same cpu time. Nonetheless, the results for this specific recursion are not FOOM. The evidence in the other direction are basically existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein, working on improving its own intelligence - it isn't clear if it could make a little progress or a great deal.
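A quick simulation of soreff's toy example — i.i.d. attempts from a fixed distribution, counting how many set a new record — illustrates the logarithmic saturation (a sketch with arbitrary parameters, not from the original comment):

```python
import math
import random

def count_records(n_attempts: int) -> int:
    """Count how many i.i.d. uniform draws beat the best result seen so far."""
    best, records = float("-inf"), 0
    for _ in range(n_attempts):
        x = random.random()
        if x > best:
            best, records = x, records + 1
    return records

# The expected number of records after n attempts is the harmonic number H_n ~ ln(n) + 0.577,
# so multiplying the effort by 100 adds only about 4.6 records on average.
for n in (100, 10_000, 1_000_000):
    avg = sum(count_records(n) for _ in range(20)) / 20
    print(f"n={n:>9}: avg records {avg:5.1f}   ln(n) + 0.577 = {math.log(n) + 0.577:5.1f}")
```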
4gwern15y
This goes for your compilers as well, doesn't it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.
4Sniffnoy15y
Can you provide details / link on this?
6gwern15y
I should've known someone would ask for the cite rather than just do a little googling. Oh well. Turns out it wasn't a radio, but a voice-recognition circuit. From http://www.talkorigins.org/faqs/genalg/genalg.html#examples :
0Aron215y
The analogy that AGI can be to us as we are to chimps. This is the part that needs the focus. We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones. What I see is that in the domain of problems for which human level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever. Certainly the second pawn is massively harder than the first. It's the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures). BTW, is ELO supposed to have that kind of linear interpretation?
7gwern15y
Yes, this is the important part. Chimps lag behind humans in 2 distinct ways - they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key, not recursive at all. (Kind.) What can we do with this distinction? How does it apply to my three examples? O RLY? Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of 'no FIDE grandmaster will lose a 2-pawns-odds chess match(s) to a computer by 2050'? I'm not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn't show me any warning signs - ELO point differences are supposed to reflect probabilistic differences in winning, or a ratio, and so the absolute values shouldn't matter. I think.
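For what it's worth, the standard Elo model supports that reading: only rating differences matter, because the expected score is a logistic function of the difference. A small sketch using the Kasparov and Rybka numbers quoted above:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win=1, draw=0.5, loss=0) for player A under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Only the difference matters: shifting both ratings by the same amount changes nothing.
print(elo_expected_score(2851, 3265))              # peak Kasparov vs. Rybka: ~0.08
print(elo_expected_score(2851 + 100, 3265 + 100))  # same gap, same expected score
print(elo_expected_score(2500, 2851))              # fresh grandmaster vs. peak Kasparov: ~0.12
```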
5CarlShulman15y
This is a possibility (made more plausible if we're talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it's greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
3gwern14y
It seems that whether or not it's supposed to, in practice it does. From the just released "Intrinsic Chess Ratings", which takes Rybka and does exhaustive evaluations (deep enough to be 'relatively omniscient') of many thousands of modern chess games; on page 9:
-1XiXiDu15y
You are getting much closer than any of the commenters before you in providing some other form of evidence to substantiate one of the primary claims here. You have to list your primary propositions on which you base further argumentation, from which you draw conclusions and which you use to come up with probability estimations stating risks associated with former premises. You have to list these main principles so anyone who comes across claims of existential risks and a plea for donations can get an overview. Then you have to provide the references you listed above, if you believe they give credence to the ideas, so that people see that all you say isn't made up but based on previous work and evidence by people that are not associated with your organisation. No, although I have heard about all of the achievements I'm not yet able to judge if they provide evidence supporting the possibility of strong superhuman AI, the kind that would pose an existential risk. Although in the case of chess I'm pretty much of the opinion that this is no strong evidence, as it is not sufficiently close to being able to overpower humans to an extent of posing an existential risk when extrapolated into other areas. It would be good if you could provide links to the mentioned examples. Especially the genetic algorithm (ETA: Here.). It is still questionable however if this could lead to the stated recursive improvements or will shortly hit a limit. To my knowledge genetic algorithms are merely used for optimization, based on previous design spaces, and are not able to come up with something unique to the extent of leaving their design space. Whether sophisticated machine learning algorithms are able to discover valuable insights beyond statistical inferences within higher-dimensional data-sets is a very interesting idea though. As I just read, the 2009 prize of the Netflix contest was given to a team that achieved a 10.05% improvement over the previous algorithm. I'll have to examine this furth
1gwern15y
I am reluctant because you seem to ask for magical programs when you write things like: I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (from nothing to playing Pac-Man). But then it occurred to me that anyone invoking a phrase like 'leaving their design space' might then just say 'oh, those designs and models can only model Turing machines, and so they're stuck in their design space'.
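For readers following the AIXI pointer: Hutter's agent is defined (roughly, restated here from memory as a sketch rather than an authoritative statement) by an expectimax over all programs q for a universal Turing machine U, weighted by program length ℓ(q), with actions a_i, observation-reward pairs o_i r_i, and horizon m:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over every program consistent with the history is why AIXI is "as general as an intelligence can be" under a computability assumption: every computable environment is included.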
-2XiXiDu15y
I've no idea (formally) of what a 'design space' actually is. This is a tactic I'm frequently using against strongholds of argumentation that are seemingly based on expertise. I use their own terminology and rearrange it into something that sounds superficially clever. I like to call it a Chinese room approach. Sometimes it turns out that all they were doing was to sound smart but cannot explain themselves when faced with their own terminology set to inquire about their pretences. I thank you however for taking the time to actually link to further third party information that will substantiate given arguments for anyone not trusting the whole of LW without it.
1gwern15y
I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from 'interested open-minded outsider' to 'troll'.)
6XiXiDu15y
It works against cults and religion in general. I don't argue with them about their religion being not even wrong but rather accept their terms and highlight inconsistencies within their own framework by going as far as I can with one of their arguments and by inquiring about certain aspects based on their own terminology until they are unable to consistently answer or explain where I am wrong. This also works with the anti-GM-food bunch, data protection activists, hippies and many other fringe groups. For example, the data protection bunch concerned with information disclosure on social networks or Google Streetview. Yes, I say, that's bad, burglars could use such services to check out your house! I wonder what evidence there is for the increase of burglary in the countries where Streetview has already been available for many years? Or I tell the anti-gun lobbyists how I support their cause. It's really bad if anyone can buy a gun. Can you point me to the strong correlation between gun ownership and firearm homicides? Thanks.
8CarlShulman15y
Any specific scenario is going to have burdensome details, but that's what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money. In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication to use humans for manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, deceived (intelligence agencies regularly do these things) to perform some tasks. An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.
1XiXiDu15y
Thanks, yes I know about those arguments. One of the reasons I'm actually donating and accept AI to be one existential risk. I'm inquiring about further supporting documents and transparency. More on that here, especially check the particle collider analogy.
1CarlShulman15y
With respect to transparency, I agree about a lack of concise, exhaustive, accessible treatments. Reading some of the linked comments about marginal evidence from hypotheses I'm not quite sure what you mean, beyond remembering and multiplying by the probability that particular premises are false. Consider Hanson's "Economic Growth Given Machine Intelligence". One might support it with generalizations from past population growth in plants and animals, from data on capital investment and past market behavior and automation, but what would you say would license drawing probabilistic inferences using it?
0Unknowns15y
Note that such methods might not result in the destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)
2CarlShulman15y
What guarantee?
-1Unknowns15y
With a guarantee backed by $1000.
2CarlShulman15y
The linked bet doesn't reference "a week," and the "week" reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity. That bet seems underspecified. Does attention to "Friendliness" mean any attention to safety whatsoever, or designing an AI with a utility function such that it's trustworthy regardless of power levels? Is "superhuman" defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.
0Unknowns15y
A few comments later on the same comment thread someone asked me how much time was necessary, and I said I thought a week was enough, based on Eliezer's previous statements, and he never contradicted this, so it seems to me that he accepted it by default, since some time limit will be necessary in order for someone to win the bet. I defined superhuman to mean that everyone will agree that it is more intelligent than any human being existing at that time. I agree that the question of whether there is attention to Friendliness might be more problematic to determine. But "any attention to safety whatsoever" seems to me to be clearly stretching the idea of Friendliness-- for example, someone could pay attention to safety by trying to make sure that the AI was mostly boxed, or whatever, and this wouldn't satisfy Eliezer's idea of Friendliness.
1CarlShulman15y
Ah. So an AI could, e.g. be only slightly superhuman and require immense quantities of hardware to generate that performance in realtime.
0Unknowns15y
Right. And if this scenario happened, there would be a good chance that it would not be able to foom, or at least not within a week. Eliezer's opinion seems to be that this scenario is extremely unlikely, in other words that the first AI will already be far more intelligent than the human race, and that even if it is running on an immense amount of hardware, it will have no need to acquire more hardware, because it will be able to construct nanotechnology capable of controlling the planet through actions originating on the internet as you suggest. And as you can see, he is very confident that all this will happen within a very short period of time.
3MichaelVassar15y
Have you tried asking yourself non-rhetorically what an AI could do without MNT? That doesn't seem to me to be a very great inferential distance at all.
0XiXiDu15y
I believe that in this case an emulation would be the bigger risk because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power. A purely artificial intelligence would be too alien and therefore would have a hard time acquiring the necessary power to transcend to a superhuman level without someone figuring out what it does, either by its actions or by looking at its code. It would also likely not have the intention to increase its intelligence infinitely anyway. I just don't see that AGI implies self-improvement beyond learning what it can while staying within the scope of its resources. You'd have to deliberately implement such an intention. It would generally require its creators to solve a lot of problems much more difficult than limiting its scope. That is why I do not see runaway self-improvement as a likely failure mode. I could imagine all kinds of scenarios indeed. But I also have to assess their likelihood given my epistemic state. And my conclusion is that a purely artificial intelligence wouldn't and couldn't do much. I estimate the worst-case scenario to be on par with a local nuclear war.
4MichaelVassar15y
I simply can't see where the above beliefs might come from. I'm left assuming that you just don't mean the same thing by AI as I usually mean. My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.
1XiXiDu15y
And I can't see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition, what would make them update in favor of your definition? That you thought about it for more than a decade now? I perceive serious flaws in any of the replies I got so far in under a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought about. If that kind of intelligence is as likely as other risks then it doesn't matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability. There already are many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, people other than me will scrutinize everything you ever wrote. And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, wrote the following: Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written but do not showcase some deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn't sufficient to prove that he has enough expertise on the topic of AI to make me believe him regardless of any antipredictions being made that weaken the expected risks associated with AI. There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically. If you would at least let some experts take a look at your work and assess its effectiveness and general potential.
5timtyler15y
Douglas Hofstadter and Daniel Dennett both seem to think these issues are probably still far away. ... (* http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter)
3timtyler15y
I'm not sure who is doing that. Being hit by an asteroid, nuclear war and biological war are other possible potentially major setbacks. Being eaten by machines should also have some probability assigned to it - though it seems pretty challenging to know how to do that. It's a bit of an unknown unknown. Anyway, this material probably all deserves some funding.
3timtyler15y
The short-term goal seems more modest - prove that self-improving agents can have stable goal structures. If true, that would be fascinating - and important. I don't know what the chances of success are, but Yudkowsky's pitch is along the lines of: look this stuff is pretty important, and we are spending less on it than we do on testing lipstick. That's a pitch which it is hard to argue with, IMO. Machine intelligence research does seem important and currently-underfunded. Yudkowsky is - IMHO - a pretty smart fellow. If he will work on the problem for $80K a year (or whatever) it seems as though there is a reasonable case for letting him get on with it.
3Rain15y
I'm not sure you're looking at the probability of other extinction risks with the proper weighting. The timescales are vastly different. Supervolcanoes: one every 350,000 years. Major asteroid strikes: one every 700,000 years. Gamma ray bursts: hundreds of millions of years, etc. There's a reason the word 'astronomical' means huge beyond imagining. Contrast that with the current human-caused mass extinction event: 10,000 years and accelerating. Humans operate on obscenely fast timescales compared to nature. Just with nukes we're able to take out huge chunks of Earth's life forms in 24 hours, most or all of it if we detonated everything we have in an intelligent, strategic campaign to end life. And that's today, rather than tomorrow. Regarding your professional AGI researcher and recursive self-improvement, I don't know, I'm not an AGI researcher, but it seemed to me that a prerequisite to successful AGI is an understanding and algorithmic implementation of intelligence. Therefore, any AGI will know what intelligence is (since we do), and be able to modify it. Once you've got a starting point, any algorithm that can be called 'intelligent' at all, you've got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore's Law and computer chips.
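Converting those recurrence intervals into per-century probabilities (a rough Poisson-style calculation using only the figures quoted above) makes the timescale contrast concrete:

```python
import math

# Mean recurrence intervals in years, taken from the figures above.
intervals = {"supervolcano eruption": 350_000, "major asteroid strike": 700_000}

# Probability of at least one event in the next 100 years, assuming a constant rate.
for name, years in intervals.items():
    p_century = 1.0 - math.exp(-100.0 / years)
    print(f"{name}: ~{p_century:.3%} chance per century")
```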
1XiXiDu15y
That might be true. But most of them have one solution that demands research in many areas. Space colonization. It is true that intelligent systems, if achievable in due time, play a significant role here. But not an exceptional role if you disregard the possibility of an intelligence explosion, of which I am very skeptical. Further, it appears to me that donating to the SIAI would rather impede research on such systems, given their position that such systems themselves pose an existential risk. Therefore, at the moment, the possibility of risks from AI is partially being outweighed to the extent that the SIAI should be supported yet doesn't hold an exceptional position that would necessarily make it the one charity with the highest expected impact per donation. I am unable to pinpoint another charity at the moment, e.g. space elevator projects, because I haven't looked into it. But I do not know of any comparison analysis; although you and many other people claim to have calculated it, nobody has ever published their efforts. As you know, I am unable to do such an analysis myself at this point as I am still learning the math. But I am eager to get the best information by means of feedback anyhow. Not intended as an excuse of course. That would surely be a very good argument if I was able to judge it. But can intelligence be captured by a discrete algorithm or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution? Also, can algorithms that could be employed in real-world scenarios be sped up to have an effect that would warrant superhuman power? Take photosynthesis: could that particular algorithm be improved considerably, to an extent that it would be vastly better than the evolutionary one? Further, will such improvements be accomplishable fast enough to outpace human progress or the adaptation of the given results? My problem is that I do not believe that intelligence is fathomable as a solution tha
5jimrandomh15y
This seems backwards - if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I'd also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time. It seems to me that, if we reach a point where we can't improve an intelligence any further, it won't be because it's fundamentally impossible to improve, but because we've hit diminishing returns. And there's really no way to know in advance where the point of diminishing returns will be. Maybe there's one breakthrough point, after which it's easy until you get to the intelligence of an average human, then it's hard again. Maybe it doesn't become difficult until after the AI's smart enough to remake the world. Maybe the improvement is gradual the whole way up. But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful. In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.
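A small sketch of the meta-algorithm point, wrapping a weak base learner with scikit-learn's bagging and boosting implementations (assumes scikit-learn is installed; the dataset and parameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
base = DecisionTreeClassifier(max_depth=3, random_state=0)  # deliberately weak base learner

models = {
    "base tree": base,
    "bagging (100 trees)": BaggingClassifier(base, n_estimators=100, random_state=0),
    "boosting (100 rounds)": AdaBoostClassifier(base, n_estimators=100, random_state=0),
}

# Both meta-algorithms trade extra CPU time (100 fits instead of one) for accuracy.
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```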
3NancyLebovitz15y
The AI will be much bigger than a virus. I assume this will make propagation much harder.
2jimrandomh15y
Harder, yes. Much harder, probably not, unless it's on the order of tens of gigabytes; most Internet connections are quite fast.
0timtyler15y
Anything could be possible - though the last 60 years of the machine intelligence field are far more evocative of the "blood-out-of-a-stone" model of progress.
-1timtyler15y
Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world's security services and law-enforcement agencies, though.
2jimrandomh15y
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.

Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it's because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.

Maybe the state of computer security will be better in 20 years, and this won't be as much of a risk anymore. I certainly hope so. But we can't count on it.
-10timtyler15y
5Rain15y
Thank you for continuing to engage my point of view, and for offering your own. That's an interesting hypothesis which easily fits into my estimated 90+ percent bucket of failure modes. I've got all kinds of such events in there, including: there's no way to understand intelligence, there's no way to implement intelligence in computers, friendliness isn't meaningful, CEV is impossible, they don't have the right team to achieve it, hardware will never be fast enough, powerful corporations or governments will get there first, etc. My favorite is: no matter whether it's possible or not, we won't get there in time; basically, that it will take too long to be useful.

I don't believe any of them, but I do think they have solid probabilities which add up to a great amount of difficulty. But the future isn't set; they're just probabilities, and we can change them. I think we need to explore this as much as possible, to see what the real math looks like, to see how long it takes, to see how hard it really is. Because the payoffs, or the results of failure, are in that same realm of 'astronomical'.
1Nick_Tarleton15y
A somewhat important correction: To my knowledge, SIAI does not actually endorse neglecting all potential x-risks besides UFAI. (Analysis might recommend discounting the importance of fighting them head-on, but that analysis should still be done when resources are available.)
1timtyler15y
Not all of them - most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones - and so on. It probably won't fix the speed of light limit, though.
0JoshuaZ15y
What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
7laakeus15y
Violence has been declining on (pretty much) every timescale: Steven Pinker: Myth of Violence. I think one could argue that this is because of the greater collective intelligence of the human race.
5jimrandomh15y
War won't be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.
0JoshuaZ15y
Yes, that makes sense, but in context I don't think that's what was meant, since Tim is one of the people here who is more skeptical of that sort of result.
0timtyler15y
Tim on "one big organism":
* http://alife.co.uk/essays/one_big_organism/
* http://alife.co.uk/essays/self_directed_evolution/
* http://alife.co.uk/essays/the_second_superintelligence/
1JoshuaZ15y
Thanks for clarifying (here and in the other remark).
4shokwave15y
War has already been solved to some extent by intelligence (negotiations and diplomacy significantly decreased instances of war), hunger has been solved in large chunks of the world by intelligence, energy limits have been solved several times by intelligence, resource shortages ditto, intelligence has made a good first attempt at space travel (the moon is quite far away), and intelligence has made huge bounds towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc). This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.
2nshepperd15y
I have my doubts about war, although I don't think most wars really come down to conflicts of terminal values. I'd hope not, anyway. However, as for the rest: if they're solvable at all, intelligence ought to be able to solve them. Solvable means there exists a way to solve them. Intelligence is to a large degree simply "finding ways to get what you want". Do you think energy limits really couldn't be solved by simply producing, through thought, working designs for safe and efficient fusion power plants? ETA: ah, perhaps replace "intelligence" with "sufficient intelligence". We haven't solved all these problems already in part because we're not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we would obviously achieve it faster.
2TheOtherDave15y
As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)

But I'm interested in the context you imply (of humans becoming more intelligent). My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else. I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)

So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that's true, everyone else gets to decide whether we'd rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).

So, yes, if we choose to incentivize wars, then we'll keep getting wars. It follows from this scenario that war is the least important problem we face, so we should be OK with that. Conversely, if it turns out that war really is an important problem to solve, then I'd expect fewer wars.
2timtyler15y
I was about to reply - but jimrandomh said most of what I was going to say already - though he did so using that dreadful "singleton" terminology, spit. I was also going to say that the internet should have got the 2010 Nobel peace prize.
1timtyler15y
Is that really the idea? My impression is that the SIAI think machines without morals are dangerous, and that until there is more machine morality research, it would be "nice" if progress in machine intelligence was globally slowed down. If you believe that, then any progress - including constructing machine toddlers - could easily seem rather negative.
0timtyler15y
Darwinian gradualism doesn't forbid evolution taking place rapidly. I can see evolutionary progress accelerating over the course of my own lifespan - which is pretty incredible considering that evolution usually happens on a scale of millions of years. More humans in parallel can do more science and engineering. The better their living standard, the more they can do. Then there are the machines... Maybe some of the pressures causing the speed-up will slack off - but if they don't then humanity may well face a bare-knuckle ride into inner-space - and fairly soon.
-1timtyler15y
Re: toddler-level machine intelligence. Most toddlers can't program, but many teenagers can. The toddler is a step towards the teenager - and teenagers are notorious for being difficult to manage.
3timtyler15y
The usual cite given in this area is the paper The Basic AI Drives. It suggests that open-ended goal-directed systems will tend to improve themselves - and to grab resources to help them fulfill their goals - even if their goals are superficially rather innocent-looking and make no mention of any such thing. The paper starts out like this:
1hairyfigment15y
Well, some older posts had a guy praising "goal system zero", which meant a plan to program an AI with the minimum goals it needs to function as a 'rational' optimization process and no more. I'll quote his list directly: This seems plausible to me as a set of necessary conditions. It also logically implies the intention to convert all matter the AI doesn't lay aside for other purposes (of which it has none, here) into computronium and research equipment. Unless humans for some reason make incredibly good research equipment, the zero AI would thus plan to kill us all. This would also imply some level of emulation as an initial instrumental goal. Note that sub-goal (1) implies a desire not to let instrumental goals like simulated empathy get in the way of our demise.
0timtyler15y
Perhaps, though if we can construct such a thing in the first place we may be able to deep-scan its brain and read its thoughts pretty well - or at least see if it is lying to us and being deceptive. IMO, the main problem there is with making such a thing in the first place before we have engineered intelligence. Brain emulations won't come first - even though some people seem to think they will.
0shokwave15y
Seconding this question.
1wedrifid15y
Writing the word 'assumption' has its limits as a form of argument. At some stage you are going to have to read the links given.
1XiXiDu15y
This was a short critique of one of the links given. The first one I skimmed over. I wasn't impressed yet - at least not to the extent of having nightmares when someone tells me about bad AIs.
0[anonymous]15y
I like how Nick Bostrom put it re: probabilities and interesting future phenomena:
5JGWeissman15y
* Index to the FOOM debate
* Antipredictions
[-]Mitchell_Porter15y220

Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.

Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, rather than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, thou... (read more)

6DSimon15y
Voted up for this argument. I think the SIAI would be well-served for accruing donations, support, etc. by emphasizing this point more. Space organizations might similarly argue: "You might think our wilder ideas are full of it, but even if we can't ever colonize Mars, you'll still be getting your satellite communications network."
1[anonymous]15y
I hadn't thought of it this way, but on reflection of course it's true.
[-]Kaj_Sotala15y130

Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

This claim can be broken into two separate parts:

  1. Will we have human-level AI?
  2. Once we have human-level AI, will it develop to become superhuman AI?

For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much of an actual interest in whole brain emulations, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering of the brain as being within reach. If we are successful at reverse engineering the brain, then AI is a natural result.
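The Moore's-law framing here can be made concrete with a toy back-of-the-envelope (my own sketch; the compute requirement, the currently available compute, and the doubling time below are stand-in assumptions, not figures from the Sandberg & Bostrom roadmap).

```python
# Toy extrapolation: years until assumed WBE compute requirements are met.
# All numbers are illustrative assumptions, not roadmap estimates.
import math

current_flops = 1e15        # assumed compute available to a large project today
required_flops = 1e22       # assumed compute needed for one real-time emulation
doubling_time_years = 1.5   # assumed doubling time for available compute

doublings_needed = math.log2(required_flops / current_flops)
years_until_wbe = doublings_needed * doubling_time_years
print(f"{doublings_needed:.1f} doublings, roughly {years_until_wbe:.0f} years")
# With these particular assumptions: about 23 doublings, roughly 35 years.
```

The point is only that the conclusion is very sensitive to the assumed requirement and growth rate; shifting either one moves the date by years to decades.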

As for two, as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people ... (read more)

4JoshuaZ15y
Do you have a citation for this? You can get certain biochemical compounds synthesized for you (there's a fair bit of a market for DNA synthesis) but that's pretty far from synthesizing microorganisms.
4Kaj_Sotala15y
Right, sorry. I believe the claim (which I heard from a biologist) was that you can get DNA synthesized for you, and in principle an AI or anyone who knew enough could use those services to create their own viruses or bacteria (though no human yet has that required knowledge). I'll e-mail the person I think I heard it from and ask for a clarification.
[-]Paul Crowley15y130

Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?

8[anonymous]15y
My charitable reading is that he is arguing that there will be other people like him, and that if SIAI wishes to continue growing, there needs to be easily digestible material.
7[anonymous]15y
From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time consuming, and most of them have multiple links to other posts you are supposed to have already read, creating confusion if you should happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren't comfortable swallowing all of that at once. Indeed, the "why should I buy all this?" question has popped into my head many times while reading. Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can't take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.
[-]Paul Crowley15y100

Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

4[anonymous]15y
Agreed--criticism of this sort vaguely reminds me of criticism of evolution in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I'm not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW's most treasured posts. (There goes my afternoon...)
3Paul Crowley15y
Of course, substantive criticism of specific arguments is always welcome.
3XiXiDu15y
My primary point was to inquire about the foundation and credibility of the named chain of reasoning. Is it a coherent internal logic that is reasoning about itself, or is it based on firm ground? Take the following example: a recursively evolving AGI quickly reaches a level that can be considered superhuman. As no advanced nanotechnology was necessary for its construction, it is so far awfully limited in what it can accomplish given its vast and fast intellect. Thus it solves all open problems associated with advanced nanotechnology and secretly mails its solutions to a researcher. This researcher is very excited and consequently builds a corporation around this new technology. Later the AGI buys the stock of that company and plants a front man. Due to some superhuman social engineering it finally obtains control of the technology...

At this point we are already deep into subsequent reasoning about something shaky that is at the same time used as evidence for the very reasoning involving it. Taking a conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. First this conclusion has to yield marginal evidence of its feasibility; only then are you able to create a further hypothesis about further consequences. Otherwise you are making estimations within a framework that is itself not based on firm ground.

The gist of what I was trying to say is not to subsequently base conclusions and actions on other conclusions which themselves do not bear evidence. I was inquiring about the supportive evidence at the origin of your complex multi-step extrapolations, argued to be from inductive generalizations. If there isn't any, what difference is there between writing fiction and such extrapolations? I've read and heard enough to be in doubt, since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor…
3HughRistik15y
Disagree. If you are asking people for money (and they are paying you), the burden is on you to provide justification at multiple levels of detail to your prospective or current donors. But, but... then you'll have to, like, repeat yourself a lot! No shit. If you want to change the world, be prepared to repeat yourself a lot.
6HughRistik15y
If so... is that request bad? If you are running a program where you are trying to convince people on a large scale, then you need to be able to provide overviews of what you are saying at various levels of resolution. Getting annoyed (at one of your own donors!) for such a request is not a way to win. Edit: At the time, Eliezer didn't realize that XiXiDu was a donor.
[-]Wei Dai15y160

Getting annoyed (at one of your own donors!) for such a request is not a way to win.

I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?

I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.

2HughRistik15y
Me too. The reason I upvoted this post was because I hoped it would stimulate higher quality discussion (whether complimentary, critical, or both) of SIAI in the future. I've been hoping to see such a discussion on LW for a while to help me think through some things.
4Paul Crowley15y
In other words, you see XiXiDu's post as the defector in the Asch experiment who chooses C when the group chooses B but the right answer is A?
1cata15y
To be fair, I don't think XiXiDu expected special treatment for being a donor; he didn't even mention it until Eliezer basically claimed that he was being insincere about his interest. (EDIT: Thanks to Wei Dai, I see he did mention it. No comment on motivations, then.) I think that Eliezer's statement is not an expression of a desire to give donors special treatment in general; it's a reflection of the fact that, knowing Xi is a donor and proven supporter of SIAI, he then ought to give Xi's criticism of SIAI more credit for being sincere and worth addressing somehow. If Xi were talking about anything else, it wouldn't be relevant.
1Wei Dai15y
He mentioned it earlier in a comment reply to Eliezer, and then again in the post itself:
[-]orthonormal15y130

These are reasonable questions to ask. Here are my thoughts:

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

  • The likelihood of exponential growth versus a slow development over many centuries.
  • That it is worth it to spend most on a future whose likelihood I cannot judge.

These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding ... (read more)

1[anonymous]15y
What do you mean by plausible in this instance? Not currently refuted by our theories of intelligence or chemistry? Or something stronger.
0orthonormal15y
Oh yeah, oops, I meant to say "possible in our physics". Edited accordingly.
-3XiXiDu15y
Where is the evidence that supports the claims that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply based on the mere possibility that it might be physically possible? Not even your master believes this.

Yes, once they turned themselves into superhuman intelligences? Isn't this what Kurzweil believes? No risks from superhuman AI because we'll go the same way anyway?

Yep. Yes, but to allocate all my eggs to them? Remember, they ask for more than simple support. I want to maximize my expected survival. If there are medium-term risks that could kill me with a higher probability than AI will in the future, that is as important as the AI killing me later.

Highly interesting. Sadly it is not a priority. I could, for example, start my own campaign to make people aware of possible risks. I could talk to people. I bet there's a lot more you smart people could do besides supporting EY.

The SIAI, and especially EY, does not have the best reputation within the x-risk community, and I bet that's the same in the AI community. That might very well be the case given how they handle public relations. He wasn't the first smart person who came to these conclusions. And he sure isn't charismatic.

I've read and heard enough to be in doubt, since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing. And if you feel stupid because I haven't read hundreds of articles to find a single piece of third-party evidence in favor of the outstanding premises used to ask for donations, then you should feel stupid.
1kodos9615y
Since I've now posted several comments on this thread defending and/or "siding with" XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here. Although there are a couple pieces of the SIAI thesis that I'm not yet 100% sold on, I don't reject it in its entirety, as it now sounds like XiXiDu does - I just want to hear some more thorough explanation on a couple of sticking points before I buy in. Also, charisma is in the eye of the beholder ;)
6XiXiDu15y
I think I should say more about this. That EY has no charisma is, I believe, a reasonable estimation. Someone who says of himself that he's not neurotypical likely isn't a very appealing person in the eyes of the average person. I have also seen plenty of evidence in the form of direct comments about EY showing that many people do not like him personally.

Now let's examine whether I am hostile to EY and his movement. First, a comment I made regarding Michael Anissimov's 26th birthday. I wrote:

Let's also examine my opinion about Eliezer Yudkowsky:

* Here I suggest EY to be the most admirable person.
* When I recommended reading Good and Real to a professional philosopher I wrote, "Don't know of a review, a recommendation by Eliezer Yudkowsky as 'great' is more than enough for me right now."
* Here is a long discussion with some physicists in which I try to defend MWI by linking them to EY's writings. Note: it is a backup, since I deleted my comments there after being angered by their hostile tone.

There is a lot more which I'm too lazy to look up now. You can check it for yourself: I'm promoting EY and the SIAI all the time, everywhere. And I'm pretty disappointed that rather than answering my questions or linking me to some supportive background information, I mainly seem to be dealing with a bunch of puffed-up adherents.
2XiXiDu15y
Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments? Here are the first two replies from academics I wrote to about this post, addressing EY:
3kodos9615y
I have been pointing that out as well - although I would describe his reactions more as "defensive" than "antagonistic". Regardless, it seemed to be out of character for Eliezer. Do the two of you have some kind of history I'm not aware of?
[-]Wei Dai15y130

I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post- Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self- aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others

... (read more)
1XiXiDu15y
As I wrote in another comment, Eliezer Yudkowsky hasn't come up with anything unique. And there is no argument in saying that he's simply the smartest fellow around, since clearly other people have come up with the same ideas before him. And that was my question: why are they not signaling their support for the SIAI? Or, in case they don't know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse? It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody out to inquire about the reasons for the great silence within the group of those aware of a possible singularity but who nevertheless keep quiet? Maybe they know something you don't, or are you people so sure of your phenomenal intelligence?
[-]CarlShulman15y110

David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.

He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or unlikely short-term risks, may provoke hostile reactions against the field thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.

5XiXiDu15y
Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn't answer the primary question of my post. How do you people arrive at the estimations you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events? If all this were supposed to be mere philosophy, I wouldn't inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky, and calls for action. Is it reasonable to follow given the current state of evidence?
9CarlShulman15y
If you are a hard-core consequentialist altruist who doesn't balance against other less impartial desires you'll wind up doing that eventually for something. Peter Singer's "Famine, Affluence, and Morality" is decades old, and there's still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you're willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate. I'll have more to say on substance tomorrow, but it's getting pretty late. My tl;dr take would be that with pretty conservative estimates on total AI risk, combined with the lack of short term motives to address it (the threat of near-term and moderate scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is more motivated by the threat of civilization or country-wreckers than the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small scale wars; climate change benefits from visibly already happening and a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.
-7timtyler15y
-1Unknowns15y
http://www.overcomingbias.com/2007/02/what_evidence_i.html
6XiXiDu15y
Absence of evidence is not evidence of absence? There's simply no good reason to argue against cryonics. It is a chance in case of the worst-case scenario, and that chance is considerably higher than rotting six feet under.

Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, as the data that is already available does not justify the given claims in the first place. Anyway, I think I might write to some experts and all of the people mentioned in my post, if I'm not too lazy. I've already got one reply, from someone I'm not going to name right now. But let's first consider Yudkowsky's attitude when addressing other people:

Now the first of those people I contacted about it:

ETA: I was told the person I quoted above is stating full ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that the person might indeed not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.
[-]utilitymonster15y110

I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

3multifoliaterose15y
I have some sympathy for your remark. The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.
4utilitymonster15y
Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism. Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.) If you're sold on x-risk, are there some other candidate things that might have higher expected x-risk reduction on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)
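To spell out the arithmetic behind the 10^-18 figure, here is a minimal expected-value sketch (my own illustration; the count of potential future lives is a hypothetical stand-in, not a number taken from Bostrom's paper).

```python
# Toy expected-value calculation for a tiny reduction in existential risk.
# Both numbers are illustrative assumptions.
risk_reduction = 1e-18         # assumed reduction in probability of extinction
potential_future_lives = 1e35  # assumed number of future lives at stake

expected_lives_saved = risk_reduction * potential_future_lives
print(f"expected lives saved: {expected_lives_saved:.0e}")  # 1e+17 with these assumptions
```

The force of the argument, and much of the worry about it, comes entirely from how large the second factor is allowed to be.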
3multifoliaterose15y
I agree with you about what you say above. I personally believe that it is possible for individuals to decrease existential risk by more than 10^(-18) (though I know reasonable people who have at one time or other thought otherwise). Two points to make here:

(i) Though there's huge uncertainty in judging these sorts of things and I'm by no means confident in my view on this matter, I presently believe that SIAI is increasing existential risk through unintended negative consequences. I've written about this in various comments, for example here, here and here.

(ii) I've thought a fair amount about other ways in which one might hope to reduce existential risk. I would cite the promotion and funding of an asteroid strike prevention program as a possible candidate. As I discuss here, placing money in a donor-advised fund may be the best option.

I wrote out much more detailed thoughts on these points which I can send you by email if you want (just PM me) but which are not yet ready for posting in public.
3CarlShulman15y
I agree that 'poisoning the meme' is a real danger, and that SIAI has historically had both positives and negatives with respect to its reputational effects. My net expectation for it at the moment is positive, but I'll be interested to hear your analysis when it's ready. [Edit: apparently the analysis was about asteroids, not reputation.] Here's the Fidelity Charitable Gift Fund for Americans. I'm skeptical about asteroids in light of recent investments in that area and the technology curve, although there is potential for demonstration effects (good and bad) with respect to more likely risks.
9thomblake15y
It's hardly that. Moral Machines is basically a survey; it doesn't go in-depth into anything, but it can point you in the direction of the various attempts to implement robot / AI morality. And Eliezer is one of the people it mentions, so I'm not sure how that recommendation was supposed to advise against taking him seriously. (Moral Machines, page 192)
4thomblake15y
To follow up on this, Wendell specifically mentions EY's "friendly AI" in the intro to his new article in the Ethics and Information Technology special issue on "Robot ethics and human ethics".
6Rain15y
I am unable to take this criticism seriously. It's just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational? Edit: And I'm downvoted. You actually think a reply that's 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.
8HughRistik15y
The response by this individual seems like a summary, rather than an argument. The fact that someone writes a polemical summary of their views on a subject doesn't tell us much about whether their views are well-reasoned or not. A polemical summary is consistent with being full of hot air, but it's also consistent with having some damning arguments. Of course, to know either way, we would have to hear this person's actual arguments, which we haven't, in this case. Just because a certain topic is raised doesn't mean that it is discussed correctly.

The argument is that their thinking has some similarities to religion. It's a common rhetorical move to compare any alleged ideology to religion, even if that ideology is secular. The fact that EY displays an awareness of cultish dynamics doesn't necessarily mean that SIAI avoids them. Personally, I buy most of Eliezer's discussion that "every cause wants to become a cult," and I don't like the common practice of labeling movements as "cults." The net for "cult" is being drawn far too widely. Yet I wouldn't say that the use of the word "cult" means that the individual is engaging in bad reasoning. While I think "cult" is generally a misnomer, it's generally used as short-hand for a group having certain problematic social-psychological qualities (e.g. conformity, obedience to authority). The individual could well be able to back those criticisms up. Who knows. We would need to hear this individual's actual arguments to be able to evaluate whether the polemical summary is well-founded.

P.S. I wasn't the one who downvoted you.

Edit: I don't know the truth of these statements. The second one seems dubious, but it might not be meant to be taken literally ("Hello World" is a program). If Eliezer isn't a high school dropout, and has written major applications, then the credibility of this writer is lowered.
2NihilCredo15y
I believe you weren't supposed to engage that reply, which is a dismissal more than criticism. I believe you were supposed to take a step back and use it as a hint as to why the SIAI's yearly budget is 5 x 10^5 rather than 5 x 10^9 USD.
-1timtyler15y
Re: "How is it a cult?"

It looks a lot like an END OF THE WORLD cult. That is a well-known subspecies of cult - e.g. see:

* http://en.wikipedia.org/wiki/Doomsday_cult
* "The End of the World Cult" - http://www.youtube.com/watch?v=-3uDmyGq8Ok

The END OF THE WORLD acts as a superstimulus to human fear mechanisms - and causes caring people to rush to warn their friends of the impending DOOM - spreading the panic virally. END OF THE WORLD cults typically act by stimulating this energy - and then feeding from it. The actual value of p(DOOM) is not particularly critical for all this.

The net effect on society of the FEARMONGERING that usually results from such organisations seems pretty questionable. Some of those who become convinced that THE END IS NIGH may try and prevent it - but others will neglect their future plans, and are more likely to rape and pillage.

My "DOOM" video has more - http://www.youtube.com/watch?v=kH31AcOmSjs
5NancyLebovitz15y
Slight sidetrack: There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here-- that the earth will be engulfed when the sun becomes a red giant. That fate for the planet haunted me when I was a kid. People would say "But that's billions of years in the future" and I'd feel as though they were missing the point. It's possible that a more detailed discussion would have helped.... Recently, I've read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that people will have found a solution by then. This seems less intellectually honest than "The human race will be long gone anyway", but not awful. I think the most meticulous answer (aside from "that's the far future and there's nothing to be done about it now") is "that's so far in the future that we don't know whether people will be around, but if they are, they may well find a solution." [1] I count this as evidence for the Flynn Effect.
-7thomblake15y
1timtyler15y
Re: "haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI." This opinion sounds poorly researched - e.g.: "This document was created by html2html, a Python script written by Eliezer S. Yudkowsky." - http://yudkowsky.net/obsolete/plan.html
7XiXiDu15y
I posted that quote to put into perspective what others think of EY and his movement compared to what he thinks about them. Given that he thinks the same about those people, i.e. that their opinion isn't worth much and that the LW crowd is much smarter anyway, it highlights an important aspect of the almost non-existent cooperation between him and academia.
2jimrandomh15y
I don't think one possibly-trivial Python script (to which I am unable to find source code) counts as much evidence. It sets a lower bound, but a very loose one. I have no idea whether Eliezer can program, and my prior says that any given person is extremely unlikely to have real programming ability unless proven otherwise. So I assume he can't. He could change my mind by either publishing a large software project, or taking a standardized programming test such as a TopCoder SRM and publishing his score. EDIT: This is not meant to be a defense of obvious wrong hyperbole like "has never written a single computer program".
1timtyler15y
Eliezer has faced this criticism before and responded (somewhere!). I expect he will figure out coding. I got better at programming over the first 15 years I was doing it. So: he may also take a while to get up to speed. He was involved in this: http://flarelang.sourceforge.net/
1Unknowns15y
This isn't contrary to Robin's post (except what you say about cryonics.) Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.
[-]Vladimir_Nesov15y110

The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.

The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.

It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.

(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)

4multifoliaterose15y
As I've said elsewhere:

(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.

(b) Even if AGI deserves top priority, there's still the important question of how to go about working toward a FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they're making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start an organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.
2mkehrt15y
I really agree with both a and b (although I do not care about c). I am glad to see other people around here who think both these things.
0timtyler15y
Re: "There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created." The humans are going to be obliterated soon?!? Alas, you don't present your supporting reasoning.
1multifoliaterose15y
No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.
5Paul Crowley15y
I think AGI wiping out humanity is far more likely than nuclear war doing so (it's hard to kill everyone with a nuclear war) but even if I didn't, I'd still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.
0multifoliaterose15y
Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes? Several points:

(1) Nuclear war could still cause an astronomical waste in the form that I discuss here.

(2) Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

(3) If you satisfactorily address my point (a), points (b) and (c) will remain.
1timtyler15y
p(asteroid strike/year) is pretty low. Most are not too worried.
0multifoliaterose15y
The question is whether at present it's possible to lower existential risk more by funding and advocating FAI research than than it is to lower existential risk by funding and advocating an asteroid strike prevention program. Despite the low probability of an asteroid strike, I don't think that the answer to this question is obvious.
1timtyler15y
I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today's companies decide to duke it out - and today's companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.
0xamdam15y
that=asteroids? If yes, I highly doubt we need machines significantly more intelligent than existing military technology adapted for the purpose.
0timtyler15y
That would hardly be a way to "get out of the current vulnerable position as soon as possible".
0multifoliaterose15y
I agree that friendly intelligent machines would be a great asset to assuaging future existential risk. My current position is that at present, it's so unlikely that devoting resources to developing safe intelligent machines will substantially increase the probability that we'll develop safe intelligent machines that funding and advocating an asteroid strike program is likely to reduce existential risk more than funding and advocating FAI research is. I may be wrong, but would require a careful argument for the opposite position before changing my mind.
2timtyler15y
Asteroid strikes are very unlikely - so beating them is a really low standard, which IMO, machine intelligence projects do with ease. Funding the area sensibly would help make it happen - by most accounts. Detailed justification is beyond the scope of this comment, though.
1multifoliaterose15y
Assuming that an asteroid strike prevention program costs no more than a few hundred million dollars, I don't think that it's easy to do better at assuaging existential risk than funding such a program (though it may be possible). I intend to explain why I think it's so hard to lower existential risk through funding FAI research later on (not sure when, but within a few months). I'd be interested in hearing your detailed justification. Maybe you can make a string of top-level posts at some point.
0Vladimir_Nesov15y
Considering the larger problem statement, technically understanding what we value as opposed to actually building an AGI with those values, what do you see as distinguishing a situation where we are ready to consider the problem, from a situation where we are not? How can one come to such conclusion without actually considering the problem?
0multifoliaterose15y
I think that understanding what we value is very important. I'm not convinced that developing a technical understanding of what we value is the most important thing right now. I imagine that for some people, working on developing a technical understanding of what we value is the best thing that they could be doing. Different people have different strengths, and this leads to the utilitarian thing varying from person to person. I don't believe that the best thing for me to do is to study human values. I also don't believe that at the margin, funding researchers who study human values is the best use of money. Of course, my thinking on these matters is subject to change with incoming information. But to be convinced of what I think you're saying, I'd need to see a more detailed argument than the one that you've offered so far. If you'd like to correspond by email about these things, I'd be happy to say more about my thinking. Feel free to PM me with your email address.
1Vladimir_Nesov15y
I didn't ask about perceived importance (that has already taken feasibility into account), I asked about your belief that it's not a productive enterprise (that is the feasibility component of importance, considered alone), that we are not ready to efficiently work on the problem yet. If you believe that we are not ready now, but believe that we must work on the problem eventually, you need to have a notion of what conditions are necessary to conclude that it's productive to work on the problem under those conditions. And that's my question: what are those conditions, or how can one figure them out without actually attempting to study the problem (by a proxy of a small team devoted to professionally studying the problem; I'm not yet arguing to start a program on the scale of what's expended on study of string theory).
0multifoliaterose15y
I think that research of the type that you describe is productive. Unless I've erred, my statements above are statements about the relative efficacy of funding research of the type that you describe, rather than suggestions that such research has no value. I personally still feel the way that I did in June despite having read Fake Fake Utility Functions, etc. I don't think it very likely that we will eventually have to do research of the type that you describe to ensure an ideal outcome. Relatedly, I believe that at the margin, at the moment, funding other projects has higher expected value than funding research of the type that you describe. But I may be wrong, and I don't have an argument against your position. I think that this is something that reasonable people can disagree on. I have no problem with you funding, engaging in and advocating research of the type that you describe.

You and I may have a difference which cannot be rationally resolved in a timely fashion, on account of the information that we have access to being in a form that makes it difficult or impossible to share. Having different people fund different projects according to their differing beliefs about the world serves as some sort of real-world approximation to determining what should be funded by Bayesian averaging over all people's beliefs and then funding accordingly.

So, anyway, I think you've given satisfactory answers to how you feel about questions (a) and (b) raised in my comment. I remain curious how you feel about point (c).
2Vladimir_Nesov15y
I did answer to (c) before: any reasonable effort in that direction should start with trying to get SIAI itself to change or justify the way it behaves.
0multifoliaterose15y
Yes, I agree with you. I didn't remember that you had answered this question before. Incidentally, I did correspond with Michael Vassar. More on this to follow later.
1timtyler15y
p(machine intelligence) is going up annually - while p(nuclear holocaust) has been going down for a long time now. Neither is likely to obliterate civilisation - but machine intelligence could nonetheless be disruptive.
0Vladimir_Nesov15y
My comment was specifically about the importance of FAI, irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. On completion, it would also prevent all existential risks, where that is theoretically possible.
2multifoliaterose15y
Okay, we had this back-and-forth before; I didn't understand you then, and now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be small enough that it makes sense to focus on other existential risks for the moment. And my other points remain.
6Vladimir_Nesov15y
This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes; they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI would do more good. But that's not the question usually posed. Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn't work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.
6multifoliaterose15y
I agree that in general people should be more concerned about existential risk and that it's worthwhile to promote general awareness of existential risk. But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice. More to the point, I think that one of the major factors keeping people away from studying existential risk is the fact that many of the people who are interested in existential risk (including Eliezer) have low credibility on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I'm seriously concerned about this issue. If Eliezer can't explain why it's pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like "I believe that AGI will be developed over the next 100 years, but it's hard for me to express why, so it's understandable that people don't believe me" or "I'm uncertain as to whether or not AGI will be developed over the next 100 years." When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he's actively damaging the cause of existential risk.
0timtyler15y
Re: "AGI will be developed over the next 100 years" I list various estimates from those interested enough in the issue to bother giving probabality density functions at the bottom of: http://alife.co.uk/essays/how_long_before_superintelligence/
0multifoliaterose15y
Thanks, I'll check this out when I get a chance. I don't know whether I'll agree with your conclusions, but it looks like you've at least attempted to answer one of my main questions concerning the feasibility of SIAI's approach.
2CarlShulman15y
Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.
0timtyler15y
http://www.engagingexperience.com/2006/07/ai50_first_poll.html If the raw data was ever published, that might be of some interest.
0gwern15y
Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.
1CarlShulman15y
And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.
0gwern15y
Hm, I had not heard about that. SIAI doesn't seem to do a very good job of publicizing its projects or perhaps doesn't do a good job of finishing and releasing them.
0CarlShulman15y
It just started this month, at the same time as Summit preparation.
0timtyler15y
Re: "Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all." The marginal benefit of making machines smarter seems large - e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8 I don't really see that situation changing much anytime soon - there will probably be such marginal benefits for a long time to come.
1[anonymous]15y
Going slowly gives the option of figuring out some things about the space of possible AIs through experimentation, which might then constrain the possible ways to make them friendly. To use the tired flying metaphor: the type of stabilisation you need for flying depends on the method of generating lift. If fixed-wing aircraft are impossible, there is not much point in looking at ailerons and tails. If helicopters are possible, then we should be looking at tail rotors.
[-]utilitymonster15y100

I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.

  1. How much of your energy are you willing to spend on benefiting others, if the expected benefits to others will be very great? (It needn't be great for you to support SIAI.)
  2. Are you willing to pursue a diversified altruistic strategy if it saves fewer expected lives (it almost always will for donors giving less than $1 million or so)?
  3. Do you think mitigating x-risk is more important than giving to down-to-earth charities (GiveWell style)? (This will largely turn on how you feel about supporting causes with key probabilities that are tough to estimate, and how you feel about low-probability, high expected utility prospects.)
  4. Do you think that trying to negotiate a positive singularity is the best way to mitigate x-risk?
  5. Is any known organization likely to do better than SIAI in terms of negotiating a positive singularity (in terms of decreasing x-risk) on the margin?
  6. Are you likely to find an organization that beats SIAI in the future?

Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.
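
To make the tradeoff in question 3 concrete, here is a minimal sketch of the kind of expected-value comparison at issue. Every figure and variable name in it is a hypothetical placeholder chosen purely for illustration; none of these numbers come from the discussion above or from anyone's actual estimates.

```python
# Minimal sketch of the expected-value comparison behind question 3.
# All figures are hypothetical placeholders, not estimates anyone here endorses.

# A "down-to-earth" charity: well-understood probability, modest impact per donation.
p_success_charity = 0.95        # chance a fixed-size donation helps as intended
lives_if_success_charity = 0.5  # lives saved if it does

# An x-risk intervention: tiny, hard-to-estimate probability, enormous impact.
p_success_xrisk = 1e-9          # chance the same donation helps avert catastrophe
lives_if_success_xrisk = 7e9    # roughly everyone alive

ev_charity = p_success_charity * lives_if_success_charity
ev_xrisk = p_success_xrisk * lives_if_success_xrisk

print(f"Expected lives saved, charity: {ev_charity:.3f}")  # 0.475
print(f"Expected lives saved, x-risk:  {ev_xrisk:.3f}")    # 7.000
```

With these placeholder numbers the x-risk term dominates, but the ranking is driven entirely by p_success_xrisk, which is exactly the quantity that is "very difficult to estimate"; shrink it by a couple of orders of magnitude and the comparison flips.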

Reply
0XiXiDu15y
1. Maximal utility for everyone is a preference, but a secondary one. Most of all, in whatever I support, my personal short- and long-term benefit is a priority.
2. No.
3. Yes. (Edit)
4. Uncertain/Unable to judge.
5. Maybe, but I don't know of one. That doesn't mean that we shouldn't create one, if only because of the uncertainty about Eliezer Yudkowsky's possible unstated goals.
6. Uncertain/Unable to judge. See 5.
2utilitymonster15y
Given your answers to 1-3, you should spend all of your altruistic efforts on mitigating x-risk (unless you're just trying to feel good, entertain yourself, etc.). For 4, I shouldn't have asked whether you "think" something beats negotiating a positive singularity in terms of x-risk reduction. Better: is there some other fairly natural class of interventions (or list of potential examples) that, given your credences, has a higher expected value? What might such things be? For 5-6, perhaps you should think about what such organizations might be. Those interested in convincing XiXiDu might try listing some alternative candidates for the best x-risk-mitigating group and providing arguments that they don't do as well. As for me, my credences are highly unstable in this area, so info is appreciated on my part as well.
[-]XiXiDu15y80

Dawkins agrees with EY

Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and an intelligence explosion.

Reply
[-]JGWeissman15y180

I was disappointed watching the video relative to the expectations I had from your description.

Dawkins talked about recursion as in a function calling itself, as an example of the sort of thing that may be the final innovation that makes AI work, not an intelligence explosion resulting from recursive self-improvement.
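
For what it's worth, recursion in that sense is just the ordinary programming construct of a function calling itself. A minimal sketch (my own toy example, not anything from the video):

```python
def factorial(n: int) -> int:
    """Recursion in Dawkins's sense: a function that calls itself."""
    if n <= 1:
        return 1                     # base case ends the recursion
    return n * factorial(n - 1)      # the function invokes itself on a smaller input

print(factorial(5))  # 120
```

Recursive self-improvement, by contrast, refers to an AI redesigning and improving its own cognition; it is a claim about capability growth, not a language feature, which is the distinction being drawn here.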

Reply
0[anonymous]15y
True, I just wanted to appeal to the majority here. And it worked: 7 upvotes. Whereas this won't work, even if true.
[-]xamdam15y80

I was not sure whether to downvote this post for its epistemic value or upvote it for its instrumental value (stimulating good discussion).

I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-posting myself for this reason). I also donated to SIAI, because its value was once again validated to me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is to me evidence of high rationality) but counterproductive (not having enough accepted a... (read more)

Reply
7Will_Newsome15y
Are you talking about me? I believe I'm the only person that could sorta kinda be affiliated with the Singularity Institute who has dropped out of high school, and I'm a lowly volunteer, not at all representative of the average credentials of the people who come through SIAI. Eliezer demonstrated his superior rationality to me by never going to high school in the first place. Damn him.
6Alicorn15y
I dropped out of high school... to go to college early.
2xamdam15y
I finished high school early (16) by American standards, with college credit. By the more sane standards of Soviet education 16 is, well, standard (and you learn a lot more).
3xamdam15y
talking about this comment.
1XiXiDu15y
Here are a few comments where I expand on that particular point:
* Comment 1
* Comment 2
* Comment 3
* Comment 4
[-]EStokes15y80

I don't think this post was well-written, to say the least. I didn't even understand the tl;dr:

tldr; Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.

I don't see much precise expansion on this, except for MWI? There's a sequence on it.

And that is my problem. Given my current educational background and know

... (read more)
Reply
[-]kodos9615y170

I don't understand why this post has upvotes.

I think the obvious answer to this is that there are a significant number of people out there, even in the LW community, who share XiXiDu's doubts about some of SIAI's premises and conclusions, but who perhaps don't speak up with their concerns, either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed or looked down on.

Unfortunately, the tone of a lot of the responses to this thread leads me to believe that those motivated by the latter option may have been right to worry.

Reply
9Furcas15y
Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don't actually think it's any good.
[-]kodos9615y150

Yeah, I agree (no offense, XiXiDu) that it probably could have been better written, cited more specific objections, etc. But the core sentiment is one that I think a lot of people share, so it's an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.

Reply
3HughRistik15y
This was exactly my impression, also.
6Wei Dai15y
I think your upvote probably backfired, because (I'm guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.
2Eliezer Yudkowsky15y
Then you should have written your own version of it. Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes. People like XiXiDu are ones I've learned to classify as noisemakers who suck up lots of attention but who never actually change their minds enough to start pitching in, no matter how much you argue with them. My perceptual system claims to be able to classify pretty quickly whether someone is really trying or not, and I have no concrete reason to doubt it. I guess next time I'll try to remember not to reply at all. Everyone else, please stop upvoting posts that aren't good. If you're interested in the topic, write your own version of the question.
[-]XiXiDu15y180

What do you count as pitching in? That I donate, as I do, or that I promote you, LW and the SIAI all over the web, as I am doing?

You simply seem to take my post as a hostile attack rather than as the inquiry of someone who happened not to be lucky enough to get a decent education in time.

Reply
[-]Eliezer Yudkowsky15y180

All right, I'll note that my perceptual system misclassified you completely, and consider that a concrete reason to doubt it from now on.

Sorry.

If you are writing a post like that one, it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.

The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.

Reply
9xamdam15y
It was actually in the post. So you might suggest to your perceptual system that it read the post first (at least before issuing a strong reply).
7Clippy15y
I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.) Also, I don't like when this internet website is overloaded with noise posts that don't accomplish anything.
[-]thomblake15y11