(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)
Although this may not answer your questions, here are my reasons for supporting SIAI:
I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.
It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"
No one else cares about the big...
I'm currently preparing for the Summit, so I'm not going to hunt down links myself. Those of you who said you wanted to see me do this should find the links and reply with a list of them.
Given my current educational background, I am not able to judge the following claims (among others), and therefore consider it unreasonable to put all my eggs in one basket:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for th...
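To make that allocation rule concrete, here is a minimal sketch with made-up numbers (the charity names, probabilities, and utility figures are purely hypothetical): discount each option's utility per dollar by the probability that its key claim is true, and, as long as your donation is small relative to each option's funding gap, give everything to the single best option.

```python
# Toy sketch of "discount expected utilities by the probability of the claims
# being true, then put your eggs in the basket with the highest marginal
# expected utility per dollar."  All names and numbers below are hypothetical.

charities = {
    # name: (probability the charity's key claim is true,
    #        utility produced per dollar, conditional on the claim being true)
    "A": (0.90, 10.0),    # likely to work, modest payoff per dollar
    "B": (0.01, 5000.0),  # unlikely to work, enormous payoff per dollar
}

def marginal_eu_per_dollar(p_claim: float, utility_per_dollar: float) -> float:
    """Probability-discounted utility of one extra dollar."""
    return p_claim * utility_per_dollar

budget = 1000.0
best = max(charities, key=lambda name: marginal_eu_per_dollar(*charities[name]))
print(f"Give the full ${budget:.0f} to {best} "
      f"(marginal EU/$ = {marginal_eu_per_dollar(*charities[best]):.1f})")
# Caveat from the comment above: this only holds while your donation is small
# enough that the recipient's marginal utility per dollar doesn't decline.
```

With these toy numbers the low-probability, high-payoff option dominates (0.01 × 5000 = 50 vs. 0.90 × 10 = 9), which is exactly the scope-insensitivity-defying result the comment describes.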
An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimations on.
Reading the QM sequence (someone link) will show you that, to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
...What I'm trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence
Questionable. Is smarter-than-human intelligence possible, in a sense comparable to the difference between chimps and humans? To my knowledge we have no evidence to this end.
What would you accept as evidence?
Would you accept sophisticated machine-learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov peaked at ELO 2851; Rybka is at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak than Kasparov was beyond a new grandmaster (see the sketch below for what rating gaps of that size mean in expected score). And it's not like Rybka or the other chess AIs will weaken with age.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
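As a side note on those numbers, here is a minimal sketch using the standard Elo expected-score formula, applied to the ratings quoted above; treat the exact figures as illustrative.

```python
# Minimal sketch: the standard Elo expected-score formula, applied to the
# ratings quoted above (Rybka 3265, peak Kasparov 2851, new grandmaster 2500).

def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win probability plus half the draw probability) of A vs. B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

print(f"Rybka (3265) vs. peak Kasparov (2851): {elo_expected_score(3265, 2851):.2f}")  # ~0.92
print(f"Peak Kasparov (2851) vs. new GM (2500): {elo_expected_score(2851, 2500):.2f}")  # ~0.88
```

An expected score of roughly 0.92 for Rybka against peak Kasparov, versus roughly 0.88 for peak Kasparov against a 2500-rated player, is what "even further beyond" cashes out to in this rating model.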
Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.
Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, thou...
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
This claim can be broken into two separate parts: (1) that roughly human-level AI is achievable at all, and (2) that such an AI could become drastically more capable than any human.
For (1): looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much actual interest in whole brain emulation, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering the brain as being within reach. If we are successful at reverse-engineering the brain, then AI is a natural result.
As for (2): as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people ...
Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?
Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.
Getting annoyed (at one of your own donors!) for such a request is not a way to win.
I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?
I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.
These are reasonable questions to ask. Here are my thoughts:
- Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
- Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).
Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.
- The likelihood of exponential growth versus a slow development over many centuries.
- That it is worth it to spend most on a future whose likelihood I cannot judge.
These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding ...
I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:
...If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others
David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.
He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave it. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to the genetic engineering of agricultural crops, where public attention seems to be harmful on net, unduly slowing the development of more productive plants.
I feel some of the force of this... I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work, seems to lose a lot of force.
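To make that contrast concrete, here is a toy sketch (every number in it is hypothetical, not an estimate of anything): a decision rule that ignores hypotheses below a confidence threshold never acts on a speculative scenario, while straight expected-utility maximization can, because the stakes dominate.

```python
# Toy contrast (all numbers hypothetical): "only act on well-evidenced claims"
# versus plain expected-utility maximization.

p_scenario = 0.02          # assumed probability the speculative far-future scenario is real
value_if_real = 1_000_000  # assumed value of acting, if it is real
cost_of_acting = 100.0     # assumed cost of acting either way

# Rule 1: demand strong evidence (say, p > 0.95) before acting on a claim.
acts_under_threshold_rule = p_scenario > 0.95  # False: speculation is simply ignored

# Rule 2: maximize expected utility.
expected_gain = p_scenario * value_if_real
acts_under_eu_rule = expected_gain > cost_of_acting  # True: the stakes dominate

print(f"Threshold rule acts: {acts_under_threshold_rule}")
print(f"EU rule acts: {acts_under_eu_rule} "
      f"(expected gain {expected_gain:,.0f} vs. cost {cost_of_acting:,.0f})")
```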
The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.
The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.
It doesn't matter if it takes another thousand years; we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, an expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news: more time to prepare, to understand what we want.
(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)
I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.
Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.
Dawkins agrees with EY
Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and intelligence explosion.
I was disappointed watching the video relative to the expectations I had from your description.
Dawkins talked about recursion as in a function calling itself, as an example of the sort of thing that may be the final innovation that makes AI work, not an intelligence explosion as a result of recursive self-improvement.
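For anyone unfamiliar with the distinction being drawn, here is a minimal illustration of recursion in that ordinary programming sense:

```python
def factorial(n: int) -> int:
    """Plain recursion: a function calling itself -- the programming construct
    Dawkins was pointing at, not a system improving its own design."""
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120
```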
I was not sure whether to downvote this post for its epistemic value or upvote it for its instrumental value (stimulating good discussion).
I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-posting myself for this reason). I also donated to SIAI, because its value was once again validated to me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is, to me, evidence of high rationality) but counterproductive (not having enough accepted a...
At the least, I don't think this post was well written. I didn't even understand the tl;dr.
tl;dr: Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.
I don't see much precise expansion on this, except for MWI? There's a sequence on it.
...And that is my problem. Given my current educational background and know...
I don't understand why this post has upvotes.
I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAI's premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.
Unfortunately, the tone of a lot of the responses to this thread leads me to believe that those motivated by the latter option may have been right to worry.
Yeah, I agree (no offense, XiXiDu) that it probably could have been better written, cited more specific objections, etc. But the core sentiment is one that I think a lot of people share, and so it's an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.
What are you considering as pitching in? That I'm donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?
You simply seem to take my post as a hostile attack rather than the inquiry of someone who happened not to be lucky enough to get a decent education in time.
All right, I'll note that my perceptual system misclassified you completely, and consider that a concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one, it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.
The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.