Major update here.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear.
Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimize action. So much here is uncertain that I'm not able to judge any of the nested probability estimates. And even if you tell me your estimates, where is the data on which you base them?
There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.
I know that what I'm saying may simply be due to a lack of knowledge and education, which is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support the SIAI in the first place, or at least to substantiate your estimates with kinds of evidence other than a coherent internal logic?
I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credibility. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.
I'm concerned that the SIAI and its supporters are updating on fictional evidence, however consistently they may do so. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models, or are your propositions based on fact?
An example here is the use of the Many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magical interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered not to be enough to base further speculation and estimation on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of superhuman AI, then, valid though that speculation may be given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. This is not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supporting argumentation on such premises, on ideas that are themselves not based on firm ground.
The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies: it is fiction. Imagination allows for endless possibilities, while scientific evidence provides hints of what might be possible and what impossible. Science gives you the ability to assess your data. Any hint that empirical criticism provides gives you new information on which you can build. Not because it bears truth value, but because it gives you an idea of what might be possible, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, i.e. imagination or fiction, or something sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed by the SIAI.
Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?
I'm talking to quite a few educated people outside this community. They do not doubt those claims for no particular reason; rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.
I believe that many people out there know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people; that's beside the point though. It's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they somehow teach but not use their own methods of reasoning and decision making?
What do you expect me to do, just believe Eliezer Yudkowsky? Like I believed so many things in the past that made sense but turned out to be wrong? Maybe after a few years of study I'll know more.
...
2011-01-06: As this post received over 500 comments, I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. This post has, however, been slightly improved to account for some shortcomings, but it has not been completely rewritten, nor have its conclusions been changed. Please account for this when reading comments that were written before this update.
2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index
(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)
Although this may not answer your questions, here are my reasons for supporting SIAI:
I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.
It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"
No one else cares about the big
I respectfully disagree. I am someone who was convinced by your MWI explanations but even so I am not comfortable with outright associating reserved judgement with lack of g.
This is a subject that relies on an awful lot of crystallized knowledge about physics. For someone to come to a blog knowing only what they can recall of high school physics and be persuaded to accept a contrarian position on what is colloquially considered the most difficult part of science is a huge step.
The trickiest part is correctly accounting for meta-uncertainty. There are a lot of things that seem extremely obvious but turn out to be wrong. I would even suggest that the trustworthiness of someone's own thoughts is not always proportionate to g-factor. That leaves people with some situations where they need to trust social processes more than their own g. That may prompt them to go and explore the topic from various other sources until such time that they can trust that their confidence is not just naivety.
On a subject like physics and MWI, I wouldn't take the explanation of any non-professional as enough to establish that a contrarian position is "obviously correct". Even if they genuinely believed in what they said, they'll still only be presenting the evidence from their own point of view. Or they might be missing something essential and I wouldn't have the expertise to realize that. Heck, I wouldn't even go on the word of a full-time researcher in the field before I'd heard what their opponents had to say.
On a subject matter like cryonics I was relatively convinced from simply hearing what the cryonics advocates had to say, because it meshed with my understanding of human anatomy and biology, and it seemed like nobody was very actively arguing the opposite. But to the best of my knowledge, people are arguing against MWI, and I simply wouldn't have enough domain knowledge to evaluate either sort of claim. You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell.
This is probably the best comment so far:
Sums it up pretty well. Thank you.
Physicists have something else, however, and that is domain expertise. As far as I am concerned, MWI is completely at odds with the spirit of relativity. There is no model of the world-splitting process that is relativistically invariant. Either you reexpress MWI in a form where there is no splitting, just self-contained histories each of which is internally relativistic, or you have locally propagating splitting at every point of spacetime in every branch, in which case you don't have "worlds" any more, you just have infinitely many copies of infinitely many infinitesimal patches of space-time which are glued together in some complicated way. You can't even talk about extended objects in this picture, because the ends are spacelike separated and there's no inherent connection between the state at one end and the state at the other end. It's a complete muddle, even before we try to recover the Born probabilities.
Rather than seeing MWI as the simple and elegant way to understand QM, I... (read more)
I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool, the main thing is the variance in the tiny fraction of their income people donate to charity in the first place and so the amount of warm glow people generate for th... (read more)
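(A minimal sketch of the allocation rule described above; the organizations, probabilities, and utility figures are invented purely for illustration and come neither from SIAI nor from this thread.)

```python
# Toy model: discount each option's utility by the probability that its underlying
# claim is true, then give the whole (small) budget to the option with the highest
# expected utility per dollar. All numbers below are made up.
options = {
    "org_A": {"p_claim_true": 0.01, "utility_per_dollar_if_true": 10_000.0},
    "org_B": {"p_claim_true": 0.50, "utility_per_dollar_if_true": 50.0},
}

def expected_utility_per_dollar(opt):
    return opt["p_claim_true"] * opt["utility_per_dollar_if_true"]

best = max(options, key=lambda name: expected_utility_per_dollar(options[name]))
print(best, expected_utility_per_dollar(options[best]))
# With a budget too small to hit diminishing marginal returns, everything goes to `best`;
# splitting donations only makes sense once marginal utility per dollar starts to drop.
```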
Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
... (read more)
Quantum Mechanics Sequence
Pluralistic Ignorance
Bystander Apathy
Scope Insensitivity
What would you accept as evidence?
Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now further beyond Kasparov at his peak (a 414-point gap) than Kasparov was beyond a new grandmaster (a 351-point gap). And it's not like Rybka or the other chess AIs will weaken with age.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
This is rude (although I realize there is now name-calling and gratuitous insult being mustered on both sides), and high g-factor does not make those MWI arguments automatically convincing. High g-factor combined with bullet-biting, a lack of what David Lewis called the argument of the incredulous stare, does seem to drive MWI pretty strongly. I happen to think that weighting the incredulous stare as an epistemic factor independent of its connections with evolution, knowledge in society, etc., is pretty mistaken, but bullet-dodgers often don't. Accusing someone of being low-g rather than a non-bullet-biter is the insulting possibility.
Just recently I encountered someone with very high IQ/SAT/GRE scores who bought partial quantitative parsimony/Speed Prior type views, and biases against the unseen. This person claimed that the power of parsimony was not enough to defeat the evidence for galaxies and quarks, but was sufficient to defeat a Big World much beyond our Hubble Bubble, and to favor Bohm's interpretation over MWI. I think that view isn't quite consistent without a lot of additional jury-rigging, but it isn't reliably prevented by high g and exposure to the arguments from theoretical simplicity, non-FTL, etc.
Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.
Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, rather than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, thou... (read more)
It seems to me that a sufficiently cunning arguer can come up with what appears to be a slam-dunk argument for just about anything. As far as I can tell, I follow the arguments in the MWI sequence perfectly, and the conclusion does pretty much follow from the premises. I just don't know if those premises are actually true. Is MWI what you get if you take the Schrodinger equation literally? (Never mind that the basic Schrodinger equation is non-relativistic; I know that there are relativistic formulations of QM.) I can't tell you, because I don't know the underlying math. And, indeed, the "Copenhagen interpretation" seems like patent nonsense, but what about all the others? I don't know enough to answer the question, and I'm not going to bother doing much more research because I just don't really care what the answer is.
It looks to me as though you've focused in on one of the weaker points in XiXiDu's post rather than engaging with the (logically independent) stronger points.
Like what? Why he should believe in exponential growth? When by "exponential" he actually means "fast" and no one at SIAI actually advocates for exponentials, those being a strictly Kurzweilian obsession and not even very dangerous by our standards? When he picks MWI, of all things, to accuse us of overconfidence (not "I didn't understand that" but "I know something you don't about how to integrate the evidence on MWI, clearly you folks are overconfident")? When there's lots of little things scattered through the post like that ("I'm engaging in pluralistic ignorance based on Charles Stross's nonreaction") it doesn't make me want to plunge into engaging the many different little "substantive" parts, get back more replies along the same line, and recapitulate half of Less Wrong in the process. The first thing I need to know is whether XiXiDu did the reading and the reading failed, or did he not do the reading? If he didn't do the reading, then my answer is simply, "If you haven't done enough reading to notice that Stross isn't in our league, then of course you don't trust SIAI". That looks to me like the real issue. For substantive arguments, pick a single point and point out where the existing argument fails on it - don't throw a huge handful of small "huh?"s at me.
Castles in the air. Your claims are based on long chains of reasoning that you do not write down in a formal style. Is the probability of correctness of each link in that chain of reasoning so close to 1 that their product is also close to 1?
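(A toy illustration of the worry, assuming for simplicity that the links are independent and equally reliable; the numbers are made up:)

```latex
% Probability that an n-link chain of reasoning holds, if each link is correct with probability p:
\Pr(\text{chain holds}) = \prod_{i=1}^{n} p_i = p^{n},
\qquad 0.95^{10} \approx 0.60,
\qquad 0.90^{10} \approx 0.35.
```

So even ten fairly reliable links leave the conclusion far from certain.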
I can think of a couple of ways you could respond:
1. Yes, you are that confident in your reasoning. In that case you could explain why XiXiDu should be similarly confident, or why it's not of interest to you whether he is similarly confident.
2. It's not a chain of reasoning, it's a web of reasoning, and robust against certain arguments being off. If that's the case, then we lay readers might benefit if you would make more specific and relevant references to your writings depending on context, instead of encouraging people to read the whole thing before bringing criticisms.
Most of the long arguments are concerned with refuting fallacies and defeating counterarguments, which flawed reasoning will always be able to supply in infinite quantity. The key predictions, when you look at them, generally turn out to be antipredictions, and the long arguments just defeat the flawed priors that concentrate probability into anthropomorphic areas. The positive arguments are simple; only defeating complicated counterarguments is complicated.
"Fast AI" is simply "Most possible artificial minds are unlikely to run at human speed, the slow ones that never speed up will drop out of consideration, and the fast ones are what we're worried about."
"UnFriendly AI" is simply "Most possible artificial minds are unFriendly, most intuitive methods you can think of for constructing one run into flaws in your intuitions and fail."
MWI is simply "Schrodinger's equation is the simplest fit to the evidence"; there are people who think that you should do something with this equation other than taking it at face value, like arguing that gravity can't be real and so needs to be interpreted differently, and the long arguments are just th... (read more)
One problem I have with your argument here is that you appear to be saying that if XiXiDu doesn't agree with you, he must be stupid (the stuff about low g etc.). Do you think Robin Hanson is stupid too, since he wasn't convinced?
"There is no intangible stuff of goodness that you can divorce from life and love and happiness in order to ask why things like that are good. They are simply what you are talking about in the first place when you talk about goodness."
And then the long arguments are about why your brain makes you think anything different.
Okay, I can see how XiXiDu's post might come across that way. I think I can clarify what I think that XiXiDu is trying to get at by asking some better questions of my own.
"Eliezer Yudkowsky facts" is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn't. If those jokes were not posted, then something of real value would have been lost.
The situation with XiXiDu's post is different because almost everyone seems to agree that it's bad, and those who voted it up did so only to "stimulate discussion". But if they hadn't voted up XiXiDu's post, it's quite likely that someone would eventually have written a better post asking similar questions and generating a higher-quality discussion, so the outcome would likely have been a net improvement. Or alternatively, those who wanted to "stimulate discussion" could have just looked in the LW archives and found all the discussion they could ever hope for.
This claim can be broken into two separate parts:
For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much of an actual interest in whole brain emulations, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering of the brain as being within reach. If we are successful at reverse engineering the brain, then AI is a natural result.
As for two, as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people ... (read more)
Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?
Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.
I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?
I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.
These are reasonable questions to ask. Here are my thoughts:
Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.
These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding ... (read more)
I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:
... (read more)
David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time, had he not been invited to the Singularity Summit.
He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.
[This comment is a response to the original post, but seemed to fit best here.] I upvoted the OP for raising interesting questions that will arise often and deserve an accessible answer. It would help if someone could put together or point to a reading guide with references.
On the crackpot index, the claim that everyone else got it wrong deserves to raise a red flag, but that does not mean it is wrong. There are way too many examples of that in the world. (To quote Eliezer: 'yes, people really are that stupid.') Read "The Checklist Manifesto" by Atul Gawande for a real-life example that is ridiculously simple to understand. (Really, read it. It is also entertaining!) Look at the history of science. Consider the treatment that Semmelweis got for suggesting that doctors wash their hands before operations. You find lots of examples where plain simple ideas were ridiculed. So yes, it can happen that a whole profession goes blind in one spot, and for every change there has to be someone trying it out in the first place. The degree to which research is not done well is a matter of judgment. Now it might be helpful to start out with more applicable ideas, like improving the tool set for real-life problems. You don't have to care about the singularity to care about other LW content like self-debiasing, or winning.
Regarding the donation aspect, it seems like rationalists are particularly bad at supporting their own causes. You might estimate how much effort you spend checking out any charity you do support, and then try not to demand higher standards of this one.
This argument can't be valid, because it also implies that biological life can't work either. At best, this implies a limit on the growth rate; but without doing the math, there is no particular reason to think that limit is slow.
The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.
The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.
It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.
(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)
I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.
Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.
Fact: Evaluating humor about Eliezer Yudkowsky always results in an interplay between levels of meta-humor such that the analysis itself is funny precisely when the original joke isn't.
They are very good examples of the genre (Chuck Norris-style jokes). I for one could not contain my levity.
I was embarrassed by most of the facts. The one about my holding up a blank sheet of paper and saying "a blank map does not correspond to a blank territory" and thus creating the universe is one I still tell at parties.
To put my own spin on XiXiDu's questions: What quality or position does Charles Stross possess that should cause us to leave him out of this conversation (other than the quality 'Eliezer doesn't think he should be mentioned')?
I'm curious what evidence you actually have that "You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality ... (read more)
The purpose I had in mind (stated directly in that post's grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyo... (read more)
A) doesn't seem to be quoted verbatim from the supplied reference!
There is some somewhat similar material there - but E.Y. is reading out a question that has been submitted by a reader! Misquoting him while he is quoting someone else doesn't seem to be very fair!
[Edit: please note the parent has been dramatically edited since this response was made]
Dawkins agrees with EY
Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and intelligence explosion.
I was disappointed watching the video relative to the expectations I had from your description.
Dawkins talked about recursion as in a function calling itself, as an example of the sort of thing that may be the final innovation that makes AI work, not an intelligence explosion as a result of recursive self-improvement.
I was not sure whether to downvote this post for its epistemic value or upvote for instrumental (stimulating good discussion).
I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-posting myself for this reason). I also donated to SIAI, because its value was once again validated to me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is to me evidence of high rationality) but counterproductive (not having enough accepted a... (read more)
I don't think this post was well-written, at the least. I didn't even understand the tl;dr?
I don't see much precise expansion on this, except for MWI? There's a sequence on it.
... (read more)
I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAI's premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.
Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.
Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it's therefore an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.
What are you considering as pitching in? That I'm donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?
You simply seem to take my post as hostile attack rather than the inquiring of someone who happened not to be lucky enough to get a decent education in time.
All right, I'll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.
The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.
Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you've managed to accrue 890 karma. I think you've gotten a remarkably good reception so far.
SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.
I am a regular Less Wrong commenter, but I'm making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harrassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abusing the legal system,... (read more)
It has seemed to me for a while that a number of people will upvote any post that goes against the LW 'consensus' position on cryonics/Singularity/Friendliness, so long as it's not laughably badly written.
I don't think anything Eliezer can say will change that trend, for obvious reasons.
However, most of us could do better in downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won't drop below 0 for any stated amount of time, and even then not very far. Docking someone's karma isn't going to kill them, folks. Do everyone a favor and use those downvotes.
You claimed in your Bloggingheads diavlog with Scott Aaronson that you think it's pretty obvious that there will be an AGI within the next century. As far as I know you have not offered a detailed description of the reasoning that led you to this conclusion that can be checked by others.
I see this as significant for the reasons given in my comment here.
... (read more)
This post makes very weird claims regarding what SIAI's positions would be.
"Spend most on a particular future"? "Eliezer Yudkowsky is the right and only person who should be leading"?
It doesn't at all seem to me that stuff such as these would be SIAI's position. Why doesn't the poster provide references for these weird claims?
Here's a good reference for what SIAI's position actually is:
http://singinst.org/riskintro/index.html
Yes, this is the important part. Chimps lag behind humans in 2 distinct ways - they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key, not recursive at all. (Kind.)
What can we d... (read more)
Ok, I think I have an explanation for what's going on here. Those of us "old hands" who went through the period where LW was OB, and Eliezer and Robin were the only main posters, saw Eliezer as initially having very high status, and considered the "facts" post as a fun way of taking him down a notch or two. Newcomers who arrived after LW became a community blog, on the other hand, don't have the initial high status in mind, and instead see that post as itself assigning Eliezer a very high status, which they see as unjustified/weird/embarrassing. Makes sense, right?
(Voted parent up from -1, btw. That kind of report seems useful, even if the commenter couldn't explain why he felt that way.)
I think there are very good questions in here. Let me try to simplify the logic:
First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few that have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).
Now to the logical argument itself:
a) We are probably at risk from the... (read more)
The Charlie Stross example seems to be less than ideal. Much of what Stross has written about touches upon, or deals intensely with, issues connected to runaway AI. For example, the central premise of "Singularity Sky" involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story "Antibodies" focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story the central premise of which is that Von Neumann and Turing proved that P=NP and ... (read more)
Your skepticism is aimed in the wrong direction and MWI does not say what you think it does. Read the sequence. When you're done you'll have a much better gut sense of the gap between SIAI and Charles Stross.
This is an attempt (against my preference) to defend SIAI's reasoning.
Let's characterize the predictions of the future into two broad groups: 1. business as usual, or steady-state. 2. aware of various alarmingly exponential trends broadly summarized as "Moore's law". Let's subdivide the second category into two broad groups: 1. attempting to take advantage of the trends in roughly a (kin-) selfish manner 2. attempting to behave extremely unselfishly.
If you study how the world works, the lack of steady-state-ness is everywhere. We cannot use fossi... (read more)
I haven't followed the ins and the outs of this pointless drama, but I had assumed those were things Eliezer actually said. I'm pretty miffed to learn that those weren't actually quotes, but rather something you had "inferred from revealed self-evident wisdom".
That kind of stuff makes it tempting to pretty much ignore anything you write.
I have a theory: all the jokes parse out to "Eliezer is brilliant, and we have a bunch of esoteric in-jokes to show how smart we are". This isn't making fun of an authority figure.
This doesn't mean the article was a bad idea, or that I didn't think it was funny. I also don't think it's strong evidence that LW and SIAI aren't cults.
ETA: XiXiDu's comment that this is the community making fun of itself seems correct.
Can you please explain why you think those jokes shouldn't have been made? I thought that making fun of authority figures is socially accepted in general, and in this case shows that we don't take Eliezer too seriously. Do you disagree?
My observation is that small ambitions can become 'runaway disasters' unless a lot of the problems of FAI are solved.
That sounds as 'safe' as giving Harry Potter rules to follow.
I understand that this is an area in which we fundamentally disagree. I have previously disagreed about the wisdom of using human legal systems to control AI behaviour and I assume that our disagreement will be similar on this subject.
Wow, I thought it was one of the best. Through that post I actually got a philosopher (who teaches in Sweden), who had been skeptical about EY, to read up on the MWI sequence and afterwards agree that EY is right.
I like that post - of course, few of the jokes are funny, but you read such a thing for the few gems they do contain. I think of it as hanging a lampshade (warning, TV tropes) on one of the problems with this website.
Absence of evidence is not evidence of absence?
There's simply no good reason to argue against cryonics. It offers a chance in the worst-case scenario, and that chance is considerably better than rotting six feet under.
Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of the specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, as the data that is already available does not justify the given claims in the first place.
Anyway, I think I might write some experts and all of ... (read more)
I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.
I'm a cognitivist. Sentences about goodness have truth values after you translate them into being about life and happiness etc. As a general strategy, I make the queerness go away, rather than taking the queerness as a property of a thing and using it to deduce that thing does not exist; it's a confusion to resolve, not an existence to argue over.
I don't believe that is necessarily true, just that no one else is doing it. I think other teams working specifically on FAI would be a good thing, provided they were competent enough not to be dangerous.
Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously-wrong way. When I arrived I had a different view on mora... (read more)
That's just a weird claim. When Richard Posner or David Chalmers does writing in the area SIAI folk cheer, not boo. And I don't know anyone at SIAI who thinks that the Future of Humanity Institute's work in the area isn't a tremendously good thing.
Have you looked into the philosophical literature?
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I'm sorry if I offended you. But I stand by my original statement, because it was true.
Crocker's Rules for me. Will you do the same?
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) (Thanks Kevin)
... (read more)
I listened to the video. He said that while reading aloud a question someone was asking him.
I'm not objecting to the reformulation in your now modified post. I'm just pissed that you made me believe that it was an actual Eliezer Yudkowsky quote.
Look, I'm quite often an idiot. I was looking for excuses while being psychologically overwhelmed today. If people here perceived that I did something wrong there, I probably did. I was just being lazy and imprudent so I wrote what I thought was the answer given by EY to the question posed in that part of a Q&A video series. There was no fraudulent intent on my part.
So please accept my apologies for possible misguidance and impoliteness. I'm just going to delete the comment now. (ETA: I'll just leave the links to the videos.)
I've been trying for some time now not to get involved on here anymore and to go back to being a passive reader. But replies make me feel constrained to answer once more.
If there is something else, let me know and I'll just delete it.
You should not use quotation marks unless the quotes are verbatim. The "gist" does not suffice.
I'm not trying to be a nuisance here, but it is the only point I'm making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
This is a possibility (made more plausible if we're talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it's greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and acce... (read more)
This reads like an attack on utilitarian ethics, but there's an extra inferential step in the middle which makes it compatible with utilitarian ethics being correct. Are you claiming that utilitarian ethics are wrong? Are you claiming that most charities are actually fraudulent and don't help people?
... (read more)
Slight sidetrack:
There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here-- that the earth will be engulfed when the sun becomes a red giant.
That fate for the planet haunted me when I was a kid. People would say "But that's billions of years in the future" and I'd feel as though they were missing the point. It's possible that a more detailed discussion would have helped....
Recently, I've read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that pe... (read more)
It is? That's a worry. Consider this a +1 for "That thesis is totally false and only serves signalling purposes!"
Overall I'd feel a lot more comfortable if you just said "there's a huge amount of uncertainty as to when existential risks will strike and which ones will strike, I don't know whether or not I'm on the right track in focusing on Friendly AI or whether I'm right about when the Singularity will occur, I'm just doing the best that I can."
This is largely because of the issue that I raise here
I should emphasize that I don't think that you'd ever knowingly do something that raised existential risk, I think that you're a kind and noble spirit. But I ... (read more)
How do your quotes claim that Eliezer Yudkowsky is the only person who should be leading?
(I would say that factually, there are also other people in leadership positions within SIAI, and Eliezer is extremely glad that this is so, instead of thinking that it should be only him.)
How do they demonstrate that donating to SIAI is "spending on a particular future"?
(I see it as trying to prevent a particular risk.)
Do you have any reason to suppose that Charlie Stross has even considered SIAI's claims?
If you're referring to Gary Drescher, I forwarded him a link of your post, and asked him what his views of SIAI actually are. He said that he's tied up for the next couple of days, but will reply by the weekend.
Good thing at least some people here are willing to think critically.
I know these are unpopular views around here, but for the record:
You seemed to seriously imply that Eliezer didn't understand that the "facts" thread was a joke, while actually he was sarcastically joking by hinting at not getting the joke in the comment you replied to. I downvoted the comment to punish stupidity on LW (nothing personal, believe it or not; in other words, it's a one-step decision based on the comment alone and not on the impression made by your other comments). Wei didn't talk about that.
To pick my own metaphor, it's more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we're going to create wonderful, cunning, incredibly powerful technology, and I think we're going to misuse it to destroy ourselves.
Because intelligent beings... (read more)
Me looking for some form of peer review is deemed to be bizarre? It is not my desire to crush the SIAI but to figure out what is the right thing to do.
You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards banning his post. All that based on ideas that themselves are resulting from and based on unsupported claims. That is what EY is doing and I am trying to assess the credibility of such reactions.
EY is one of the smartest people on the planet and this has been his life's work for about 14 years. (He started SIAI in 2000.) By your own admission, you do not have the educational achievements necessary to evaluate his work, so it is not surprising that a small fraction of his public statements will seem bizarre to you because 14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.
Humans are designed (by natural selection) to mistrust statements at large inferential distances from what they already believe. Humans were not designed for a world (like the world of today) where there exists so much accurate knowledge of reality that no one can know it all, and people have to specialize. Part of the process of becoming educated is learning to ignore your natural human incredulity at stateme... (read more)
I haven't found the text during a two minute search or so, but I think I remember Robin assigning a substantial probability, say, 30% or so, to the possibility that MWI is false, even if he thinks most likely (i.e. the remaining 70%) that it's true.
Much as you argued in the post about Einstein's arrogance, there seems to be a small enough difference between a 30% chance of being false and a 90% chance of being false that if the latter would imply that Robin was stupid, the former would imply it too.
It seems that whether or not it's supposed to, in practice it does. From the just released "Intrinsic Chess Ratings", which takes Rybka and does exhaustive evaluations (deep enough to be 'relatively omniscient') of many thousands of modern chess games; on page 9:
... (read more)
Could you please provide a link to this effect? (Going off topic, I would suggest that a "show all threads with one or more comments by users X, Y and Z" or "show conversations between users X and Y" feature on LW might be useful.)
(First reply below)
Greg Egan and the SIAI?
I completely forgot about this interview, so I already knew why Greg Egan isn't that worried:
... (read more)
Yes, I am a little embarrassed that I took the thread on such a sharp and lengthy tangent. I don't have time to move my comment though.
Different people (and cultures) seem to put very different weights on these things.
Here's an example:
You're a government minister who has to decide who to hire to do a specific task. There are two applicants. One is your brother, who is marginally competent at the task. The other is a stranger with better qualifications who will probably be much better at the task.
The answer is "obvious."
In some places, "obviously" you hire your brother. What kind of heartless bastard won't help out his own brother by giving him a job?
In others, "... (read more)
The grey goo example was named to exemplify the speed and sophistication of the nanotechnology that would have to be around either to allow an AI to be built in the first place or to be of considerable danger.
I consider your comment an expression of personal disgust. There is no way you could possibly misinterpret my original point and subsequent explanation to this extent.
Here's the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:
http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey
Unfortunately, the survey didn't ask for probabilities of AI development by 2100, so one can't get probability of catastrophe conditional on AI development from there.
Eric Drexler decided it was implausible some time ago:
"Nanotech guru turns back on 'goo'"
However, some still flirt with the corresponding machine intelligence scenarios - though those don't seem much more likely to me.
Psy-Kosh:
I would say that it's a crucial assumption, which should be emphasized clearly even in the briefest summary of this viewpoint. It is certainly not obvious, to say the least. (And, for full disclosure, I don't believe that it's a sufficiently close approximation of reality to avoid the problem I emphasized above.)
It's linked from the sidepanel here at least:
http://singinst.org/overview
But indeed it's not very prominently featured on the site. It's a problem of most of the site having been written substantially earlier than this particular summary, and there not (yet) having been a comprehensive change from that earlier state of how the site is organized.
How do you consider "formalizing friendliness" to be different from "building safeguards"?
Something feels very wrong about this sentence... I get a nagging feeling that you believe he has a valid argument, but he should have been nice to people who are irrationally clinging to alternative interpretations, via such irrational ways as nitpicking on the unim... (read more)
There's more to Bohmian mechanics than you may think. There are actually observables whose expectation values correspond to the Bohmian trajectories - "weak-valued" position measurements. This is a mathematical fact that ought to mean something, but I don't know what. Also, you can eliminate the pilot wave from Bohmian mechanics. If you start with a particular choice of universal wavefunction, that will be equivalent to adding a particular nonlocal potential to a classical equation of motion. That nonlocal potential might be the product of a holo... (read more)
Right in the beginning of the sequence you managed to get phases wrong. Quick search turns up:
http://www.ex-parrot.com/~pete/quantum-wrong.html
http://www.poe-news.com/forums/spshort.php?pi=1002430803&ti=1002430709
http://physics.stackexchange.com/a/23833/4967
Ouch.
Rest of the argument... given the relativistic issues in QM as described, QM is just an approximation that does not work at the relevant scale, and so concluding the existence of multiple worlds from it is very silly.
Indeed.
... (read more)
The solution varies by model, but on mine, alt-shift-letter physical key combinations do special characters that aren't labelled. You can also use the on-screen keyboard, and there are more onscreen keyboards available for download if the one you're currently using is badly broken.
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (ie, for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it's because they try to sell use of... (read more)
Robert A. Heinlein was an engineer and SF writer who created many stories that hold up quite well. He put his understanding of human interaction and of engineering into stories that are somewhat realistic. But no one should confuse him with someone researching the actual likelihood of any particular future. He did not build anything that improved the world, but he wrote interestingly about the possibilities and encouraged many others to pursue technical careers. SF often has bad usage of logic, the well-known hero bias, or scientists that put to... (read more)
You really, really, really should have noted that; as it is, your comment is an outright lie. (Thanks for catching, Tim.)
The linked bet doesn't reference "a week," and the "week" reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity.
That bet seems underspecified. Does attention to "Friendliness" mean any attention to safety whatsoever, or designing an AI with a utility function such that it's trustworthy regardless of power levels? Is "superhuman" defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.
From THE EVERETT FAQ:
"Is many-worlds (just) an interpretation?"
"What unique predictions does many-worlds make?"
"Could we detect other Everett-worlds?"
The math works the same in all interpretations, but some experiments are difficult to understand intuitively without the MWI. I usually give people the example of the Elitzur-Vaidman bomb tester where the easy MWI explanation says "we know the bomb works because it exploded in another world", but other interpretations must resort to clever intellectual gymnastics.
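(For readers who haven't seen the experiment: in the textbook version - this is the standard setup, not anything specific to the comment above - an empty Mach-Zehnder interferometer is tuned so every photon exits at the "bright" detector and none at the "dark" one. With a live bomb blocking one arm, the outcome probabilities become

    P(\text{explosion}) = 1/2, \qquad P(\text{bright detector}) = 1/4, \qquad P(\text{dark detector}) = 1/4,

whereas a dud leaves the interference intact and the dark detector never fires. A dark-detector click therefore certifies a working bomb that the photon never touched.)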
This comment is my last comment for at least the rest of 2010.
You can always e-mail me: ak[at]xixidu.net
I did answer (c) before: any reasonable effort in that direction should start with trying to get SIAI itself to change or justify the way it behaves.
What? Why are you talking about a hostile attack?
Of course I didn't feel that it would be that. Quite the opposite: it felt to me like it communicated an unhealthy air of hero worship.
Asteroid strikes are very unlikely - so beating them is a really low standard, which, IMO, machine intelligence projects clear with ease. Funding the area sensibly would help make it happen - by most accounts. Detailed justification is beyond the scope of this comment, though.
I'm curious, because I like to collect this sort of data: what is your median estimate?
(If you don't want to say because you don't want to defend a specific number or list off a thousand disclaimers I completely understand.)
"good" means increasing the utility functions of others.
More precise: "Perhaps you are maximizing expected utility, but your utility function is equal to some logarithm of some number representing the amount you've increased values assumed by the utility functions of others."
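(Side note: the "number of digits" formulation and the "logarithm" formulation come to essentially the same thing, since for a positive integer G

    \text{digits}_{10}(G) = \lfloor \log_{10} G \rfloor + 1,

so a utility function that counts the digits of the total good done grows like \log_{10} G - it rewards multiplying your impact by ten rather than adding a fixed amount.)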
You wrote this and afterwards sent me a private message about how you are telling me this so that I will shut up.
Why would I expect honest argumentation from someone who uses such tactics? Especially when I have talked about the very same topic with you before, only to find out that you do this deliberately?
I do not appreciate being told that I "obviously" have not read something that I have, in fact, read. And if you were keeping track, I have previously sent you private messages correcting your misconceptions on that topic, so you should have known that. And now that I've hinted at why you think it's connected to MWI, I can see that that's just another misconception.
Your tone is antagonistic and I had to restrain myself from saying some very hurtful things that I would've regretted. You need to take a step back and think about what you're doing here, before you burn any more social bridges.
EDIT: Argh, restraint fail. That's what the two deleted comments below this are.
That was my impression, also. As a result, I found many elements of the responses to XiXiDu to be disappointing. While there were a few errors in his post (e.g. attributing Kurzweil's views to SIAI), in general it should have been taken as an opportunity to clarify and throw down some useful links, rather than to treat XiXiDu (who is also an SIAI donor!) as a low-g interloper.
Those would seem likely to be helpful indeed. Better programming tools might also help, as would additional computing power (not so much because computing power is actually a limiting factor today, as because we tend to scale our intuition about available computing power to what we physically deal with on an everyday basis -- which for most of us, is a cheap desktop PC -- and we tend to flinch away from designs whose projected requirements would exceed such a cheap PC; increasing the baseline makes us less likely to flinch away from good designs).
Look at his answer for The Singularity:
Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.
Now this is a startling claim.
Be more specific!
Google "global ecophagy".
I'd start here to get an overview.
My summary would be: there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds then it is likely to be contrary to our values, because it will have a different sense of what is good or worthwhile. This moderately relies on the speed/singleton issue, because evolutionary pressure between AIs might force them in the same direction as us. We would likely be out-competed before this happens, though, if we rely on competition between AIs.
I think various people associated with SIAI mean...
Okay, we had this back-and-forth before; I didn't understand you then, and now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small that it makes sense to focus on other existential risks for the moment. And my other points remain.
Given your answers to 1-3, you should spend all of your altruistic efforts on mitigating x-risk (unless you're just trying to feel good, entertain yourself, etc.).
For 4, I shouldn't have asked you whether you "think" something beats negotiating a positive singularity in terms of x-risk reduction. Better: is there some other fairly natural class of interventions (or list of potential examples) that, given your credences, has a higher expected value? What might such things be?
For 5-6, perhaps you should think about what such organizations m...
From the one comment on Bohm I can find, it seems that he actually dislikes Bohm because the particles are "epiphenomena" of the pilot wave. Meaning the particles don't actually do anything except follow the pilot wave, and it's actually the pilot wave itself that does all the computation (of minds and hence observers).
Maybe I should have said "reading him in general..."
The rest is quibbling over definitions.
I am looking for the evidence in "supported by evidence". I am further trying to figure out how you anticipate your beliefs paying rent, what you anticipate seeing if explosive recursive self-improvement is possible, and how that belief could be surprised by data.
If you just say, "I predict we will likely be wiped out by badly done AI," how do you expect to update on evidence? What would constitute such evidence?
I don't think we really have a disagreement here. If you are building a normal program to do whatever, then by all means, do your best and try to implement safety features. Any failure would most likely be local.
However! If we are talking about building AI, which will go through many iterations, will modify its own code, and will become super-intelligent, then for all our sakes I hope you will have mathematically proven that the AI is Friendly. Otherwise you are betting the fate of this world on a hunch. If you don't agree with this point, I invite you to read Eliezer's paper on AI risks.
I can imagine two different meanings for "not convinced about MWI":
1. It refers to someone who is not convinced that MWI is as good as any other model of reality, and better than most.
2. It refers to someone who is not convinced that MWI describes the structure of reality.
If we are meant to understand the meaning as #1, then it may well indicate that someone is stupid. Though, more charitably, it might more likely indicate that he is ignorant.
If we are meant to understand the meaning as #2, then I think tha...
Well, that depends. Have you actually tried to do the mental gymnastics and explain the linked experiment using the Copenhagen interpretation? I suspect that going through with that may influence your final opinion.
Sorry, but no. The Dirac equation is invariant under Lorentz transformations, so what's local in one inertial reference frame is local in any other as well.
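(For reference, in natural units the free Dirac equation reads

    (i\gamma^\mu \partial_\mu - m)\,\psi(x) = 0,

and under a Lorentz transformation x' = \Lambda x the spinor transforms as \psi'(x') = S(\Lambda)\psi(x) with S(\Lambda)^{-1}\gamma^\mu S(\Lambda) = \Lambda^\mu{}_\nu \gamma^\nu, so the equation takes the same form in every inertial frame. This is the standard textbook statement of the covariance being invoked here.)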
I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from 'interested open-minded outsider' to 'troll'.)
I am reluctant because you seem to ask for magical programs when you write things like:
I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (from nothing to playing Pac-Man).
But then it occu...
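(For reference, Hutter's AIXI - the model mentioned above - chooses its next action, schematically, as

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},

where U is a universal Turing machine, q ranges over all programs consistent with the interaction history so far, \ell(q) is program length, the o's and r's are observations and rewards, and m is the horizon. The 2^{-\ell(q)} weighting is the Solomonoff prior, and the sum over all programs is what makes full AIXI incomputable; the approximations mentioned above replace it with something tractable.)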
I didn't ask about perceived importance (which already takes feasibility into account); I asked about your belief that it's not a productive enterprise (that is, the feasibility component of importance, considered alone) - that we are not ready to work efficiently on the problem yet.
If you believe that we are not ready now, but believe that we must work on the problem eventually, you need to have a notion of what conditions are necessary to conclude that it's productive to work on the problem under those conditions.
And that's my question: what are those...
I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today's companies decide to duke it out - and today's companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.
p(asteroid strike/year) is pretty low. Most people are not too worried.
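(Purely for illustration - the rate below is an assumed round number, not a sourced estimate: if extinction-class impacts arrive at roughly one per 10^8 years, then

    P(\text{strike in a given year}) \approx 10^{-8}, \qquad P(\text{strike within a century}) \approx 1 - (1 - 10^{-8})^{100} \approx 10^{-6},

so on these assumptions the annual risk really is tiny.)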
It doesn't sound to me as though you're maximizing expected utility. If you were maximizing expected utility, you would put all of your eggs in the most promising basket.
Or perhaps you are maximizing expected utility, but your utility function is equal to the number of digits in some number representing the amount of good you've done for the world. This is a pretty selfish/egotistical utility function to have, and it might be mine as well, but if you have it, it's better to be honest and admit it. We're hardly the only ones:
http://www.slate.com/id/2034
Anyway, I hereby apologize unconditionally for any offence and have deleted my previous comment.
Going to watch a movie now and eat ice cream. Have fun :-)
What subsequent conclusions are based on MWI?
And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.
No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.
Since I've now posted several comments on this thread defending and/or "siding with" XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here.
Although there are a couple of pieces of the SIAI thesis that I'm not yet 100% sold on, I don't reject it in its entirety, as it now sounds like XiXiDu does - I just want to hear some more thorough explanation on a couple of sticking points before I buy in.
Also, charisma is in the eye of the beholder ;)
Do we pick a side of a coin "at random" from the two possibilities when we flip it?
Epistemically, yes - we don't have sufficient information to predict it*. However, if we do the same thing twice it has the same outcome, so it is not physically random.
So while the process that decides what the first AI is like is not physically random, it is epistemically random until we have a good idea of which AIs produce good outcomes and get humans to follow those theories. For this we need something that looks like a theory of friendliness, to some degree.
Consi...
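(A toy sketch of the epistemic/physical distinction, with hypothetical code written purely for illustration: the "flip" below is a deterministic function of its initial conditions, so repeating it with identical inputs always gives the same outcome, yet an observer who does not know the inputs can do no better than 50/50.)

    import hashlib

    def flip(initial_speed: float, initial_angle: float) -> str:
        """Deterministic 'coin flip': the outcome is fully fixed by the
        initial conditions, so it is not physically random."""
        key = f"{initial_speed:.6f}:{initial_angle:.6f}".encode()
        return "heads" if hashlib.sha256(key).digest()[0] % 2 == 0 else "tails"

    # Same initial conditions -> same outcome, every time.
    assert flip(2.30, 0.71) == flip(2.30, 0.71)

    # Without knowledge of the initial conditions, outcomes look ~50/50,
    # i.e. the flip is epistemically random to that observer.
    samples = [flip(2.30 + i * 1e-3, 0.71) for i in range(10_000)]
    print(samples.count("heads") / len(samples))  # roughly 0.5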
The point is that they know they're doing it for the fun of it rather than actually coming up with anything that needs to be prevented.
Another vacuous statement. I expected more.
You might have gotten confused because I quoted Psy-Kosh's phrase "specific algorithm/dynamic for judging values" whereas Eliezer's original idea I think was more like an algorithm for changing one's values in response to moral arguments. Here are Eliezer's own words:
This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimate that Eliezer will be able to do something similar in turning his blog posts into a book.
Why do you think so? Quantum mechanics is complicated, and questions of what is a 'better' theory are very subtle.
On the other hand, figuring out what claim your arguments actually support is rather simple. You have an argument which gets elementary facts wrong, gets terminology wrong, and gets the very claim itself wrong. All the easy stuff is wrong. You still believe that it gets right some hard stuff. Why?
He should have left a line of retreat for himself.
His monologue on color, for instance.
This assumption is made by every other interpretation of quantum mechanics I know. On the other hand, I'm not a physicist; I'm clearly not up to date on things.