Should I believe what the SIAI claims? I'm still not sure, although I have learnt some things since that post. What I do know is how seriously the people here take this stuff. Also read the comments on this post to see how people associated with LW overreact to completely harmless AI research.

The issues raised by the potential risks of unfriendly AI are numerous. The only organisation that takes those issues seriously is the SIAI, as its name already implies. But I believe most people simply don't see a difference between the SIAI and one or a few highly intelligent people telling them that a particle collider could destroy the world while all the experts working directly on it claim there is no risk. Now I think I understand the argument that if the whole world is at stake, it does outweigh the low probability of the event (see the toy sketch below). But does it? I think it is completely justified to have at least one organisation working on FAI, but is the risk as serious as it is portrayed and perceived within the SIAI?

Right now, if I had to hazard a guess, I'd say that it will probably be a gradual development made up of many exponential growth phases. That is, we'll have a conceptual revolution and optimize it very rapidly, and then the next revolution will be necessary. Sure, I might be wrong there, as the plateau argument about recursive self-improvement might hold. But even if that is true, I think we'll need at least two paradigm-shattering conceptual revolutions before we get there.

But what would that mean? How quickly can such revolutions happen? I'm guessing that this could take a long time, if it isn't completely impossible. That is, if we are not the abstract-reasoning equivalent of a universal Turing machine. Just imagine we are merely better chimps: maybe it doesn't matter if a billion humans do science for a million years, we still won't come up with the AI equivalent of Shakespeare's plays. That would mean we are doomed to evolve slowly, to tweak ourselves incrementally into a posthuman state. Yet there are other possibilities as well: AGI might, for example, be a gradual development over many centuries. Human intelligence might turn out to be close to the maximum.
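To make the expected-value argument above concrete, here is a minimal sketch; the probability and stakes are made-up placeholders for illustration, not anyone's actual estimates:

```python
# Toy illustration of the "low probability times huge stakes" argument.
# Every number here is a made-up placeholder, not an estimate from the thread.

p_catastrophe = 1e-6   # assumed probability of an existential AI catastrophe
stakes = 7e9           # assumed stakes: roughly everyone alive

expected_loss = p_catastrophe * stakes
print(f"Expected loss: {expected_loss:,.0f} lives")  # prints 7,000

# Even a one-in-a-million probability yields a large expected loss when the
# stakes are the whole world. That is the shape of the argument; the open
# question in the comment is whether the probability really is that low.
```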

There is so much we do not know yet (http://bit.ly/ckeQo6). Take, for example, a constrained, well-understood domain like Go: AI still performs awfully at it. Or take P vs. NP:

P vs. NP is an absolutely enormous problem, and one way of seeing that is that there are already vastly, vastly easier questions that would be implied by P not equal to NP but that we already don’t know how to answer. So basically, if someone is claiming to prove P not equal to NP, then they’re sort of jumping 20 or 30 nontrivial steps beyond what we know today. [...] We have very strong reasons to believe that these problems cannot be solved without major — enormous — advances in human knowledge. [...] So in order to prove such a thing, a prerequisite to it is to understand the space of all possible efficient algorithms. That is an unbelievably tall order. So the expectation is that on the way to proving such a thing, we’re going to learn an enormous amount about efficient algorithms, beyond what we already know, and very, very likely discover new algorithms that will likely have applications that we can’t even foresee right now. (http://web.mit.edu/newsoffice/2010/3q-pnp.html).

But that is just my highly uneducated guess, one I have never seriously contemplated. I believe that for most academics the problem here is mainly the missing proof of concept, the missing evidence. They are not the kind of people who would wait before testing the first nuke because it might ignite the atmosphere. If there's no good evidence, a position supported by years' worth of disjunctive lines of reasoning won't convince them either.

The paperclip maximizer (http://wiki.lesswrong.com/wiki/Paperclip_maximizer) scenario needs serious consideration. But given what needs to be done, and what insights may be necessary to create something creative that is effective in the real world, it's hard to believe that this is a serious risk. It's similar to the kind of grey goo scenario that nanotechnology might pose: development will likely be gradual, and by the time it becomes sophisticated enough to pose a serious risk, it will also be understood and controlled by countermeasures.

I also wonder why we don't see any alien paperclip maximizers out there. If there are any in the observable universe, our FAI will lose anyway, since it is far behind in its development.

Oops, posted an article of my own before seeing this, apologies for the duplication! I think it might be best to leave mine up, though: it's in the main rather than the discussion area, and it includes some quotes from the article.

"I think the relation between breadth of intelligence and depth of empathy is a subtle issue which none of us fully understands (yet). It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences."

Oh ok, never mind about FAI, the woo-woo will save us all.

Way to quotemine. Here's the next line, for people who haven't yet read the article themselves:

"But I'm not terribly certain of this, any more than I'm terribly certain of its opposite."

Regardless of how not-so-certain he is about it, it seems pretty irrational to say, "well, maybe this bad scenario will happen, but my wishful thinking scenario is just as likely!".

My interpretation was that he found both about equally plausible, i.e. not very. And quoting the fragment out of context made it seem like the part you quoted was what he actually believed to be the truth, which certainly misrepresented him, regardless of exactly how implausible he found that scenario to be.
