He doesn't "diss UFAI concerns", he says "I don’t know how to productively work on that", which seems accurate for the conventional meaning of "productive" even after looking into the issue. He doesn't address the question of whether it's worthwhile to work on the issue unproductively, illustrating his point with the analogy (overpopulation on Mars) where it's clearly not.
His commentary on the article on G+ seems to get closer to "dissing" territory:
Enough thoughtful AI researchers (including Yoshua Bengio, Yann LeCun) have criticized the hype about evil killer robots or "superintelligence," that I hope we can finally lay that argument to rest. This article summarizes why I don't currently spend my time working on preventing AI from turning evil.
See this video at 39:30 for Yann LeCun giving some comments. Also, here is an IEEE interview:
Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?
LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.
There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.
Spectrum: What do you think he is going to accomplish in his job at Google?
LeCun: Not much has come out so far.
Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?
LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.
When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.
Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.
LeCun: Not anytime soon.
Spectrum: Or ever.
LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.
I think that since he draws an analogy to a problem it would actually be absurd to work on (there's no point working on overpopulation on Mars unless several other events happen first), he does seem to be suggesting that it's ridiculous to worry about things like UFAI now rather than "hundreds, maybe thousands of years from now".
Anyway, I only thought the post was interesting from a PR point of view. The AI problem has been getting good press lately, with Musk, Gates, et al. suggesting that it is something worth worrying about. Ng hasn't said anything that will move the larger discussion in an interesting direction, but his post does have the ability to move the whole problem back very slightly in the direction of "weird problem only weird people will spend time worrying about" in the minds of the general public.
Juergen Schmidhuber is another notable AI researcher. He did an AMA a few days ago. He does believe in the singularity and that AIs will probably be indifferent to us if not hostile.
I also think he is the most likely person to create it. A lot of his research is general AI-ish stuff and not just deep convnets.
I am interpreting IlyaShpitser as commenting on OphilaDros's presentation: why say Ng "disses" UFAI concerns instead of "dismisses" them?
It also doesn't help that the underlying content is a handful of different issues that all bleed together: the orthogonality question, the imminence question, and the Hollywood question. Ng is against Hollywood and against imminence, and I haven't read enough of his writing on the subject to be sure of his thoughts on orthogonality, which is one of the actual meaningful points of contention between MIRI and other experts on the issue. (And even those three don't touch on Ng's short objection, that he doesn't see a fruitful open problem!)
My impression was that imminence is much more a point of contention than orthogonality. Who specifically do you have in mind?
My impression was that imminence is much more a point of contention than orthogonality.
This article is a good place to start in clarifying the MIRI position. Since their estimate for imminence seems to boil down to "we asked the community what they thought and made a distribution," I don't see that as contention.
There is broad uncertainty about timelines, but the MIRI position is "uncertainty means we should not be confident we have all the time we need," not "we're confident it will happen soon," which is where someone would need to be for me to say they're "for imminence."
Interesting. I considered imminence more of a point of contention because the most outspoken "AI risk is overhyped" people are mostly using it as an argument (and I consider this bunch, Yann LeCun, Yoshua Bengio, and Andrew Ng, way more serious than Searle and Brooks).
Yes, I didn't mean Ng. "Diss" is sort of unfortunate phrasing; he just wants to get work done. Sorry for being unclear.
Ok, sure. Changed the title in line with Vaniver's suggestion.
I had not understood what the "tribal talk" comment was referring to either, and then decided to put only as much effort into understanding it as the commenter had put into being understood. :)
maybe there will be some AI that turn evil
That's the critical mistake. AIs don't turn evil. If they could, we would have FAI half-solved.
AIs deviate from their intended programming, in ways that are dangerous for humans. And it's not thousands of years away; it's as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.
Even your clarification seems too anthropomorphic to me.
AIs don't turn evil, but I don't think they deviate from their programming either. Their programming deviates from their programmers' values. (Or, another possibility, their programmers' values deviate from humanity's values.)
AIs don't turn evil, but I don't think they deviate from their programming either.
They do, if they are self-improving, although I imagine you could collapse "programming" and "meta-programming", in which case an AI would only partially deviate. The point is that you can't expect things to turn out so simply when talking about a runaway AI.
AIs deviate from their intended programming, in ways that are dangerous for humans. And it's not thousands of years away; it's as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.
But that's a very different kind of issue than AI taking over the world and killing or enslaving all humans.
EDIT:
To expand: all technologies introduce safety issues.
Once we got fire, some people got burnt. This doesn't imply that UFFire (Unfriendly Fire) is the most pressing existential risk for humanity and that we must devote a huge amount of resources to preventing it and never use fire until we have proved that it will not turn "unfriendly".
Well, there's a phenomenon called "flashover", which occurs in a confined environment when the temperature of a fire becomes so high that everything combustible within it starts to burn and feed the reaction.
Now, imagine that the whole world could become a confined environment for a flashover...
However, UFFire does not uncontrollably and exponentially reproduce or improve its own functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.
In fact, in such a scenario, we should dedicate a huge amount of resources to preventing it and never use fire until we have proved it will not turn "unfriendly".
However, UFFire does not uncontrollably and exponentially reproduce or improve its own functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.
Do you realize this is a totally hypothetical scenario?
http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/
(An earlier version of this article was titled "Andrew Ng disses UFAI concerns", which is the phrasing many of the commenters are responding to.)