I'm sure there's no need to point to Robin Hanson's anti-foom writings? The best single article is, IMO, "Irreducible Detail", which essentially questions the generality of intelligence.
Here is a key quote:
Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. (Source: http://www.overcomingbias.com/2014/07/limits-on-generality.html)
It is true that adult human brains are built out of many domain-specific modules, but these modules develop via a very general universal learning system. The neuroscientific evidence directly contradicts the evolved modularity hypothesis, which Hanson appears to be heavily influenced by. That being said, his points about AI progress being driven by a large number of mostly independent advances still carry through.
Hanson's general analysis of the economics of an AGI takeoff seems pretty sound - even if it is much more likely that neuro-AGI precedes ems.
One may try the following conjecture: synthetic biology is so simple, and AI so complex, that the risk of extinction from artificial viruses comes far earlier in time. Even if both risks have the same probability individually, the one that comes first gets the biggest part of the total probability.
For example, let Pv = 0.9 be the risk from viruses in the absence of any other risks, and Pai = 0.9 be the risk from AI in the absence of any viruses. But Pv may materialize in the first half of the 21st century, and Pai in the second. In this case we have a total probability of extinction of 0.99, of which 0.9 comes from viruses and 0.09 comes from AI.
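A minimal sketch of that arithmetic (using the commenter's Pv and Pai, and assuming the virus risk window closes before the AI risk window opens):

```python
# Toy model of sequential extinction risks: the earlier risk absorbs most of
# the total probability, because the later risk only matters if we survive
# the first one. Numbers are the commenter's own examples.

p_virus = 0.9   # Pv: extinction by engineered viruses, absent other risks
p_ai = 0.9      # Pai: extinction by AI, absent the virus risk

p_extinct_by_virus = p_virus                # resolved first (first half of the century)
p_extinct_by_ai = (1 - p_virus) * p_ai      # only reached if we survive the viruses
p_total = p_extinct_by_virus + p_extinct_by_ai

print(p_extinct_by_virus)   # 0.9
print(p_extinct_by_ai)      # ~0.09
print(p_total)              # ~0.99
```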
If this is true, then promoting AI as the main existential risk is a misallocation of resources.
If we look more closely at Pv and Pai, we may find that Pv increases exponentially in time because of a Moore's law in biotech, while Pai describes a one-time event and is constant: AI will be friendly or not. (It may also have a more complex time dependence, but here I just estimate the probability that FAI theory will be created and implemented.)
And assuming that AI is the only means to stop the creation of dangerous viruses (this may be untrue, but for the sake of the argument we will supp...
Arguments against AI risk, or arguments against the MIRI conception of AI risk?
I have heard a hint of a whisper of a rumour that I am considered a bit of a contrarian around here... but I am actually a little more convinced of the AI threat in general than I used to be before I encountered Less Wrong. (In particular, at one time I would have said "just pull the plug out", but there's some mileage in the unknowing arguments.)
The short version of the argument against MIRI's version of the AI threat is that it is highly conjunctive. The long version is long: a consequence of having a multi-stage argument, with a fan-out of alternative possibilities at each stage.
Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/
I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.
I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html
On the same note, but probably already widely known, Scott Aaronson on "The Singularity Is Far" (2008): http://www.scottaaronson.com/blog/?p=346
Here is a novel argument you may or may not have heard: We live in the best of all probable worlds due to simulation anthropics. Future FAI civs spend a significant amount of their resources to resimulate and resurrect past humanity - winning the sim race by a landslide (as UFAI is not strongly motivated to sim us in large numbers). As a result of this anthropic selection force, we find ourselves in a universe that is very lucky - it is far more likely to lead to FAI than you would otherwise think.
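A toy version of the anthropic bookkeeping behind this (all numbers are made up, and it assumes a self-indication-style count over observer-copies, which is itself contested):

```python
# Toy anthropic update: if FAI-bound worlds resimulate past humanity many
# times and UFAI-bound worlds do not, an observer who can't tell which copy
# they are should weight FAI-bound worlds by their much larger observer count.

prior_fai = 0.1                 # hypothetical naive prior that our world leads to FAI
prior_ufai = 1 - prior_fai

copies_fai = 1 + 1000           # original history plus many resimulations
copies_ufai = 1                 # UFAI is not strongly motivated to resimulate us

posterior_fai = (prior_fai * copies_fai) / (
    prior_fai * copies_fai + prior_ufai * copies_ufai
)
print(posterior_fai)            # ~0.99: most observer-copies live in FAI-bound worlds
```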
The best standard argument is this: the brain is a universal learning machine - the same general architecture that will necessarily form the basis for any practical AGI. In addition the brain is already near optimal in terms of what can be done for 10 watts with any irreversible learning machine (this is relatively easy to show from wiring energy analysis). Thus any practical AGI is going to be roughly brain like, similar to baby emulations. All of the techniques used to raise humans safely can thus be used to raise AGI safely. LW/MIRI historically reject this argument based - as far as I can tell - on a handwavey notion of 'anthropomorphic bias', which has no technical foundation.
I presented the above argument about four years ago, but I never bothered to spend the time backing it up in excruciating formal detail - until more recently. The last five years of progress in AI strongly support this anthropomorphic AGI viewpoint.
the brain is already near optimal in terms of what can be done for 10 watts with any irreversible computer (this is relatively easy to show from wiring energy analysis).
Do you have a citation for this? My understanding is that biological neural networks operate far from the Landauer Limit (sorry I couldn't find a better citation but this seems to be a common understanding), whereas we already have proposals for hardware that is near that limit.
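For reference, a rough back-of-the-envelope comparison (the brain-side figures are common order-of-magnitude estimates, not measurements I'm citing):

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                          # Boltzmann constant, J/K
T = 310.0                                   # body temperature, K
landauer_per_bit = k_B * T * math.log(2)    # ~3e-21 J

# Rough brain-side estimates (order of magnitude only, assumed not cited):
brain_power = 20.0                          # W, commonly quoted figure
synaptic_events_per_s = 1e15                # ~1e14 synapses firing at ~1-10 Hz

energy_per_event = brain_power / synaptic_events_per_s    # ~2e-14 J
print(landauer_per_bit)                                    # ~3.0e-21 J
print(energy_per_event)                                    # ~2e-14 J
print(energy_per_event / landauer_per_bit)                 # ~1e7x above the limit
```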
Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.
There's plenty of reason to believe that Moore's Law will slow down in the near future.
Progress on AI algorithms has historically been rather slow.
AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.
These three things together suggest that there will be a 'grace period' between the development of general agents
http://kruel.co/2012/07/17/ai-risk-critiques-index/
Kruel's critique sounded very convincing when I first read it.
Thanks for doing this. A lack of self criticism about AI risk is one of the reasons I don't take it too seriously.
I generally agree with http://su3su2u1.tumblr.com/ , but it may not be organized enough to be helpful.
As for MIRI specifically, I think you'd be much better served by mainstream software verification and cryptography research. I've never seen anyone address why that is not the case.
I have a bunch of disorganized notes about why I'm not convinced of AI risk, if you're interested I could share more.
Creating stable AGI that operates in the real world may be unexpectedly difficult. What I mean by this is that we might solve some hard problems in AI, and the result might work in some limited domains, but not be stable in the real world.
An example would be Pascal's Mugging. An AI that maximizes expected utility with an unbounded utility function would spend all its time worrying about incredibly improbable scenarios.
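A minimal sketch of why unbounded utilities cause trouble (all numbers are made up for illustration):

```python
# Toy illustration of Pascal's Mugging: with an unbounded utility function,
# a tiny-probability offer of an astronomical payoff dominates the
# expected-utility comparison, so the agent "pays the mugger".

mundane_action = {"probability": 1.0, "utility": 100}
muggers_offer = {"probability": 1e-30, "utility": 10**40}   # unbounded utility allows this

def expected_utility(option):
    return option["probability"] * option["utility"]

print(expected_utility(mundane_action))   # 100
print(expected_utility(muggers_offer))    # 1e10 -- the mugging wins
```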
Reinforcement learning agents might simply hijack their own reinforcement channel, set it to INF, and be done.
Or the Anvil Problem where a reinf...
It doesn't matter how safe you are about AI if there's a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be.
A UFAI is unlikely to stop at the home planet of the civilization that creates it. Rather, you'd expect such a thing to continue to convert the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.
AI doesn't work as a filter because it's the kind of disaster likely to keep spreading and we'd expect to see larg...
I've always considered the psychological critiques of AI risk (e.g. "the singularity is just rapture of the nerds") to be very weak ad hominems. However, they might be relevant for parts of the AI risk thesis that depend on the judgements of the people presenting it. The most relevant part would be in checking whether people have fully considered the arguments against their position, and gone out to find more such arguments.
There exists a technological plateau for general intelligence algorithms, and biological neural networks already come close to optimal. Hence, recursive self-improvement quickly hits an asymptote.
Therefore, artificial intelligence represents a potentially much cheaper way to produce and coordinate intelligence compared to raising humans. However, it will not have orders of magnitude more capability for innovation than the human race. In particular, if humans are unable to discover breakthroughs enabling vastly more efficient production of computational ...
Ray Kurzweil seems to believe that humans will keep pace with AI through implants or other augmentation, presumably up to the point that WBE becomes possible and humans get all/most of the advantages an AGI would have. Arguments from self-interest might show that humans will very strongly prefer human WBE over training an arbitrary neural network of the same size to the point that it becomes AGI, simply because they hope to be the human who gets WBE. If humans are content with creating AGIs that are provably less intelligent than the most intelligent huma...
I wrote some arguments that I think are novel here: http://lesswrong.com/r/discussion/lw/ly8/values_at_compile_time/cam0
At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...