AlphaGo versus Lee Sedol

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?


"Neural networks" vs. "Not neural networks" is a completely wrong way to look at the problem.

For one thing, very different algorithms are lumped under the title "neural networks". For example, Boltzmann machines and feedforward networks are both called "neural networks", but IMO that's more because the name is fashionable than because of any actual similarity in how they work.
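To make the contrast concrete, here is a minimal sketch (toy weights, not a real implementation of either model): a feedforward unit computes a deterministic function of its input, while a restricted Boltzmann machine uses its weights to define an energy over joint binary states, and learning and inference proceed by sampling from that distribution rather than by a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # toy weight matrix: 3 visible units, 2 hidden units
b = np.zeros(2)               # hidden biases

x = np.array([1.0, 0.0, 1.0])  # an example input / visible state

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feedforward layer: a deterministic pass, hidden = sigmoid(W^T x + b)
hidden = sigmoid(x @ W + b)    # activations, each strictly in (0, 1)

# Restricted Boltzmann machine: the same W instead parameterizes an
# energy E(v, h) = -v^T W h - b^T h over joint binary states (v, h);
# lower energy means higher probability under the model.
def energy(v, h):
    return -(v @ W @ h) - (b @ h)

h_state = np.array([1.0, 0.0])  # one possible binary hidden state
E = energy(x, h_state)          # a scalar energy, not an "output"
```

Same weight matrix, two entirely different computational roles, which is the sense in which the shared label obscures more than it reveals.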

More importantly, the really significant distinction is making progress by trial and error vs. making progress by theoretical understanding. The...

turchin: I have one more thought about this. If we work on the AI safety problem, we should find ways to secure existing AIs, not ideal AIs. It's like working on nuclear energy safety: it would be easier to secure nuclear reactors than nuclear weapons, but knowing that the weapons will be created anyway, we still need to find a way to make them safe. The world has chosen to develop neural-net-based AI, so we should think about how to install safety in it.


by gjm · 1 min read · 9th Mar 2016 · 183 comments


There have been a couple of brief discussions of this in the Open Thread, but it seems likely to generate more so here's a place for it.

The original paper in Nature about AlphaGo.

Google Asia Pacific blog, where results will be posted. DeepMind's YouTube channel, where the games are being live-streamed.

Discussion on Hacker News after AlphaGo's win of the first game.