I went to this event to listen to Jaan Tallinn, Scott Aaronson, and Don Eigler discuss AGI, the Singularity, and [F]AI research. Not surprisingly, Jaan, whose ideas are significantly influenced by what EY preaches, advocated that research into AGI friendliness is needed just as urgently as AGI research proper.

Scott was rather more laid back, estimating that a "FOOMable" AGI is probably 10,000 years away, and that, while the AGI problem is already really really hard, the FAI problem is harder still, so expending significant effort on the FAI problem before we understand the AGI issues better is probably not a good use of resources.

Don, who makes atom-sized gates in his lab, suggested that Moore's law will probably level off before AGI becomes a reality. When asked, he said he can see another 10^3-10^4 times improvement in chip complexity from technological innovation alone, without the need for new scientific breakthroughs. This includes both miniaturizing gates to near-atomic size and introducing 3D layouts with multiple interconnect layers. He expects the latter to be a significant breakthrough, provided the power consumption and dissipation issues are solved. A 10^4 times improvement corresponds to about 30-50 years at the current slope.
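To sanity-check that last figure: a 10^4 times improvement takes log2(10^4) ≈ 13.3 doublings, so the 30-50 year estimate implicitly assumes one doubling every 2.5-3.5 years or so (Don didn't state his assumed cadence; the specific values below are mine). A quick sketch:

```python
import math

# Doublings needed for a 10^4x improvement in chip complexity
doublings = math.log2(1e4)  # ~13.3

# Assumed Moore's-law cadence: one doubling every 2 to 3.5 years
# (illustrative values; the talk gave no specific slope)
for years_per_doubling in (2.0, 2.5, 3.0, 3.5):
    total_years = doublings * years_per_doubling
    print(f"{years_per_doubling} yr/doubling -> ~{total_years:.0f} years")
```

At the classic 2-year doubling this gives ~27 years; at 2.5-3.5 years per doubling it gives ~33-47 years, consistent with the 30-50 year range quoted above.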

A large chunk of the discussion rehashed the standard points about FAI (Pascal's-wager-style arguments and counterarguments, augmentation and uploading as possible ways around UFAI, etc.), all of which have been discussed here endlessly, so I will not repeat them.

The video of the event will apparently be posted eventually; no time frame was given. I have a marginal-quality voice recording made with my phone, available if anyone really wants to listen.

If any of the Vancouver LWers attended, please feel free to share your impressions.



Thanks for your report. I wish Shane Legg had given this much detail when he shared his thoughts on Rich Sutton's talk on the singularity.
