Jonathan Yan

Comments

I find the Outside View very compelling. This post might as well be titled "AGI won't be a huge deal, we survive".

System 1 and System 2, as defined in the book Thinking, Fast and Slow, could be relevant here. A typical modern neural network has a much "wider" System 1 than humans, but a much "shallower" System 2.

I have lengthened my AI timelines substantially. Back in 2016 I felt AGI was so near that I founded a startup aiming for safe AGI before 2030. After spending a couple of years looking deeply into the topic, I realized I was wrong and shut the project down.

Relative to my expectations in 2018, AI progress since then has been even more underwhelming. Now I see AGI as so far away that I can no longer come up with a model for sensible timeline estimates.

Interested. I have assisted in various AI safety efforts in China, through which I gained a broad understanding of the relevant literature. I have almost no background in biology, however.

I have time in August to read and comment on successive drafts. Being a beta reader for innovative, interdisciplinary ideas sounds really exciting to me. I look forward to getting involved!

On tech sector outperformance, I think the more appropriate lookback period starts around 2016, when AlphaGo became famous.

On predictions, there were also countless predictions that tech would take over the world. An abundance of boom-or-bust predictions is a constant feature of capital markets and should be given no weight.

On causal attribution, note that there have been many other advances in the tech sector over this period, such as cloud computing, mobile computing, industry digitization, and Moore's law. It's unclear how much of the value added is driven by deep learning specifically.