[ Question ]

If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed?

by Adam Scholl · 26th Aug 2019 · 1 min read · 9 comments

Set aside the specifics of Jeff Hawkins' proposed model of this algorithm—i.e., that grid cells also exist in the neocortex, that displacement cells exist at all and exist in the neocortex, and in general that the machinery we use for navigating concepts is a repurposed version of the machinery we use for navigating physical spaces. If the basic underlying claim is true—that the neocortex is a homogeneous, general-purpose computing substrate that basically just runs one learning algorithm throughout—how should we expect this to affect the development of transformative AI?

I wasn't able to find much discussion of this question, aside from the old Hanson/Yudkowsky foom debates and this AI Impacts post arguing that the existence of "one algorithm" wouldn't lead to discontinuous AI progress.

My own intuition would be to update strongly toward shorter timelines and faster takeoff, but I think I may well be missing things.


3 Answers

My own updates after I wrote that were:

  • Increased likelihood of self-supervised learning algorithms as either a big part or even the entirety of the technical path to AGI—insofar as self-supervised learning is the lion's share of how the neocortex learning algorithm supposedly works. That's why I've been writing posts like Self-Supervised Learning and AGI safety.
  • Shorter timelines and faster takeoff, insofar as we think the algorithm is not overwhelmingly complicated.
  • Increased likelihood of "one algorithm to rule them all" over Comprehensive AI Services. This might be on the meta-level of one learning algorithm to rule them all, and we feed it biology books to get a superintelligent biologist, and separately we feed it psychology books and nonfiction TV to get a superintelligent psychological charismatic manipulator, etc. Or it might be on the base level of one trained model to rule them all, and we train it with all 50 million books and 100,000 years of YouTube and anything else we can find. The latter can ultimately be more capable (you understand biology papers better if you also understand statistics, etc. etc.), but on the other hand the former is more likely if there are scaling limits where memory access grinds to a halt after too many gigabytes get loaded into the world-model, or things like that. Either way, it would make it likelier for AGI (or at least the final missing ingredient of AGI) to be developed in one place, i.e. the search-engine model rather than the open-source software model.

I see no reason that the most capable learner should be simple. If humans turn out to have some complexity limiter on their learning algorithm, such as the requirement that complex machinery be universal within a sexually reproducing species (otherwise it would be too fragile), then I expect the first cortical entity with the ability to self-modify, free of that constraint, to foom all the way up.

It will appear in a random moment of time when someone will guess it. However, this "randomness" is not evenly distributed. The probability of guessing the correct algo is higher with time (as more people is trying) and also it is higher in a DeepMind-like company than in a random basement as Deep Mind (or similar company) has already hired best minds. Also larger company has higher capability to test ideas, as it has higher computational capacity and other resources.