Introduction I have long been interested in the limitations of LLMs, because understanding them seems to be the most important step towards getting timelines right. Right now there is great uncertainty about timelines, with very short timelines becoming plausible but also remaining hotly contested. This led me...
It seems to me that the question of whether LLMs are conscious is a bit of a red herring. Instead, the salient point is that they are sequence-learning systems similar to our cortex (+ hippocampus). Therefore we should expect them to be able to learn sequences. What we should not expect...
Introduction There is a new paper and LessWrong post about "learned look-ahead in a chess-playing neural network". This has long been a research interest of mine, for reasons that are well stated in the paper: > Can neural networks learn to use algorithms such as look-ahead or search internally? Or are...
In the past, AI systems have reached superhuman performance by adding search to neural networks, while the network alone could not reach the level of the best humans. At least this seems to be the case for AlphaGo, AlphaZero/Leela, AlphaGeometry, and probably more, while AlphaStar and OpenAI Five were...
There is a growing(?) fraction of people who consider LLMs to be AGI. And it makes sense. Clearly, when the term AGI was established, this is what was meant: a machine that can tackle a wide range of problems and communicate in natural language, very different from all the examples of...
Amidst the rumours about a new breakthrough at OpenAI, I thought I'd better publish this draft before it gets completely overtaken by reality. It is essentially a collection of "gaps" between GPT-4 and the human mind. Unfortunately, the rumours around Q* force me to change the conclusion from "very short...