If the general capabilities necessary for effective self-improvement, or to get AGI directly, can be reached without the apparent complexity of the brain structures that enable general intelligence in humans (just with memory, more data, compute, and some algorithmic breakthroughs, or even none), I wonder why those structures are not needed.

Sure, it's not necessary that a sufficiently advanced AI work like the brain, but if you are going to defend short timelines, there has to be some intuition about why those neural structures are not needed to create at least an autonomous utility maximizer.
 


2 Answers

Hoagy

Sep 27, 2023

90

The one-line summary is that NNs can transmit signals directly from any part of the network to any other, while the brain has to work only locally.

More broadly, I get the sense that there's been a bit of a shift in at least some parts of theoretical neuroscience, from understanding how we might implement brain-like algorithms to understanding how the local algorithms the brain uses might approximate backprop. That suggests artificial networks might have an easier time than the brain, so it would make sense that we could make something which outcompetes the brain without a similar diversity of neural structures.

This is way outside my area tbh, working off just a couple of things like this paper by Beren Millidge https://arxiv.org/pdf/2006.04182.pdf and some comments by Geoffrey Hinton that I can't source.
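
To make the locality point concrete, here is a minimal toy sketch (an illustrative two-layer network with arbitrary sizes and learning rate, not taken from the paper): the backprop update for the first layer needs an error signal carried back through the second layer's weights, i.e. information from a different part of the network, while a Hebbian-style local rule uses only the pre- and post-synaptic activity available at each connection.

```python
# Toy illustration of "non-local" backprop vs. a local learning rule.
# The architecture, data, and learning rate are arbitrary; this is a sketch, not a claim
# about how the brain or any particular AI system actually learns.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
t = np.array([1.0])             # target
W1 = rng.normal(size=(4, 3))    # input -> hidden weights
W2 = rng.normal(size=(1, 4))    # hidden -> output weights
lr = 0.1

# Forward pass
h = np.tanh(W1 @ x)             # hidden activity
y = W2 @ h                      # output
err = y - t                     # output error

# 1. Backprop update for W1: needs err and W2, i.e. signals that live
#    "far away" from the W1 synapses and must be transported back.
delta_h = (W2.T @ err) * (1 - h ** 2)
W1_backprop_update = -lr * np.outer(delta_h, x)

# 2. Hebbian-style local update for W1: uses only the activity of the
#    two units each weight connects, with no transported error signal.
W1_local_update = lr * np.outer(h, x)

print("backprop update norm:", np.linalg.norm(W1_backprop_update))
print("local update norm:   ", np.linalg.norm(W1_local_update))
```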

So in your model, how much of the progress to AGI can be made just by adding more compute + more data + working memory + algorithms that 'just' keep up with the scaling?

Specifically, do you think that self-reflective thought already emerges from adding those?

Hoagy · 7mo
Not totally sure, but I think it's pretty likely that scaling gets us to AGI, yeah. Or more particularly, it gets us to the point of AIs being able to act as autonomous researchers, or as high (>10x) multipliers on the productivity of human researchers, which seems like the key moment of leverage for deciding how the development of AI will go. I don't have a super clean idea of what self-reflective thought means. I see that e.g. GPT-4 can often say something, think further about it, and then revise its opinion. I would expect a little bit of extra reasoning quality and general competence to push this ability a lot further.
JavierCC · 7mo
The point that you brought up seemed to rest a lot on Hinton's claims, so it seems that his opinions on timelines and AI progress should be quite important. Do you have any recent source for his claims about AI progress?
Hoagy · 7mo
See e.g. "So I think backpropagation is probably much more efficient than what we have in the brain," from https://www.therobotbrains.ai/geoff-hinton-transcript-part-one. More generally, I think the belief that cutting-edge AI systems have some kind of important advantage over humans comes more from human-AI performance comparisons: e.g. GPT-4 far outstrips any individual human's factual knowledge about the world (though it's obviously deficient in other ways) with probably 100x fewer params. A bioanchors-based model of AI development would, imo, predict that this is very unlikely. Whether the core of this advantage is in the form, volume, or information density of the data, or the architecture, or something about the underlying hardware, I am less confident.

Adam Shai

Sep 29, 2023

20

First, brains (and biological systems more generally) have many constraints that artificial networks do not. Brains exist in the context of a physically instantiated body, with heavy energy constraints. Further, they exist in specific niches, with particular evolutionary histories, which has enormous effects on structure and function.

Second, biological brains have different types of intelligence from AI systems, at least currently. A bird is able to land fluidly on a thin branch in windy conditions, while GPT-4 can help you code. In general, the intelligences that one thinks of in the context of AGI do not totally overlap with the varied, often physical and metabolic, intelligences of biology.

All that being said, who knows what future AI systems will look like.

5 comments

Sure, it's not necessary that a sufficiently advanced AI has to work like the brain, but there has to be an intuition about why that's not needed to at least create a utility maximizer.

An octopus's brain(s) is nothing like that of mammals, and yet the octopus is comparably intelligent.

Yeah, but I would need more specificity than just giving an example of a brain with a different design.

[anonymous] · 7mo

without the apparent complexity of the brain structures that enable general intelligence in humans

Can you specify what brain structures you mean by that? Doesn't the process of neural network training just produce any useful complexity as a result of selecting for better performance on the training objective? (Same as with human evolution.)

Can you quote any source that provides evidence for that conclusion? 

 

The process of evolution optimised the structures of the brain themselves across generations; training is equivalent only to the development of the individual. The structures of the brain seem not to be determined by development alone, and that's one reason why I said "apparent complexity". From Yudkowsky:

  • "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
[anonymous] · 7mo

I don't have a source; it's just intuitive, given that evolution is an example of a training process and human brains are neural networks.