The "are LLMs conscious" debate bugsme a lot, because it assumes brains and silicon are remotely comparable. I keep arguing that they are not.
Brains are 3D. ~100 trillion synapses packed into 1.4 liters, every neuron within ~40 μm of a capillary, and roughly 400 miles of vasculature handling cooling and nutrient delivery at micron resolution. Our silicon chips are flat slabs with heatsinks glued on top, because we don't have the manufacturing tech for anything else.
As for energy: a brain runs on maybe 20 watts (a rough estimate that includes maintenance). Over 80 years that's ~14,000 kWh total, and humanity has produced countless remarkable minds on that kind of budget. Training GPT-4 reportedly used ~50 GWh. That's about 3,500 brain-lifetimes for one model, before inference.
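For the curious, here's the back-of-the-envelope math. This is only a sketch: the 20 W and ~50 GWh figures are the rough estimates from above, not precise measurements.

```python
# Back-of-the-envelope: brain-lifetime energy vs. one GPT-4 training run.
# All input figures are rough estimates carried over from the text.

BRAIN_POWER_W = 20              # ~20 W continuous, including maintenance
LIFETIME_YEARS = 80
HOURS_PER_YEAR = 365.25 * 24

# Energy of one brain over 80 years, in kWh
brain_lifetime_kwh = BRAIN_POWER_W * LIFETIME_YEARS * HOURS_PER_YEAR / 1000
print(f"One brain, 80 years: ~{brain_lifetime_kwh:,.0f} kWh")   # ~14,026 kWh

# Reported estimate for GPT-4 training: ~50 GWh = 50,000,000 kWh
GPT4_TRAINING_KWH = 50e6
ratio = GPT4_TRAINING_KWH / brain_lifetime_kwh
print(f"Brain-lifetimes per training run: ~{ratio:,.0f}")       # ~3,565
```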
And if Hameroff/Penrose are even partially right about microtubules (a view I'm sympathetic to; why not a Game of Life running at the level of the brain?): a brain has maybe 10^18 to 10^21 tubulin dimers, potentially switching at nanosecond scales (hello, Wolfram!). That's 10^27+ state transitions per second. At 20 watts. While also running a body.
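Same exercise for the microtubule numbers. This is purely illustrative: both the dimer count and the switching rate are the speculative ranges quoted above, not established measurements.

```python
# Speculative ceiling on microtubule "state transitions", using the ranges above.
DIMERS_LOW, DIMERS_HIGH = 1e18, 1e21   # tubulin dimers in one brain (speculative range)
SWITCH_RATE_HZ = 1e9                   # nanosecond-scale switching => ~1e9 transitions/s per dimer

low = DIMERS_LOW * SWITCH_RATE_HZ      # 1e27 transitions/s
high = DIMERS_HIGH * SWITCH_RATE_HZ    # 1e30 transitions/s
print(f"~{low:.0e} to {high:.0e} state transitions per second, at 20 W")
```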
Computers have always been capable of incredible accomplishments we had only dreamt of (and subsequently had to develop the hardware and software for). I don't disagree that LLMs are incredible feats of engineering, and I expect progress to remain some form of exponential. But asking "is ChatGPT conscious like a human" might just be a category error. The only way to answer it with a "yes" is to define consciousness so loosely that everything qualifies.
We don't know what brains are doing, we can't even fathom building anything like them, and we're comparing across a gap we can't properly measure. We also lack the maths, the physics, and the scientific instruments to bridge that gap, and I'd argue it will take a new kind of physics/statistics for emergent intelligence and complex systems.
My final question: can we replicate a human brain digitally without understanding what the human brain is doing in its entirety? It just seems too complex!
The "are LLMs conscious" debate bugs me a lot, because it assumes brains and silicon are remotely comparable. I keep arguing that they are not.
Brains are 3D. ~100 trillion synapses in 1.4 liters, every neuron within 40μm of a capillary, with 400 miles of vasculature doing cooling and nutrient delivery at micron resolution. Our silicon chips are flat with heatsinks glued on top. We don't have the manufacturing tech for anything else.
As for energy, a brain runs on maybe 20 watts (rough estimate, includes maintenance). Over 80 years that's ~14,000 kWh total, and we have developed countless incredible minds across that kind of horizon. Training GPT-4 used ~50 GWh. That's about 3,500 brain-lifetimes for one model, before inference.
And if Hameroff/Penrose are even partially right about microtubules (which I subscribe to, why not have game of life running at the brain level?): A brain has maybe 10^18 to 10^21 tubulin dimers, potentially switching at nanosecond scales (hello Wolfram!). That's 10^27+ state transitions per second. At 20 watts. While also running a body.
Computers have always been capable of incredible accomplishments we have only dreamt of (and subsequently had to develop the hardware and software for). I don’t disagree that LLMs are incredible feats of technology, and I expect that progress will remain some form of exponential. However, asking "is ChatGPT conscious like a human" might just be a category error. The only way to resolve it true is to call everything conscious.
We don't know what brains are doing, we can't even fathom building anything like them, and we're comparing across a gap we can't even measure properly. We also don’t have the maths, the physics, or the scientific instrument to surpass this gap, and I argue it will take us a new kind of physics/statistics for emergent intelligence and complex systems.
My final question is, can we replicate a digital human brain without understanding what the human brain is doing in its completeness? It just seems too complex!