Steven Byrnes

I'm an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at Astera. See https://sjbyrnes.com/agi.html for a summary of my research and sorted list of writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, X/Twitter, Bluesky, LinkedIn, and more at my website.

Sequences

Intuitive Self-Models
Valence
Intro to Brain-Like-AGI Safety

Comments

(Not really answering your question, just chatting.)

What’s your source for “JVN had ‘the physical intuition of a doorknob’”? Nothing shows up on google. I’m not sure quite what that phrase is supposed to mean, so context would be helpful. I’m also not sure what “extremely poor perceptual abilities” means exactly.

You might have already seen this, but Poincaré writes about “analysts” and “geometers”:

It is impossible to study the works of the great mathematicians, or even those of the lesser, without noticing and distinguishing two opposite tendencies, or rather two entirely different kinds of minds. The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance. The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.

The method is not imposed by the matter treated. Though one often says of the first that they are analysts and calls the others geometers, that does not prevent the one sort from remaining analysts even when they work at geometry, while the others are still geometers even when they occupy themselves with pure analysis. It is the very nature of their mind which makes them logicians or intuitionalists, and they can not lay it aside when they approach a new subject.

Not sure exactly how that relates, if at all. (What category did Poincaré put himself in? It’s probably in the essay somewhere; I didn’t read it that carefully. I think geometer, based on his work? But Tao is extremely analyst, I think, if we buy this categorization in the first place.)

I’m no JVN/Poincaré/Tao, but if anyone cares, I think I’m kinda aphantasia-adjacent, and I think that fact has something to do with why I’m naturally bad at drawing, and why, when I was a kid doing math olympiad problems, I was worse at Euclidean geometry problems than my peers who got similar overall scores.

Kinda related: You might enjoy the book The Culture Map by Erin Meyer, e.g. I copied one of her many figures into §1.5.1 here. The book mostly talks about international differences, but subcultural differences (and sub-sub-…-subcultures, like one particular friend group) can vary along the same axes.

Note that my suggestion (“…try a model where there are 2 (or 3 or whatever) latent schizophrenia subtypes. So then your modeling task is to jointly (1) assign each schizophrenic patient to one of the 2 (or 3 or whatever) latent subtypes, and (2) make a simple linear SNP predictor for each subtype…”)

…is a special case of @TsviBT’s suggestion (“what about small but not tiny circuits?”).

Namely, my suggestion is the case of the following “small but not tiny circuit”: X OR Y […maybe OR Z etc.].

This OR circuit is nice in that it’s a step towards better approximation almost no matter what the true underlying structure is. For example, if there’s a U-shaped quadratic dependency, the OR can capture whether you’re on the descending vs ascending side of the U. Or if there’s a sum of two lognormals, one is often much bigger than the other, and the OR can capture which one it is. Or whatever.
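Here’s a toy numpy sketch (my own construction, purely illustrative) of the U-shaped case: a single linear predictor is nearly useless on a quadratic dependency, but splitting on the OR-style latent bit (“which side of the U are you on?”) and fitting one line per side captures most of the structure.

```python
import numpy as np

# Illustrative only: a U-shaped quadratic dependency with a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x**2 + 0.1 * rng.normal(size=2000)

def linfit_mse(x, y):
    """Mean squared error of the best-fit line a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ coef - y) ** 2)

mse_single = linfit_mse(x, y)            # one linear predictor for everyone
side = x >= 0                            # the latent "OR" bit: ascending side?
mse_split = (linfit_mse(x[side], y[side]) * side.sum()
             + linfit_mse(x[~side], y[~side]) * (~side).sum()) / len(x)
# mse_split comes out far smaller than mse_single
```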

Thinking about it more, I guess the word “disjoint” in “disjoint root causes” in my comment is not quite right for schizophrenia and most other cases. For what little it’s worth, here’s the picture that was in my head in regards to schizophrenia:

The details don’t matter too much but see 1,2. The blue blob is a schizophrenia diagnosis. The purple arrows represent some genetic variant that makes cortical pyramidal neurons generally less active. For someone predisposed to schizophrenia mainly due to “unusually trigger-happy 5PT cortical neurons”, that genetic variant would be protective against schizophrenia. For someone predisposed to schizophrenia mainly due to “deficient cortex-to-cortex communication”, the same genetic variant would be a risk factor. 

The X OR Y model would work pretty well for this—it would basically pull apart the people towards the top from the people towards the right. But I shouldn’t have said “disjoint root causes” because someone can be in the top-right corner with both contributory factors at once.

(I’m very very far from a schizophrenia expert and haven’t thought this through too much. Maybe think of it as a slightly imaginary illustrative example instead of a confident claim about how schizophrenia definitely works.)

But isn’t this exactly the OP’s point?

Yup, I expected that OP would generally agree with my comment.

First off, you just posted them online

They only posted three questions, out of at least 62 (=1/(.2258-.2097)), perhaps many more than 62. For all I know, they removed those three from the pool when they shared them. That’s what I would do—probably some human will publicly post the answers soon enough. I dunno. But even if they didn’t remove those three questions from the pool, it’s a small fraction of the total.
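Spelling out the arithmetic in that parenthetical (assuming each question carries equal weight in the reported score):

```python
# If one question's weight equals the observed score difference
# 0.2258 - 0.2097, the pool has at least 1/(0.2258 - 0.2097) questions.
pool_lower_bound = 1 / (0.2258 - 0.2097)
print(round(pool_lower_bound))  # 62
```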

You point out that all the questions would be in the LLM company user data, after kagi has run the benchmark once (unless kagi changes out all their questions each time, which I don’t think they do, although they do replace easier questions with harder questions periodically). Well:

  • If an LLM company is training on user data, they’ll get the questions without the answers, which probably wouldn’t make any appreciable difference to the LLM’s ability to answer them;
  • If an LLM company is sending user data to humans as part of RLHF or SFT or whatever, then yes there’s a chance for ground truth answers to sneak in that way—but that’s extremely unlikely to happen, because companies can only afford to send an extraordinarily small fraction of user data to actual humans.

What I'm not clear on is how those two numbers (20,000 genes and a few thousand neuron types) specifically relate to each other in your model of brain functioning. 

Start with 25,000 genes, but then reduce it a bunch because they also have to build hair follicles and the Golgi apparatus and on and on. But then increase it a bit too because each gene has more than one design degree of freedom, e.g. a protein can have multiple active sites, and there’s some ability to tweak which molecules can and cannot reach those active sites and how fast etc. Stuff like that.

Putting those two factors together, I dunno, I figure it’s reasonable to guess that the genome can have a recipe for low thousands of distinct neuron types, each with its own evolutionarily-designed properties and each playing a specific evolutionarily-designed role in the brain algorithm.

And that “low thousands” number is ballpark consistent with the slide-seq thing, and also ballpark consistent with what you get by counting the number of neuron types in a random hypothalamus nucleus and extrapolating. High hundreds, low thousands, I dunno, I’m treating it as a pretty rough estimate.

Hmm, I guess when I think about it, the slide-seq number and the extrapolation number are probably more informative than the genome number. Like, can I really rule out “tens of thousands” just based on the genome size? Umm, not with extreme confidence, I’d have to think about it. But the genome size is at least a good “sanity check” on the other two methods.

Is the idea that each neuron type roughly corresponds to the expression of one or two specific genes, and thus you'd expect <20,000 neuron types?

No, I wouldn’t necessarily expect something so 1-to-1. Just the general information-theory argument: if you have N “design degrees of freedom” and you’re trying to build >>N specific machines that each does a specific thing, then you get stuck on the issue of crosstalk.

For example, suppose that some SNP changes which molecules can get to the active site of some protein. It makes Purkinje cells more active, but also increases the ratio of striatal matrix cells to striosomes, and also makes auditory cortex neurons more sensitive to oxytocin. Now suppose there’s very strong evolutionary pressure for Purkinje cells to be more active. Then maybe that SNP is going to spread through the population. But it’s going to have detrimental side-effects on the striatum and auditory cortex. Ah, but that’s OK, because there’s a different mutation to a different gene which fixes the now-suboptimal striatum, and yet a third mutation that fixes the auditory cortex. Oops, but those two mutations have yet other side-effects on the medulla and … Etc. etc.

…Anyway, if that’s what’s going on, that can be fine! Evolution can sort out this whole system over time, even with crazy side-effects everywhere. But only as long as there are enough “design degrees of freedom” to actually fix all these problems simultaneously. There do have to be more “design degrees of freedom” in the biology / genome than there are constraints / features in the engineering specification, if you want to build a machine that actually works. There doesn’t have to be a 1-to-1 match between design-degrees-of-freedom and items on your engineering blueprint, but you do need that inequality to hold. See what I mean?

Interestingly, the genome does do this! Protocadherins in vertebrates and DSCAM1 are expressed in exactly this way, and it's thought to help neurons to distinguish themselves from other neurons…

Of course in an emulation you could probably just tell the neurons to not interact with themselves

Cool example, thanks! Yeah, that last part is what I would have said.  :)

My take on missing heritability is summed up in Heritability: Five Battles, especially §4.3-4.4. Mental health and personality have way more missing heritability than things like height and blood pressure. I think for things like height and blood pressure etc., you’re limited by sample sizes and noise, and by SNP arrays not capturing things like copy number variation. Harris et al. 2024 says that there exist methods to extract CNVs from SNP data, but that they’re not widely used in practice today. My vote would be to try things like that, to try to squeeze a bit more predictive power in the cases like height and blood pressure where the predictors are already pretty good.

On the other hand, for mental health and personality, there’s way more missing heritability, and I think the explanation is non-additivity. I humbly suggest my §4.3.3 model as a good way to think about what’s going on.

If I were to make one concrete research suggestion, it would be: try a model where there are 2 (or 3 or whatever) latent schizophrenia subtypes. So then your modeling task is to jointly (1) assign each schizophrenic patient to one of the 2 (or 3 or whatever) latent subtypes, and (2) make a simple linear SNP predictor for each subtype. I’m not sure if anyone has tried this already, and I don’t personally know how to solve that joint optimization problem, but it seems like the kind of problem that a statistics-savvy person or team should be able to solve.
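One way to attack that joint optimization problem is k-means-style alternating minimization. Here’s a minimal sketch (my own construction, not a vetted statistical-genetics method; all names and the synthetic data are made up for illustration): alternate between refitting one least-squares linear predictor per subtype and reassigning each case to the subtype whose predictor fits it best.

```python
import numpy as np

def fit_latent_subtypes(G, y, k=2, n_iter=30, seed=0):
    """Alternating-minimization heuristic for the suggestion above: jointly
    (1) assign each case to one of k latent subtypes, and
    (2) fit a simple linear SNP predictor per subtype.
    G: (n_cases, n_snps) genotype matrix; y: (n_cases,) phenotype scores.
    Illustrative sketch only -- no regularization, and it can hit local optima."""
    rng = np.random.default_rng(seed)
    z = rng.integers(k, size=len(y))            # random initial subtype labels
    W = np.zeros((k, G.shape[1]))
    for _ in range(n_iter):
        for j in range(k):                      # (2) refit each subtype's predictor
            if np.any(z == j):
                W[j], *_ = np.linalg.lstsq(G[z == j], y[z == j], rcond=None)
        resid2 = ((G @ W.T) - y[:, None]) ** 2  # (1) reassign each case to the
        z = np.argmin(resid2, axis=1)           #     subtype that predicts it best
    return W, z

# Synthetic check: two latent subtypes whose SNP effects point in
# opposite directions (loosely like the protective-vs-risk picture above).
rng = np.random.default_rng(1)
G = rng.normal(size=(200, 5))
w_true = np.ones(5)
y = np.where(np.arange(200) < 100, G @ w_true, -(G @ w_true))
W, z = fit_latent_subtypes(G, y)
```

On this noiseless toy data the alternating steps separate the two groups almost immediately, whereas a single pooled linear predictor would be near-zero (the two groups’ effects cancel).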

I do definitely think there are multiple disjoint root causes for schizophrenia, as evidenced for example by the fact that some people get the positive symptoms without the cognitive symptoms, IIUC. I have opinions (1,2) about exactly what those disjoint root causes are, but maybe that’s not worth getting into here. Ditto with autism having multiple disjoint root causes—for example, I have a kid who got an autism diagnosis despite having no sensory sensitivities, i.e. the most central symptom of autism!! Ditto with extroversion, neuroticism, etc. having multiple disjoint root causes, IMO.

Good luck!  :)

As for the philosophical objections, it is more that whatever wakes up won't be me if we do it your way. It might act like me and know everything I know but it seems like I would be dead and something else would exist.

Ah, but how do you know that the person that went to bed last night wasn’t a different person, who died, and you are the “something else” that woke up with all of that person’s memories? And then you’ll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know but “you would be dead and something else would exist”?

…It’s fine if you don’t want to keep talking about this. I just couldn’t resist.  :-P

If you have a good theory of what all those components are individually you would still be able to predict something like voltage between two arbitrary points.

I agree that, if you have a full SPICE transistor model, you’ll be able to model any arbitrary crazy configuration of transistors. If you treat a transistor as a cartoon switch, you’ll be able to model integrated circuits perfectly, but not to model transistors in very different weird contexts.

By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.

Yeah I think “brain organoids” are a bit like throwing 1000 transistors and batteries and capacitors into a bowl, and shaking the bowl around, and then soldering every point where two leads are touching each other, and then doing electrical characterization on the resulting monstrosity.  :)

Would you learn anything whatsoever from this activity? Umm, maybe? Or maybe not. Regardless, even if it’s not completely useless, it’s definitely not a central part of understanding or emulating integrated circuits.

(There was a famous paper where it’s claimed that brain organoids can learn to play Pong, but I think it’s p-hacked / cherry-picked.)

There’s just so much structure in which neurons are connected to which in the brain—e.g. the cortex has 6 layers, with specific cell types connected to each other in specific ways, and then there’s cortex-thalamus-cortex connections and on and on. A big ball of randomly-connected neurons is just a totally different thing.

Also, I am not sure if you're proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.

Yes and no. My WBE proposal would be to understand the brain algorithm in general, notice that the algorithm has various adjustable parameters (both because of inter-individual variation and within-lifetime learning of memories, desires, etc.), do a brain-scan that records those parameters for a certain individual, and now you can run that algorithm, and it’s a WBE of that individual.

When you run the algorithm, there is no particular reason to expect that the data structures you want to use for that will superficially resemble neurons, like with a 1-to-1 correspondence. Yes you want to run the same algorithm, producing the same output (within tolerance, such that “it’s the same person”), but presumably you’ll be changing the low-level implementation to mesh better with the affordances of the GPU instruction set rather than the affordances of biological neurons. 

The “philosophical reasons” are presumably that you think it might not be conscious? If so, I disagree, for reasons briefly summarized in §1.6 here.

“Less likely to capture everything we care about especially for individual people” would be a claim that we didn’t measure the right things or are misunderstanding the algorithm, which is possible, but unrelated to the low-level implementation of the algorithm on our chips.

I definitely am NOT an advocate for things like training a foundation model to match fMRI data and calling it a mediocre WBE. (There do exist people who like that idea, just I’m not one of them.) Whatever the actual information storage is, as used by the brain, e.g. synapses, that’s what we want to be measuring individually and including in the WBE.  :)

I second the general point that GDP growth is a funny metric … it seems possible (as far as I know) for a society to invent every possible technology, transform the world into a wild sci-fi land beyond recognition or comprehension each month, etc., without quote-unquote “GDP growth” actually being all that high — cf. What Do GDP Growth Curves Really Mean? and follow-up Some Unorthodox Ways To Achieve High GDP Growth with (conversely) a toy example of sustained quote-unquote “GDP growth” in a static economy.

This is annoying to me, because there’s a massive substantive worldview difference between people who expect, y’know, the thing where the world transforms into a wild sci-fi land beyond recognition or comprehension each month, or whatever, versus the people who are expecting something akin to past technologies like railroads or e-commerce. I really want to talk about that huge worldview difference, in a way that people won’t misunderstand. Saying “>100%/year GDP growth” is a nice way to do that … so it’s annoying that this might be technically incorrect (as far as I know). I don’t have an equally catchy and clear alternative.

(Hmm, I once saw someone (maybe Paul Christiano?) saying “1% of Earth’s land area will be covered with solar cells in X number of years”, or something like that. But that failed to communicate in an interesting way: the person he was talking to treated the claim as so absurd that he must have messed up by misplacing a decimal point :-P ) (Will MacAskill has been trying “century in a decade”, which I think works in some ways but gives the wrong impression in other ways.)

Good question! The idea is, the brain is supposed to do something specific and useful—run a certain algorithm that systematically leads to ecologically-adaptive actions. The size of the genome limits the amount of complexity that can be built into this algorithm. (More discussion here.) For sure, the genome could build a billion different “cell types” by each cell having 30 different flags which are on and off at random in a collection of 100 billion neurons. But … why on earth would the genome do that? And even if you come up with some answer to that question, it would just mean that we have the wrong idea about what’s fundamental; really, the proper reverse-engineering approach in that case would be to figure out 30 things, not a billion things, i.e. what is the function of each of those 30 flags.
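The “billion different cell types from 30 flags” figure is just combinatorics:

```python
# 30 independent binary (on/off) flags per cell give 2**30 distinct
# flag combinations -- about a billion nominal "cell types" from only
# 30 underlying degrees of freedom.
n_flags = 30
n_types = 2 ** n_flags
print(n_types)  # 1073741824, i.e. ~1.07 billion
```

Which is exactly why counting combinations is the wrong unit of analysis: the reverse-engineering target would be the 30 flags, not the 2^30 combinations.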

A kind of exception to the rule that the genome limits the brain algorithm complexity is that the genome can (and does) build within-lifetime learning algorithms into the brain, and then those algorithms run for a billion seconds, and create a massive quantity of intricate complexity in their “trained models”. To understand why an adult behaves how they behave in any possible situation, there are probably billions of things to be reverse-engineered and understood, rather than low-thousands of things. However, as a rule of thumb, I claim that:

  • when the evolutionary learning algorithm adds a new feature to the brain algorithm, it does so by making more different idiosyncratic neuron types and synapse types and neuropeptide receptors and so on,
  • when one of the brain’s within-lifetime learning algorithms adds a new bit of learned content to its trained model, it does so by editing synapses.

Again, I only claim that these are rules-of-thumb, not hard-and-fast rules, but I do think they’re great starting points. Even if there’s a nonzero amount of learned content storage via gene expression, I propose that thinking of it as “changing the neuron type” is not a good way to think about it; it’s still “the same kind of neuron”, and part of the same subproject of the “understanding the brain” megaproject, it’s just that the neuron happens to be storing some adjustable parameter in its nucleus and acting differently in accordance with that.

By contrast, medium spiny neurons versus Purkinje cells versus cortical pyramidal neurons versus magnocellular neurosecretory cells etc. etc. are all just wildly different from each other—they look different, they act different, they play profoundly different roles in the brain algorithm, etc. The genome clearly needs to be dedicating some of its information capacity to specifying how to build each and every of those cell types, individually, such that each of them can play its own particular role in the brain algorithm.

Does that help explain where I’m coming from?
