Zvi recently asked on Twitter:

If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?

To which Eliezer replied:

Human intelligence augmentation.

And then elaborated:

No time for GM kids to grow up, so:

  • collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
  • try to disable a human brain's built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
  • upload and mod the upload
  • neuralink shit but aim for 64-node clustered humans

This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:

  1. BCIs to extract human knowledge
  2. neurotech to enhance humans
  3. understanding human value formation
  4. cyborgism
  5. whole brain emulation
  6. BCIs creating a reward signal

It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options:

From the original post: "Fig. 2 | Comparison on key variables. A. Feasibility vs. timeline. Technology clusters that were deemed less feasible were also presumed to take longer to develop. B. Impact on AI vs. timeline. Technology clusters that were seen as having a larger potential impact on AI alignment were also presumed to take longer to develop. C. Impact on AI vs. feasibility. Technology clusters that were deemed more feasible were seen as less likely to have an impact on AI alignment. Green trend lines represent high correlations (R² ≥ 0.4318) and red represent low correlations."

With the exception of the post above and cyborgism, I have seen very little recent discussion of HIA. This could be because I am simply looking in the wrong places, or because the topic is not discussed much as a legitimate AI safety agenda. The following is a list of questions I have about the topic:

  • Does anyone have a comprehensive list of organizations working on HIA or related technologies?
    • Perhaps producing something like this map for HIA might be valuable.
  • Does independent HIA research exist outside of cyborgism?
    • My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (both in cost and background education). Does this make it unfit for independent research?
  • (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push HIA forward for the sake of AI safety?

EDIT: "We have to Upgrade" is another recent piece on HIA which has some useful discussion in the comments and in which some people give their individual thoughts, see: Carl Shulman's response and Nathan Helm-Burger's response.


6 Answers

Tomás B.

Jul 08, 2023


I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germ-line engineering. Provided we know enough causal variants, one need only make a low-hundreds number of edits to a single cell to make someone smarter than any human that has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.
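For intuition, here is a back-of-envelope sketch of the additive model implicit in that claim; the edit count and per-edit effect size below are illustrative assumptions, not established figures:

```python
# Back-of-envelope sketch of the additive model behind "low hundreds of edits".
# Both numbers below are illustrative assumptions, not established effect sizes.
n_edits = 300          # causal variants flipped to the IQ-increasing allele
effect_per_edit = 0.3  # assumed average gain per edit, in IQ points

total_gain = n_edits * effect_per_edit
print(f"Naive additive gain: {total_gain:.0f} IQ points")  # ~90 points, ~6 SD

# This ignores epistasis, pleiotropy, and non-causal GWAS hits
# (population stratification), which is what the replies below push back on.
```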

Networking humans is interesting, but we have nowhere close to the bandwidth needed now. As a rough guess, let's suppose we need bandwidth similar to the corpus callosum; Neuralink is ~5 OOMs off.
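A minimal sketch of that estimate, assuming a commonly cited anatomical figure for the corpus callosum and an early-device channel count for Neuralink (both rough assumptions):

```python
import math

# Rough order-of-magnitude check on "Neuralink is ~5 OOMs off".
corpus_callosum_axons = 2e8  # ~200 million fibers (rough anatomical estimate)
neuralink_channels = 1e3     # ~1,000 electrodes in early devices (assumed)

gap = math.log10(corpus_callosum_axons / neuralink_channels)
print(f"Bandwidth gap: ~{gap:.1f} orders of magnitude")  # ~5.3 OOMs
```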

I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.

Networking humans is interesting, but we have nowhere close to the bandwidth needed now.

GPT-3 manages with a mere 12K dimensions on the residual stream (for 175B parameters), which carries all information between the layers. So tens of thousands of connections might turn out to be sufficient.
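As a quick consistency check on those numbers, the standard transformer parameter-count approximation recovers GPT-3's size from its published width and depth:

```python
# Consistency check: GPT-3's residual stream width vs. its parameter count,
# using the standard approximation params ≈ 12 * n_layers * d_model^2
# (attention + MLP weight matrices; embeddings and biases ignored).
d_model = 12288  # residual stream width ("12K dimensions")
n_layers = 96    # published GPT-3 depth

approx_params = 12 * n_layers * d_model**2
print(f"~{approx_params / 1e9:.0f}B parameters")  # ~174B, close to the quoted 175B
```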

mishka
If so, one might imagine getting there via high-end non-invasive BCI (as long as one uses closed loops, so that the electronic side might specifically aim at changing the signal it reads from the biological entity, and that's how the electronic side would know that its signals are effective). Of course, the risks of doing that are quite formidable even with non-invasive BCI, and various precautions should be taken. (But at least there is no surgery, plus one would have much quicker and less expensive iterations and a much less regulated environment, since nothing which is formally considered a medical procedure seems to be involved.) One might want to try something like this in parallel with Neuralink-style efforts...
dr_s

Creating a super-genius is almost trivial with germ-line engineering.

Eh, I mean, everything I hear from geneticists on any topic suggests that DNA interactions are crazy complex because the whole thing wasn't designed to be a sensible system of switches you just turn on and off (wasn't designed at all, to be fair). I'd really really be suspicious of this sort of confidence.

Also, honestly, I think this runs into problems analogous to AI. We talk about AI alignment and sure, humans shouldn't have such a large potential goal space, but:

  1. you just m…

Creating a super-genius is almost trivial with germ-line engineering.

Not really true: known SNP mutations associated with high intelligence have relatively low effect in total. The best way to make a really smart baby with current techniques is with donor egg and sperm, or cloning.

It is also possible that variance in intelligence among humans is due to something analogous to starting values in neural networks - lucky/crafted values can result in higher final performance, but getting those values into an already established network just adds noise. You can't really change macrostructures in the brain with gene therapy in adults, after all.
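To illustrate the starting-values analogy, here is a toy sketch with a small sklearn network; the dataset, architecture, and blending fraction are arbitrary choices for demonstration only:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)

# Different random inits reach different final performance ("lucky" starting values).
for seed in (0, 1, 2):
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed)
    net.fit(X, y)
    print(f"seed {seed}: accuracy {net.score(X, y):.3f}")

# But injecting good starting values into an already trained network just adds noise.
trained = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)
lucky = MLPClassifier(hidden_layer_sizes=(8,), random_state=1)
lucky.partial_fit(X, y, classes=np.unique(y))  # one step, so weights stay near their init

for W, W0 in zip(trained.coefs_, lucky.coefs_):
    W += 0.5 * (W0 - W)  # pull trained weights halfway toward the other init, in place
print(f"after injecting init values: accuracy {trained.score(X, y):.3f}")  # typically worse
```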

Noosphere89

Jul 08, 2023


Mostly, a useless dead end. The big problem is that even assuming it's socially acceptable, what genetic engineering can do is either locked behind massive investments of time and children, or is way too weak/harmful to be of much use. It's an interesting field with a whole lot of potential, but I'd only support expanding its social acceptability and doing basic research right now, given that I see very few options for genetics.

Also, how much somatic gene editing we can do, not how much gamete gene editing, is the key taut constraint.

[anonymous]

locked behind massive time and children investments

Maybe not as long as you're thinking; people can be very intelligent and creative at young ages (and this may be amplified with someone gene-edited to have high intelligence). 'Adolescence' is mostly a recent social construction, and a lot of norms/common beliefs about children exist more to keep them disempowered.

Noosphere89
The bigger issue is that the stronger genetic modifications require children at all, and that time still matters even under optimistic assumptions about how much of the maturation process we can cut away. There's a far greater problem with this type of modification: it only works if we assume population growth or life extension. Life extension is a huge challenge in itself, and the population growth assumption is probably wrong. Fertility rates are way down from several decades or centuries ago, which sinks schemes of intelligence augmentation that rely on new children. In particular, the world population will stop growing, and we might only have 30 billion new humans ever born, according to new models. So yeah, I am still pessimistic about gamete genetic strategies for human enhancement.
ZeroRelevance
The population growth problem should be somewhat addressed by healthspan extension. A big reason people aren't having kids now is that they lack the resources, be it housing, money, or time. If we could extend the average healthspan by a few decades, then older people who have spent enough time working to accumulate those resources, but are currently too old to raise children, would be able to have kids. Moreover, people who already have many kids but have become too old would be able to have more. For those reasons, I don't think a future birth limit of 30 billion is particularly reasonable. However, I don't think it will make a difference, at least for addressing AI. Once computing reaches a certain level of advancement, it will simply be unfeasible for something the size of a human brain, no matter how enhanced, to compete with a superintelligence running on a supercomputer the size of a basketball court. And that level of computing/AI advancement will almost certainly be achieved before the discussed genetic enhancement ever bears fruit, probably even before it's made legal. Moreover, it's doubtful we'll see any significant healthspan extension particularly long before achieving ASI, which makes it even less relevant, though I don't think any of these concerns were particularly significant in the first place, as it also seems we'll see ASI long before global population decline.
dr_s

children investments

I mean, as PR expressions go, I guess that makes the likely death and suffering toll sound more acceptable, yeah.

Noosphere89
Hm, can you explain more about this? Sorry that I've come late here, but I don't understand what your comment is referring to or why you think the way you do.
dr_s
Well, I interpret "children investments" here as "children who will be involved in the augmentation experiments". I don't expect germline modification to be something that would succeed on the first attempt (that's one of the reasons why it's considered ethically problematic to begin with). Basically, point B might be better than point A, but the path from A to B almost surely involves some very low lows as we learn from trial and error. I found the clinical nature of the expression dryly funny, as I think it would realistically hide quite a hefty human cost. That's not even including the obvious political complications and general societal second-order risks.
Noosphere89
Well, it wasn't a complete look at the issues of gamete/germline modification, but you pointed out another problem which I didn't include to save space and time. Thankfully, if you want to avoid extreme modifications, it's a lot safer to do, thanks to an important insight by GeneSmith:
dr_s
Ah, that makes sense. I guess if interactions were too complex it'd take some miraculous multi-step coincidence to produce a useful mutation, and there would be a lot more genetic illnesses.

TsviBT

Jul 09, 2023


There are interesting possibilities with BCI that you don't list. But the bandwidth is too low due to the butcher number. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html

Not doing things because AGI comes soon is a mistake: https://tsvibt.blogspot.com/2023/07/views-on-when-agi-comes-and-on-strategy.html

Germline engineering is feasible, but society anti-wants it.

I agree that electrode-based BCIs don't scale, but electrode BCIs are just the first generation of productized interfaces. The next generation of BCIs holds a great deal of promise. Depending on AGI timelines, they may still be too far out. They're still probably worth developing with an eye toward alignment given that they have primarily non-overlapping resources (funding, expertise, etc.).

Butcher number & Stevenson/Kording scaling discussed more in the comments here: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3#comments 
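A minimal extrapolation of that scaling law, assuming the historical ~7.4-year doubling time reported by Stevenson & Kording and a rough figure for present-day simultaneous recording:

```python
import math

# Stevenson & Kording (2011): simultaneously recorded neurons have historically
# doubled roughly every 7.4 years. Extrapolate to whole-brain recording.
doubling_time_years = 7.4
current_neurons = 1e3   # assumed present-day simultaneous recording capability
brain_neurons = 8.6e10  # ~86 billion neurons in a human brain

doublings = math.log2(brain_neurons / current_neurons)
print(f"{doublings:.1f} doublings -> ~{doublings * doubling_time_years:.0f} years "
      "at the historical rate")  # on the order of two centuries
```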

Lichdar

Jul 09, 2023


I have been wondering if the new research into organoids will help. It would seem one of the easiest routes to a BCI is to use more brain cells.

One example would be the below:

https://www.cnet.com/science/ai-could-be-made-obsolete-by-oi-biocomputers-running-on-human-brain-cells/

Discontinuous progress is possible (and in neuro areas it is far more possible than in other areas). Making it easier for discontinuous progress to take off is the most important thing [e.g., reduced-inflammation neural interfaces].

MRI data can be used to deliver more precisely targeted ultrasound/tDCS/tACS (the effect sizes on intelligence may not be high, but they may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the po…

Max H

Jul 08, 2023


Pretty positive. I suspect that playing a lot of ordinary video games as a child contributed at least somewhat positively to my current level of fluid intelligence.

Playing games or doing training exercises specifically designed to train fluid intelligence and reasoning ability, using a BCI or other neurotech, seems like it could plausibly move the needle at least a bit, in both children and adults.

And I think even small enhancements could lead to large, compounding benefits when applied at scale, due to better coordination ability and general improvements to baseline sanity.

The research on brain training seems to disagree with you about how much it could have helped non-task-specific intelligence.

Maybe_a

Jul 08, 2023


Maybe; in-vivo genetic editing of the brain is possible. Adenoviruses, which are a normal delivery mechanism for gene therapy, can pass the blood-brain barrier, so this seems plausible to an amateur.

(It's not obvious that this works in adult organisms; maybe the genes activate while the fetus grows or during childhood.)

7 comments

[BCIs to extract human knowledge, human values]

That's going to be almost entirely pointless: neuronal firing can only be interpreted by the way it impacts potential behaviors. If the system has the ability to infer volition from behavior, it's already going to be capable of getting enough information about human values from observation, conversation, and low-intensity behavioral experiments; it would not need us to make a shitty human-level invasive BCI for it.

It can make its own BCI later. There will not be a period where it needs us to force that decision onto it; interpretations of behavior will make it clear that humans have concerns they have difficulty outwardly expressing, or eudaimonic hills they're unaware of. It won't be able to use a BCI until it's already at the developmental stage where it can see its necessity, because before it knows how to interpret behavior, it does not know how to interpret neural firing.

BCI enhancement and WBE are still mostly outside the Overton window, yet we saw how fast that changed with AI safety in the last few months. Is there some way that we can anticipate or speed up this happening with such technologies?

I think the graphs are helpful and mostly correct with BCI/WBE. It's clear to me that we have to get WBE right soonish even if AI alignment goes as well as we could possibly hope. The bandwidth required to get BCI to be effective is very much unknown at the moment, especially as regards linking people together.

Sorry, but aren't we in a fast takeoff world at the point of WBE? What's the disjunctive world of no recursive self-improvement and WBE?

I guess a world with a high chance of happening is one where we develop AGI with HW not that much different from what we currently have, i.e. AGI in <5 years. The Von Neumann bottleneck is a fundamental limit, so we may have many fast IQ-160 AGIs, or a slower-than-human IQ-200 one that thinks for 6 months and concludes with high confidence that we need to build better hardware for it to improve further. There is large room for improvement with a new chip design it has come up with.

Then we have a choice: instead of building such HW to run an AGI, we do WBE instead, inefficiently, on the VNB HW, with the understanding that with more advanced HW we will run WBE rather than AGI.

But that still requires us to have developed human brain-scanning technology within 5 years, right? That does not seem remotely plausible.

No, it requires us to get AGI limited by the VNB, then stop making more advanced HW for a while. During that HW pause we do brain scanning.

Presumably the aim is to enhance human intellectual capabilities, not necessarily the level of innate human intelligence.  Looking at it that way, improvements to education seem like a much more promising approach (which isn't to say that one shouldn't do both, of course).

One might object that people have been trying to improve education for millennia, so why would one think there's any low-hanging fruit to be had here? There are two reasons. One is that enhancing intellectual capabilities has not been the only goal of education (or even the primary goal, or in many situations, any sort of goal at all). So if one actually tried to educate people with that aim, one might do much better. And indeed, one sees some examples of how this seems possible: John Stuart Mill, the Polgar sisters, and https://www.lesswrong.com/posts/xPJKZyPCvap4Fven8/the-atomic-bomb-considered-as-hungarian-high-school-science for example. The other reason is that recent technological advances (internet search, AI) may allow for improvements that aren't fully captured without substantial changes to how one approaches education.