How can we make many humans who are very good at solving difficult problems?

Summary (table of made-up numbers)

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

Call to action

If you have a shitload of money, there are some projects you can give money to that would make supergenius humans on demand happen faster. If you have a fuckton of money, there are projects whose creation you could fund that would greatly accelerate this technology.

If you're young and smart, or are already an expert in either stem cell / reproductive biology, biotech, or anything related to brain-computer interfaces, there are some projects you could work on.

If neither, think hard, maybe I missed something.

You can DM me or gmail me at tsvibtcontact.

Context

The goal

What empowers humanity is the ability of humans to notice, recognize, remember, correlate, ideate, tinker, explain, test, judge, communicate, interrogate, and design. To increase human empowerment, improve those abilities by improving their source: human brains.

AGI is going to destroy the future's promise of massive humane value. To prevent that, create humans who can navigate the creation of AGI. Humans alive now can't figure out how to make AGI that leads to a humane universe.

These are desirable virtues: philosophical problem-solving ability, creativity, wisdom, taste, memory, speed, cleverness, understanding, judgement. These virtues depend on mental and social software, but can also be enhanced by enhancing human brains.

How much? To navigate the creation of AGI will likely require solving philosophical problems that are beyond the capabilities of the current population of humans, given the available time (some decades). Six standard deviations is about 1 in 10^9; seven standard deviations is about 1 in 10^12. So the goal is to create many people who are 7 SDs above the mean in cognitive capabilities. That's "strong human intelligence amplification". (Why not more SDs? There are many downside risks to changing the process that creates humans, so going further is an unnecessary risk.)
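
As a quick check of those tail frequencies (a minimal sketch in Python; it assumes the trait is Gaussian and uses the one-sided upper tail):

    # Frequency of a Gaussian outlier at +6 and +7 SDs (one-sided tail).
    from scipy.stats import norm

    for sd in (6, 7):
        p = norm.sf(sd)  # survival function: P(Z > sd)
        print(f"+{sd} SD: about 1 in {1 / p:.2e}")
    # +6 SD: about 1 in 1.01e+09 (~10^9)
    # +7 SD: about 1 in 7.81e+11 (~10^12)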

It is my conviction that this is the only way forward for humanity.

Constraint: Algernon's law

Algernon's law: If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness. If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

Ways around Algernon's law, increasing intelligence anyway:

  • We could apply a stronger selection pressure than human-evolution applied. The selection pressure that human-evolution applied to humans is capped (somehow) by the variation of inclusive genetic fitness (IGF) among all germline cells. So it can only push mutational load down to some point.
  • Maybe human-evolution (perhaps recently) selected against intelligence beyond some point.
  • We could come up with better design ideas for mind-hardware.
  • We could use resources that evolution didn't have. We have metal, wires, radios, basically unlimited electric and metabolic power, reliable high-quality nutrition, mechanical cooling devices, etc.
  • Given our resources, some properties that would have been disadvantages are no longer major disadvantages. E.g. a higher metabolic cost is barely a meaningful cost.
  • We have different values from evolution; we might want to trade away IGF to gain intelligence.

How to know what makes a smart brain

Figure it out ourselves

  • We can test interventions and see what works.
  • We can think about what, mechanically, the brain needs in order to function well.
  • We can think about thinking and then think of ways to think better.

Copy nature's work

  • There are seven billion natural experiments, juz runnin aroun doin stuff. We can observe the behaviors of the humans and learn what circumstances of their creation lead to fewer or more cognitive capabilities.
  • We can see what human-evolution invested in, aimed at cognitive capabilities, and add more of that.

Brain emulation

The approach

Method: figure out how neurons work, scan human brains, make a simulation of a scanned brain, and then use software improvements to make the brain think better.

The idea is to have a human brain, but with the advantages of being in a computer: faster processing, more scalable hardware, more introspectable (e.g. read access to all internals, even if they are obscured; computation traces), reproducible computations, A/B testing components or other tweaks, low-level optimizable, process forking. This is a "figure it out ourselves" method——we'd have to figure out what makes the emulated brain smarter.

Problems

  • While we have some handle on the fast (<1 second) processes that happen in a neuron, no one knows much about the slow (>5 second) processes. The slow processes are necessary for what we care about in thinking. People working on brain emulation mostly aren't working on this problem because they have enough problems as it is.

  • Experiments here, the sort that would give 0-to-1 end-to-end feedback about whether the whole thing is working, would be extremely expensive; and unit tests are much harder to calibrate (what reference to use?).

  • Partial success could constitute a major AGI advance, which would be extremely dangerous. Unlike most of the other approaches listed here, brain emulations wouldn't be hardware-bound (skull-size bound).

  • The potential for value drift——making a human-like mind with altered / distorted / alien values——is much higher here than with the other approaches. This might be especially selected for: subcortical brain structures, which are especially value-laden, are more physiologically heterogeneous than cortical structures, and therefore would require substantially more scientific work to model accurately. Further: because the emulation approach is based on copying as much as possible and then filling in details by seeing what works, many details will be filled in by non-humane processes (such as the shaping processes in normal human childhood).

Fundamentally, brain emulations are a 0-to-1 move, whereas the other approaches take a normal human brain as the basic engine and then modify it in some way. The 0-to-1 approach is more difficult, more speculative, and riskier.

Genomic approaches

These approaches look at the 7 billion natural experiments and see which genetic variants correlate with intelligence. IQ is a very imperfect but measurable and sufficient proxy for problem-solving ability. Since more than 70% of the variance in IQ is explained by genetic variation, we can extract a lot of what nature knows about what makes brains have many capabilities. We can't get that knowledge about capable brains in a form usable as engineering (to build a brain from scratch), but we can at least get it in a form usable as scores (which genomes make brains with fewer or more capabilities). These are "copy nature's work" approaches.
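
To make "usable as scores" concrete, here is a minimal sketch (Python; the effect sizes are made up for illustration, not taken from any real GWAS) of what a polygenic score is: a weighted sum over a genome's trait-associated variants. A real score, like the one cited under "Germline engineering" below, has the same shape, with weights estimated from data.

    # A polygenic score: score = sum_i beta_i * allele_count_i.
    # The betas here are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_variants = 3, 10_000
    betas = rng.normal(0.0, 0.01, n_variants)                 # made-up per-variant effects
    genotypes = rng.binomial(2, 0.5, (n_people, n_variants))  # allele counts: 0, 1, or 2

    scores = genotypes @ betas  # one score per genome
    print(scores)               # higher score = predicted more of the trait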

Adult brain gene editing

The approach

Method: edit IQ-positive variants into the brain cells of adult humans.

See "Significantly Enhancing ...".

Problems

  • Delivery is difficult.

  • Editors damage DNA.

  • The effect is greatly attenuated, compared to germline genetics. In adulthood, learning windows have been passed by; many genes are no longer active; damage that accumulates has already been accumulated; many cells don't receive the edits. This adds up to an optimistic ceiling somewhere around +2 or +3 SDs.

Germline engineering

This is the way that will work. (Note that there are many downside risks to germline engineering, though AFAICT they can be alleviated to such an extent that the tradeoff is worth it by far.)

The approach

Method: make a baby from a cell that has a genome that has many IQ-positive genetic variants.

Subtasks:

  • Know what genome would produce geniuses. This is already solved well enough. There are already polygenic scores for IQ that explain >12% of the observed variance in IQ (pgscatalog.org/score/PGS003724/). Since a score explaining a fraction r² of trait variance converts selection on the score into trait gain at rate r = √(r²), and 12% > 1/9, 10 SDs of raw selection power would translate into trait selection power at a rate greater than √(1/9) = 1/3, giving >3.3 SDs of IQ trait selection power, i.e. +50 IQ points (see the sketch after this list).

  • Make a cell with such a genome. This is probably not that hard——via CRISPR editing stem cells, via iterated meiotic selection (IMS), or via chromosome selection. My math and simulations show that several methods would achieve strong intelligence amplification. If induced meiosis into culturable cells is developed, IMS can provide >10 SDs of raw selection power given very roughly $10^5 and a few months.

  • Know what epigenomic state (in sperm / egg / zygote) leads to healthy development. This is not fully understood——it's an open problem that can be worked on.

  • Given a cell, make a derived cell (diploid mitotic or haploid meiotic offspring cell) with that epigenomic state. This is not fully understood——it's an open problem that can be worked on. This is the main bottleneck.
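
As a sketch of the selection arithmetic above (Python; the 12% variance-explained figure is from the text, while the best-of-10-per-round structure is a hypothetical stand-in for iterated selection, and it ignores the shrinking within-round variance a real procedure would face):

    import math
    import numpy as np

    rng = np.random.default_rng(0)

    # Raw selection power: each round, keep the best of n candidate genomes.
    # Expected gain per round is E[max of n standard normals] score-SDs.
    n_per_round, n_rounds = 10, 7
    per_round = rng.standard_normal((200_000, n_per_round)).max(axis=1).mean()
    raw_sds = per_round * n_rounds
    print(f"raw selection power: ~{raw_sds:.1f} score-SDs")  # ~10.8

    # Trait conversion: a score explaining r^2 of trait variance converts
    # score-SDs into trait-SDs at rate r = sqrt(r^2).
    r2 = 0.12
    print(f"IQ gain: ~{raw_sds * math.sqrt(r2):.1f} SDs,"
          f" ~{raw_sds * math.sqrt(r2) * 15:.0f} points")    # ~3.7 SDs, ~56 points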

These tasks don't necessarily completely factor out. For example, some approaches might try to "piggyback" off the natural epigenomic reset by using chromosomes from natural gametes or zygotes, which will have the correct epigenomic state already.

See also Branwen, "Embryo Selection ...".

More information on request. Some of the important research is happening, but there's always room for more funding and talent.

Problems

  • It takes a long time; the baby has to grow up. (But we probably have time, and delaying AGI only helps if you have an out.)

  • Correcting the epigenomic state of a cell to be developmentally competent is unsolved.

  • The baby can't consent, unlike with other approaches, which work with adults. (But the baby can also be made genomically disposed to be exceptionally healthy and sane.)

  • It's the most politically contentious approach.

Signaling molecules for creative brains

The approach

Method: identify master signaling molecules that control brain areas or brain developmental stages that are associated with problem-solving ability; treat adult brains with those signaling molecules.

Due to evolved modularity, organic systems are governed by gene regulatory networks (GRNs). Maybe we can isolate and artificially activate GRNs that generate physiological states that produce cognitive capabilities not otherwise available in a default adult's brain. The hope is that there's a very small set of master regulators that can turn on larger circuits with strong orchestrated effects, as is the case with hormones, so that treatments are relatively simple, high-leverage, and discoverable. For example, maybe we could replicate the signaling context that activates childhood learning capabilities, or maybe we could replicate the signaling context that activates parietal problem-solving in more brain tissue.

I haven't looked into this enough to know whether or not it makes sense. This is a "copy nature's work" approach: nature knows more about how to make brains that are good at thinking, than what is expressed in a normal adult human.

Problems

  • Who knows what negative effects might result.

  • Learning windows might be irreversibly lost after childhood, e.g. by long-range connections being irrecoverably pruned.

Brain-brain electrical interface approaches

Brain-computer interfaces don't obviously give an opportunity for large increases in creative philosophical problem-solving ability. See the discussion in "Prosthetic connectivity". The fundamental problem is that we, programming the computer part, don't know how to write code that does transformations that will be useful for neural minds.

But brain-brain interfaces——adding connections between brain tissues that normally aren't connected——might increase those abilities. These approaches use electrodes to read electrical signals from neurons, then transmit those signals (perhaps compressed/filtered/transformed) through wires / fiber optic cables / EM waves, then write them to other neurons through other electrodes. These are "copy nature's work" approaches, in the sense that we think nature made neurons that know how to arrange themselves usefully when connected with other neurons.

Problems with all electrical brain interface approaches

  • The butcher number. Current electrodes kill more neurons than they record. That doesn't scale safely to millions of connections.
  • Bad feedback. Neural synapses are not strictly feedforward; there is often reciprocal signaling and regulation. Electrodes wouldn't communicate that sort of feedback, which might be important for learning.

Massive cerebral prosthetic connectivity

[Figure: white matter in the human brain. Source: https://www.neuromedia.ca/white-matter/]

Half of the human brain is white matter, i.e. neuronal axons with fatty sheaths around them to make them transmit signals faster. White matter is ~1/10 the volume of rodent brains, but ~1/2 the volume of human brains. Wiring is expensive and gets minimized; see "Principles of Neural Design" by Sterling and Laughlin. All these long-range axons are a huge metabolic expense. That means fast, long-range, high-bandwidth (so to speak——there are many different points involved) communication is important to cognitive capabilities.

A better-researched comparison would be helpful. But vaguely, my guess is that if we compare long-range neuronal axons to metal wires, fiber optic cables, or EM transmissions, we'd see (amortized over millions of connections): axons are in the same ballpark in terms of energy efficiency, but slower, lower bandwidth, and more voluminous. This leads to:

Method: add many millions of read-write electrodes to several brain areas, and then connect them to each other.

See "Prosthetic connectivity" for discussion of variants and problems. The main problem is that current brain implants furnish <10^4 connections, but >10^6 would probably be needed to have a major effect on problem-solving ability, and electrodes tend to kill neurons at the insertion site. I don't know how to accelerate this, assuming that Neuralink is already on the ball well enough.

Human / human interface

Method: add many thousands of read-write electrodes to several brain areas in two different brains, and then connect them to each other.

If one person could think with two brains, they'd be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, lower cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events). But it's not clear that there should be much qualitative increase in philosophical problem-solving ability.

A key advantage over prosthetic connectivity is that the benefits might require a couple of orders of magnitude (OOMs) fewer connections. That alone makes this method worth trying, as it will probably be feasible soon.

Interface with brain tissue in a vat

Method: grow neurons in vitro, and then connect them to a human brain.

The advantage of this approach is that it would in principle be scalable. The main additional obstacle, beyond any neural-neural interface approaches, is growing cognitively useful tissue in vitro. This is not completely out of the question——see "DishBrain"——but who knows if it would be feasible.

Massive neural transplantation

The approach

Method: grow >10^8 neurons (or appropriate stem cells) in vitro, and then put them into a human brain.

There have been some experiments along these lines, at a smaller scale, aimed at treating brain damage.

The idea is simply to scale up the brain's computing wetware.

Problems

  • It would be a complex and risky surgery.
  • We don't know how to make high-quality neurons in vitro.
  • The arrangement of the neurons might be important, and would be harder to replicate. Using donor tissue might fix this, but becomes more gruesome and potentially risky.
  • It might be difficult to get transplanted tissue to integrate. There's at least some evidence that human cerebral organoids can integrate into mouse brains.
  • Problem-solving might be bottlenecked on long-range communication rather than neuron count.

Support for thinking

Generally, these approaches try to improve human thinking by modifying the algorithm-like elements involved in thinking. They are "figure it out ourselves" approaches.

The approaches

There is external support:

Method: create artifacts that offload some elements of thinking to a computer or other external device.

E.g. the printing press, the text editor, the search engine, the typechecker.

There is mental software:

Method: create methods of thinking that improve thinking.

E.g. the practice of mathematical proof, the practice of noticing rationalization, the practice of investigating boundaries.

There is social software:

Method: create methods of social organization that support and motivate thinking.

E.g. a shared narrative in which such-and-such cognitive tasks are worth doing, the culture of a productive research group.

Method: create methods of social organization that constitute multi-person thinking systems.

E.g. git.

Problems

  • The basic problem is that the core activity, human thinking, is not visible or understood. As a consequence, problems and solutions can't be shared / reproduced / analysed / refactored / debugged. Philosophers couldn't even keep paying attention to the question. There are major persistent blind spots around important cognitive tasks that have bad feedback.
  • Solutions are highly context dependent——they depend on variables that aren't controlled by the technology being developed. This adds to the unscalability of these solutions.
  • The context contains strong adversarial memes, which limits these properties of solutions: speed (onboarding time), scope (how many people), mental energy budget (fraction of each person's energy), and robustness (stability over time and context).

FAQ

What about weak amplification

Getting rid of lead poisoning should absolutely be a priority. It won't greatly increase humanity's maximum intelligence level though.

What about ...

  • BCIs? weaksauce
  • Nootropics? weaksauce
  • Brain training? weaksauce
  • Transplanting bird neurons? Seems risky and unlikely to work.
  • Something something bloodflow? weaksauce
  • Transcranial magnetic stimulation? IDK, probably weaksauce. This is a "counting up from negative to zero" thing; might remove inhibitions or trauma responses, or add useful noise that breaks anti-helpful states, or something. But it won't raise the cap on insight, probably——people sometimes get to have their peak problem-solving anyway.
  • Ultrasound? ditto
  • Neurofeedback? Possibly... seems like a better bet than other stuff like this, but probably weaksauce.
  • Getting good sleep? weaksauce——good but doesn't make supergeniuses
  • Gut microbiome? weaksauce
  • Mnemonic systems? weaksauce
  • Software exobrain? weaksauce
  • LLMs? no
  • Psychedelics? stop
  • Buddhism? Aahhh, I don't think you get what this is about
  • Embracing evil? go away
  • Rotating armodafinil, dextromethorphan, caffeine, nicotine, and lisdexamfetamine? AAHHH NOOO
  • [redacted]? Absolutely not. Go sit in the corner and think about what you were even thinking of doing.

The real intelligence enhancement is ...

Look, I'm all for healing society, healing trauma, increasing collective consciousness, creating a shared vision of the future, ridding ourselves of malign egregores, blah blah. I'm all for it. But it's a difficult, thinky problem. ...So difficult that you might need some good thinking help with that thinky problem...

Is this good to do?

Yeah, probably. There are many downside risks, but the upside is large and the downsides can be greatly alleviated.

Comments
Raemon

Curated. Augmenting human intelligence seems like one of the most important things-to-think-about this century. I appreciated this post's taxonomy.

I appreciate the made-up graph of made-up numbers that Tsvi made up being clearly labeled as such.

I have a feeling that this post could be somewhat more thorough, maybe with more links to the places where someone could follow up on the technical bits of each thread.

[anonymous]
.
Raemon
The point of made up numbers is that they are a helpful tool for teasing out some implicit information from your intuitions, which is often better than not doing that at all, but, it's important that they are useful in a pretty different way from numbers-you-empirically-got-from-somewhere, and thus it's important that they be clearly labeled as made up numbers that Tsvi made up. See: If it's worth doing, it's worth doing with Made Up Statistics
[anonymous]
.
Raemon
Why do you think the table is the most important thing in the article? A different thing Tsvi could have done was say “here’s my best guess of which of these are most important, and my reasoning why”, but this would have been essentially the same thing as the table + surrounding essay but with somewhat less fidelity of what his guesses were for the ranking. Meanwhile I think the most important thing was laying out all the different potential areas of investigation, which I can now reason about on my own.
[anonymous]
TsviBT
Basically what Raemon said. I wanted to summarize my opinions, give people something to disagree with (both the numbers and the rubric), highlight what considerations seem important to me (colored fields); but the numbers are made up (because they are predictions, which are difficult; and they are far from fully operationalized; and they are about a huge variety of complex things, so would be difficult to evaluate; and I've thought hard about some of the numbers, but not about most of them). It's better than giving no numbers, no?
[anonymous]
.
Raemon
FYI I do think the downside of "people may anchor off the numbers" is reasonable to weigh in the calculus of epistemic-community-norm-setting. I would frame the question: "is the downside of people anchoring off potentially-very-off-base numbers worse than the upside of having intuitions somewhat more quantified, with more gears exposed?". I can imagine that question resolving in the "actually yeah it's net negative", but, if you're treating the upside as "zero" I think you're missing some important stuff.

As someone who spent a few years researching this direction intensely before deciding to go work on AI alignment directly (the opposite direction you've gone!), I can't resist throwing in my two cents.

I think germline engineering could do a lot, if we had the multiple generations to work with. As I've told you, I don't think we have anything like near enough time for a single generation (much less five or ten).

I think direct brain tissue implantation is harder even than you imagine. Getting the neurons wired up right in an adult brain is pretty tricky. Even when people do grow new axons and dendrites after an injury to replace a fraction of their lost tissue, this sometimes goes wrong and makes things worse. Misconnected neurons are more of a problem than too few neurons.

I think there's a lot more potential in brain-computer-interfaces than you are giving them credit for, and an application you haven't mentioned.

Some things to consider here:

  1. The experiments that have been tried in humans have been extremely conservative, aiming to fix problems in the most well-understood but least-relevant-to-intelligence areas of the brain (sensory input, motor output). In other words, weaksauce by
...
TsviBT
That's creative. But

  • It seems immoral, maybe depending on details. Depending on how humanized the neurons are, and what you do with the pigs (especially the part where human thinking could get trained into them!), you might be creating moral patients and then maiming and torturing them.
  • It has a very high ick factor. I mean, I'm icked out by it; you're creating monstrosities.
  • I assume it has a high taboo factor.
  • It doesn't seem that practical. I don't immediately see an on-ramp for the innovation; in other words, I don't see intermediate results that would be interesting or useful, e.g. in an academic or commercial context. That's in contrast to germline engineering or brain-brain interfaces, which have lots of component technologies and partial successes that would be useful and interesting. Do you see such things here?
  • Further, it seems far far less scalable than other methods. That means you get way less adoption, which means you get way fewer geniuses. Also, importantly, it means that complaints about inequality become true. With, say, germline engineering, anyone who can lease-to-own a car can also have genetically super healthy, sane, smart kids. With networked-modified-pig-brain-implant-farm-monster, it's a very niche thing only accessible to the rich and/or well-connected. Or is there a way this eventually results in a scalable strong intelligence boost?

That's compelling though, for sure. On the other hand, the quality is going to be much lower compared to human brains. (Though presumably higher quality compared to in vitro brain tissue.) My guess is that quality is way more important in our context. I wouldn't think so as strongly if connection bandwidth were free; in that case, plausibly you can get good work out of the additional tissue. Like, on one end of the spectrum of "what might work", with low-quality high-bandwidth, you're doing something like giving each of your brain's microcolumns an army of 100 additional, shitty microcolumn
Nathan Helm-Burger
We see pretty significant changes in ability of humans when their brain volume changes only a bit. I think if you can 10x the effective brain volume, even if the additional 'regions' are of lower quality, you should expect some dramatic effects. My guess is that if it works at all, you get at least 7 SDs of sudden improvement over a month or so of adaptation, maybe more. As I explained, I think evidence from the human connectome shows that bandwidth is not an issue. We should be able to supply plenty of bandwidth.

I continue to find it strange that you are so convinced that computer simulations of neurons would be insufficient to provide benefit. I'd definitely recommend that before trying to network animal brains to a human. In that case, you can do quite a lot of experimentation with a lot of different neuronal models and machine learning models as possible boosters for just a single human. It's so easy to change what program the computer is running, and how much compute you have hooked up. Seems to me you should prove that this doesn't work before even considering going the animal brain route. I'm confident that no existing research has attempted anything like this, so we have no empirical evidence to show that it wouldn't work. Again, even if each simulated cortical column is only 1% as effective (which seems like a substantial underestimate to me), we'd be able to use enough compute that we could easily simulate 1000x extra.

Have you watched videos of the first Neuralink patient using a computer? He has great cursor control, substantially better than previous implants have been able to deliver. I think this is strong evidence that the implant tech is at acceptable performance level.
Nathan Helm-Burger
I don't think the moral cost is relevant if the thing you are comparing it to is saving the world, and making lots of human and animal lives much better. It seems less problematic to me than a single ordinary pig farm, since you'd be treating these pigs unusually well. Weird that you'd feel good about letting the world get destroyed in order to have one fewer pig farm in it. Are you reasoning from Copenhagen ethics? That approach doesn't resonate with me, so maybe that's why I'm confused.

It is quite impractical. A weird last ditch effort to save the world. It wouldn't be scalable, you'd be enhancing just a handful of volunteers who would then hopefully make rapid progress on alignment. To get a large population of people smarter, polygenic selection seems much better. But slow.

The humanization isn't critical, and it isn't for the purposes of immune-signature matching. It's human genes related to neural development, so that the neurons behave more like human neurons (e.g. forming 10x more synapses in the cortex). Pigs are a better cost-to-brain-matter ratio. I wasn't worrying about animal suffering here, like I said above.
TsviBT
Gotcha. Yeah, I think these strategies probably just don't work. The moral differences are:

  • Humanized neurons.
  • Animals with parts of their brains being exogenously driven; this could cause large amounts of suffering.
  • Animals with humanized thinking patterns (which is part of how the scheme would be helpful in the first place).

Where did you get the impression that I'd feel good about, or choose, that? My list of considerations is a list of considerations. That said, I think morality matters, and ignoring morality is a big red flag. Separately, even if you're pretending to be a ruthless consequentialist, you still want to track morality and ethics and ickyness, because it's a very strong determiner of whether or not other people will want to work on something, which is a very strong determiner of success or failure.
Nathan Helm-Burger
Yes, fair enough. I'm not saying that clearly immoral things should be on the table. It just seems weird to me that this is something that seems approximately equivalent to a common human activity (raising and killing pigs) that isn't widely considered immoral.
Nathan Helm-Burger
FWIW, I wouldn't expect the exogenous driving of a fraction of cortical tissue to result in suffering of the subjects.  I do agree that having humanized neurons being driven in human thought patterns makes it weird from an ethical standpoint.
TsviBT
My reason is that suffering in general seems related to [intentions pushing hard, but with no traction or hope]. A subspecies of that is [multiple drives pushing hard against each other, with nobody pulling the rope sideways]. A new subspecies would be "I'm trying to get my brain tissue to do something, but it's being externally driven, so I'm just scrabbling my hands futilely against a sheer blank cliff wall." and "Bits of my mind are being shredded because I create them successfully by living and demanding stuff of my brain, but the bits are exogenously driven / retrained and forget to do what I made them to do.".
Nathan Helm-Burger
Really hard to know without more research on the subject.  My subjective impression from working with mice and rats is that there isn't a strong negative reaction to having bits of their cortex stimulated in various ways (electrodes, optogenetics). Unlike, say, experiments where we test their startle reaction by placing them in a small cage with a motion sensor and then playing a loud startling sound. They hate that!
TsviBT
This is interesting, but I don't understand what you're trying to say and I'm skeptical of the conclusion. How does this square with half the brain being myelinated axons? Are you talking about adult brains or child brains? If you're up for it, maybe let's have a call at some point.
Nathan Helm-Burger
Half the brain by >>volume<< being myelinated axons. Myelinated axons are extremely volume-wasteful due to their large width over relatively large distances. I'm talking about adult brains. Child brains have slightly more axons (less pruning and aging loss has occurred), but much less myelination. Happy to chat at some point.
TsviBT
Yep, I agree. I vaguely alluded to this by saying "The main additional obstacle [...] is growing cognitively useful tissue in vitro."; what I have in mind is stuff like:

  • Well-organized connectivity, as you say.
  • Actually emulating 5-minute and 5-day behavior of neurons--which I would guess relies on being pretty neuron-like, including at the epigenetic level. IIUC current in vitro neural organoids are kind of shitty--epigenetically speaking they're definitely more like neurons than like hepatocytes, but they're not very close to being neurons.
  • Appropriate distribution of cell types (related to well-organized connectivity). This adds a whole additional wrinkle. Not only do you have to produce a variety of epigenetic states, but also you have to have them be assorted correctly (different regions, layers, connections, densities...). E.g. the right amount of glial cells...
TsviBT
Your characterization of the current state of research matches my impressions (though it's good to hear from someone who knows more). My reasons for thinking BCIs are weaksauce have never been about that, though. The reasons are that:

  • I don't see any compelling case for anything you can do on a computer which, when you hook it up to a human brain, makes the human brain very substantially better at solving philosophical problems. I can think of lots of cool things you can do with a good BCI, and I'm sure you and others can think of lots of other cool things, but that's not answering the question. Do you see a compelling case? What is it? (To be more precise, I do see compelling cases for the few areas I mentioned: prosthetic intrabrain connectivity and networking humans. But those both seem quite difficult technically, and would plausibly be capped in their success by connection bandwidth, which is technically difficult to increase.)
  • It doesn't seem like we understand nearly as much about intelligence compared to evolution (in a weak sense of "understand", that includes stuff encoded in the human genome cloud). So stuff that we'll program in a computer will be qualitatively much less helpful for real human thinking, compared to just copying evolution's work. (If you can't see that LLMs don't think, I don't expect to make progress debating that here.)
Nathan Helm-Burger
I think that cortical microcolumns are fairly close to acting in a pretty well stereotyped way that we can simulate pretty accurately on a computer. And I don't think their precise behavior is all that critical. I think actually you could get 80-90% of the effective capacity by simply having a small (10k? 100k? parameter) transformer standing in for each simulated cortical column, rather than a less compute efficient but more biologically accurate simulation. The tricky part is just setting up the rules for intercolumn connection (excitatory and inhibitory) properly. I've been making progress on this in my research, as I've mentioned to you in the past.

Interregional connections (e.g. parietal lobe to prefrontal lobe, or V1 to V2) are fewer, and consistent enough between different people, and involve many fewer total connections, so they've all been pretty well described by modern neuroscience. The full weighted directed graph is known, along with a good estimate of the variability on the weights seen between individuals.

It's not the case that the whole brain is involved in each specific ability that a person has. The human brain has a lot of functional localization. For a specific skill, like math or language, there is some distributed contribution from various areas but the bulk of the computation for that skill is done by a very specific area. This means that if you want to increase someone's math skill, you probably need to just increase that specific known 5% or so of their brain most relevant to math skill by 10x. This is a lot easier than needing to 10x the entire brain.
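
As a toy illustration of the shape of this claim (everything here is made up: tiny linear maps stand in for the per-column transformers, and a random signed matrix stands in for the excitatory/inhibitory inter-column wiring rules):

    # Per-column models plus a signed inter-column connection graph.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cols, dim = 50, 16
    # Signed wiring: positive = excitatory, negative = inhibitory, 0 = none.
    graph = rng.choice([-1.0, 0.0, 1.0], (n_cols, n_cols), p=[0.05, 0.9, 0.05])
    col_models = rng.normal(0, 0.1, (n_cols, dim, dim))  # one tiny model per column

    state = rng.normal(0, 1, (n_cols, dim))
    for _ in range(10):  # message passing between columns
        inputs = graph @ state  # aggregate signed neighbor activity
        state = np.tanh(np.einsum("cij,cj->ci", col_models, inputs))
    print(state.shape)  # (50, 16): one activity vector per column
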
TsviBT
I don't know enough to evaluate your claims, but more importantly, I can't even just take your word for everything because I don't actually know what you're saying without asking a whole bunch of followup questions. So hopefully we can hash some of this out on the phone.
Nathan Helm-Burger
Sorry that my attempts to communicate technical concepts don't always go smoothly! I keep trying to answer your questions about 'what I think I know and how I think I know it' with dumps of lists of papers. Not ideal! But sometimes I'm not sure what else to do, so.... here's a paper!

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001575

"An estimation of the absolute number of axons indicates that human cortical areas are sparsely connected"
Burke Q. Rosen, Eric Halgren

Abstract: The tracts between cortical areas are conceived as playing a central role in cortical information processing, but their actual numbers have never been determined in humans. Here, we estimate the absolute number of axons linking cortical areas from a whole-cortex diffusion MRI (dMRI) connectome, calibrated using the histologically measured callosal fiber density. Median connectivity is estimated as approximately 6,200 axons between cortical areas within hemisphere and approximately 1,300 axons interhemispherically, with axons connecting functionally related areas surprisingly sparse. For example, we estimate that <5% of the axons in the trunk of the arcuate and superior longitudinal fasciculi connect Wernicke’s and Broca’s areas. These results suggest that detailed information is transmitted between cortical areas either via linkage of the dense local connections or via rare, extraordinarily privileged long-range connections.
Towards_Keeperhood
Wait, are you saying that not only is there quite low long-distance bandwidth, but also relatively low bandwidth between neighboring areas? Numbers would be very helpful. And if there's much higher bandwidth between neighboring regions, might there not be a lot more information that's propagating long-range but only slowly through intermediate areas (or would that be too slow or sth?)? (Relatedly, how crisply does the neocortex factor into different (specialized) regions? (Like I'd have thought it's maybe sorta continuous?))
Nathan Helm-Burger
I'm glad you're curious to learn more! The cortex factors quite crisply into specialized regions. These regions have different cell types and groupings, so were first noticed by early microscope users like Cajal. In a cortical region, neurons are organized first into microcolumns of 80-100 neurons, and then into macrocolumns of many microcolumns. Each microcolumn works together as a group to calculate a function. Neighboring microcolumns inhibit each other. So each macrocolumn is sort of a mixture of experts.

The question then is how many microcolumns from one region send an output to a different region. For the example of V1 to V2, basically every microcolumn in V1 sends a connection to V2 (and vice versa). This is why the connection percentage is about 1%: 100 neurons per microcolumn, 1 of which has a long distance axon to V2. The total number of neurons is roughly 10 million, organized into about 100,000 microcolumns.

For areas that are further apart, they send fewer axons. Which doesn't mean their signal is unimportant, just lower resolution. In that case you'd ask something like "how many microcolumns per macrocolumn send out a long distance axon from region A to region B?" This might be 1, just a summary report of the macrocolumn. So for roughly 10 million neurons, and 100,000 microcolumns organized into around 1000 macrocolumns, you get around 1000 neurons sending axons from region A to region B. More details are in the papers I linked elsewhere in this comment thread.
Towards_Keeperhood
Thanks! Yeah, I believe what you say about the long-distance connections not being that many. I meant that there might be more non-long-distance connections between neighboring areas. (E.g. boundaries of areas are a bit fuzzy iirc, so macrocolumns towards the "edge" of a region are sorta intertwined with macrocolumns on the other side of the "edge".) (I thought when you said V1 to V2 you included those too, but I guess you didn't?) Do you think those inter-area non-long-distance connections are relatively unimportant, and if so why?
Nathan Helm-Burger
Here's a paper describing the portion of the connectome which is invariant between individual people (basal component), versus that which is highly variant (superstructure):

https://arxiv.org/abs/2012.15854

"Uncovering the invariant structural organization of the human connectome"
Anand Pathak, Shakti N. Menon and Sitabhra Sinha (Dated: January 1, 2021)

Abstract: In order to understand the complex cognitive functions of the human brain, it is essential to study the structural macro-connectome, i.e., the wiring of different brain regions to each other through axonal pathways, that has been revealed by imaging techniques. However, the high degree of plasticity and cross-population variability in human brains makes it difficult to relate structure to function, motivating a search for invariant patterns in the connectivity. At the same time, variability within a population can provide information about the generative mechanisms. In this paper we analyze the connection topology and link-weight distribution of human structural connectomes obtained from a database comprising 196 subjects. By demonstrating a correspondence between the occurrence frequency of individual links and their average weight across the population, we show that the process by which the human brain is wired is not independent of the process by which the link weights of the connectome are determined. Furthermore, using the specific distribution of the weights associated with each link over the entire population, we show that a single parameter that is specific to a link can account for its frequency of occurrence, as well as, the variation in its weight across different subjects. This parameter provides a basis for "rescaling" the link weights in each connectome, allowing us to obtain a generic network representative of the human brain, distinct from a simple average over the connectomes. We obtain the functional connectomes by implementing a neural mass model on each of the vertices of the corre

Brain emulation looks closer than your summary table indicates.

Manifold estimates a 48% chance by 2039.

Eon Systems is hiring for work on brain emulation.

Manifold is pretty weak evidence for anything >=1 year away because there are strong incentives to bet on short term markets.

TsviBT
I'm not sure how to integrate such long-term markets from Manifold. But anyway, that market seems to have a very vague notion of emulation. For example, it doesn't mention anything about the emulation doing any useful cognitive work!
Max Lee
Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren't that "close" to these other technologies. Maybe they believe in a ≈38% chance of superintelligence by 2039. PS: Your comment may have caused it to drop to 38%. :)
PeterMcCluskey
Manifold estimates an 81% chance of ASI by 2036, using a definition that looks fairly weak and subjective to me. I've bid the brain emulation market back up a bit.

This is great! Everybody loves human intelligence augmentation, but I've never seen a taxonomy of it before, offering handholds for getting started. 

I'd say "software exobrain" is less "weaksauce," and more "80% of the peak benefits are already tapped out, for conscientious people who have heard of OneNote or Obsidian." I also am still holding out for bird neurons with portia spider architectural efficiency and human cranial volume; but I recognize that may not be as practical as it is cool.

If there's a change to human brains that human-evolution could have made, but didn't, then it is net-neutral or net-negative for inclusive relative genetic fitness. If intelligence is ceteris paribus a fitness advantage, then a change to human brains that increases intelligence must either come with other disadvantages or else be inaccessible to evolution.

You're assuming a steady state. Firstly, evolution takes time. Secondly, if humans were, for example, in an intelligence arms-race with other humans (for example, if smarter people can reliably con dumber...

Nathan Helm-Burger
An example I love of a helpful brain adaptation with few downsides that I know of, which hasn't spread far throughout mammals, is one in seal brains. Seals, unlike whales and dolphins, had an evolutionary niche which caused them to not get as good at holding their breath as would be optimal for them. They had many years of occasionally diving too deep and dying from brain damage related to oxygen deprivation (ROS in neurons). So, some ancient seal had a lucky mutation that gave them a cool trick.

The glial cells which support neurons can easily grow back even if their population gets mostly wiped out. Seals have extra mitochondria in their glial cells and none in their neurons, and export the ATP made in the glial cells to the neurons. This means that the reactive oxygen species from oxygen deprivation of the mitochondria all occur in the glia. So, when a seal stays under too long, their glial cells die instead of their neurons. The result is that they suffer some mental deficiencies while the glia grow back over a few days or a couple weeks (depending on the severity), but then they have no lasting damage. Unlike in other mammals, where we lose neurons that can't grow back.

Given enough time, would humans evolve the same adaptation (if it does turn out to have no downsides)? Maybe, but probably not. There just isn't enough reproductive loss due to stroke/oxygen-deprivation to give a huge advantage to the rare mutant who lucked into it. But since we have genetic engineering now... we could just give the ability to someone. People die occasionally competing in deep freediving competitions, and definitely get brain damage. I bet they'd love to have this mod if it were offered.
Nathan Helm-Burger
Also, sometimes there are 'valleys of failure' which block off otherwise fruitful directions in evolution. If there's a later state that would be much better, but to get there would require too many negative mutations before the positive stuff showed up, the species may simply never get lucky enough to make it through the valley of failure. This means that evolution is heavily limited to things which have mostly clear paths to them. That's a pretty significant limitation!

Short note: We don't need 7SDs to get 7SDs.

If we could increase the average IQ by 2SDs, then we'd have lots of intelligent people looking into intelligence enhancement. In short, intelligence feeds into itself; it might be possible to start the intelligence explosion in humans.

TsviBT

(Just acknowledging that my response is kinda disorganized. Take it or leave it, feel free to ask followups.)

Most easy interventions work on a generational scale. There's pretty easy big wins like eliminating lead poisoning (and, IDK, feeding everyone, basic medicine, internet access, less cannibalistic schooling) which we should absolutely do, regardless of any X-risk concerns. But for X-risk concerns, generational is pretty slow.

This is both in terms of increasing general intelligence, and also in terms of specific capabilities. Even if you bop an adult on the head and make zer +2SDs smarter, ze still would have to spend a bunch of time and effort to train up on some new field that's needed for the next approach to further increasing intelligence. That's not a generational scale exactly, maybe more like 10 years, but still.

We're leaking survival probability mass to an AI intelligence explosion year by year. I think we have something like 0-2 or 0-3 generations before dying to AGI.

To be clear, I'm assuming that when you say "we don't need 7SDs", you mean "we don't need to find an approach that could give 7SDs". (Though to be clear, I agree with that in a literal sense, because you...

StartAtTheEnd
You're correct that the average IQ could be increased in various ways, and that increasing the minimum IQ of the population wouldn't help us here. I was imagining shifting the entire normal distribution two SDs to the right, so that those who are already +4-5SDs would become +6-7SDs. As far as I'm concerned, the progress of humanity stands on the shoulders of giants, and the bottom 99.999% aren't making much of a difference. The threshold for recursive self-improvement in humans, if one exists, is quite high. Perhaps if somebody like Neumann lived today it would be possible.

By the way, most of the people who look into nootropics, meditation and other such things do so because they're not functional, so in a way it's a bit like asking "Why are there so many sick people in hospitals if it's a place for recovery?", though you could make the argument that geniuses would be doing these things if they worked.

My score on IQ tests has increased about 15 points since I was 18, but it's hard to say if I succeeded in increasing my intelligence or if it's just a result of improving my mental health and actually putting a bit of effort into my life. I still think that very high levels of concentration and effort can force the brain to reconstruct itself, but that this process is so unpleasant that people stop doing it once they're good enough (for instance, most people can't read all that fast, despite reading texts for 1000s of hours. But if they spend just a few weeks practicing, they can improve their reading speed by a lot, so this kind of shows how improvement stops once you stop applying pressure).

By the way, I don't know much about neurons. It could be that 4-5SD people are much harder to improve, since the ratio of better states to worse states is much lower.
TsviBT
Right, but those interventions are harder (shifting the right tail further right is especially hard). Also, shifting the distribution is just way different numerically from being able to make anyone who wants be +7SD. If you shift +1SD, you go from 0 people at +7SD to ~8 people. (And note that the shift is, in some ways, more unequal compared to "anyone who wants, for the price of a new car, can reach the effective ceiling".)
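
A quick check of that arithmetic (a sketch; it assumes ~8×10^9 people and a Gaussian trait, where "+7SD" means 7 SDs above the old mean):

    from scipy.stats import norm

    pop = 8e9
    print(pop * norm.sf(7))  # ~0.01 people at +7 SD under the current mean
    print(pop * norm.sf(6))  # ~7.9 people above the same bar after a +1 SD shift
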
StartAtTheEnd
Right, I agree with that. A right shift by 2SDs would make people like Hawking, Einstein, Tesla, etc. about 100 times more common, and make it so that a few people who are 1-2SDs above these people are likely to appear soon. I think this is sufficient, but I don't know enough about human intelligence to guarantee it.

I think it depends on how the SD is increased. If you "merely" create a 150-IQ person with a 20-item working memory, or with an 8SD processing speed, this may not be enough to understand the problem and to solve it. Of course, you can substitute with verbal intelligence, which I think a lot of mathematicians do. I can't rotate 5D objects in my head, but I can write equations on paper which can rotate 5D objects and get the right answer. I think this is how mathematics is progressing past what we can intuitively understand. Of course, if your non-verbal intelligence can keep up, you're much better off, since you can combine any insights from any area of life and get something new out of it.

ditto

we have really not fully explored ultrasound and afaik there is no reason to believe it's inherently weaker than administering signaling molecules. 

TsviBT
Signaling molecules can potentially take advantage of nature's GRNs. Are you saying that ultrasound might too?
sarahconstantin
Neuronal activity could certainly affect gene regulation! so yeah, I think it's possible (which is not a strong claim...lots of things "regulate" other things, that doesn't necessarily make them effective intervention points)
TsviBT
Yeah, of course it affects gene regulation. I'm saying that -- maayybe -- nature has specific broad patterns of gene expression associated with powerful cognition (mainly, creativity and learning in childhood); and since these are implemented as GRNs, they'll have small, discoverable on-off switches. You're copying nature's work about how to tune a brain to think/learn/create. With ultrasound, my impression is that you're kind of like "ok, I want to activate GABA neurons in this vague area of the temporal cortex" or "just turn off the amygdala for a day lol". You're trying to figure out yourself what blobs being on and off is good for thinking; and more importantly you have a smaller action space compared to signaling molecules -- you can only activate / deactivate whatever patterns of gene expression happen to be bundled together in "whatever is downstream of nuking the amygdala for a day".

I think you're underestimating meditation.

Since I've started meditating I've realised that I've been much more sensitive to vibes.

There's a lot of folk who would be scarily capable if they were strong in system 1, in addition to being strong in system 2.

Then there's all the other benefits that meditation can provide if done properly: additional motivation, being better able to break out of narratives/notice patterns.

Then again, this is dependent on there being viable social interventions, rather than just aiming for 6 or 7 standard deviations of increase in intelligence.

Meditation has been practiced for many centuries and millions practice it currently.

Please list 3 people who got deeply into meditation, then went on to change the world in some way, not counting people like Alan Watts who changed the world by promoting or teaching meditation.

I think there are many cases of reasonably successful people who often cite either some variety of meditation, or other self-improvement regimes / habits, as having a big impact on their success. This random article I googled cites the billionaires Ray Dalio, Marc Benioff, and Bill Gates, among others. (https://trytwello.com/ceos-that-meditate/)

Similarly you could find people (like Arnold Schwarzenegger, if I recall?) citing that adopting a more mature, stoic mindset about life was helpful to them -- Ray Dalio has this whole series of videos on "life principles" that he likes. And you could find others endorsing the importance of exercise and good sleep, or of using note-taking apps to stay organized.

I think the problem is not that meditation is ineffective, but that it's not usually a multiple-standard-deviations gamechanger (and when it is, it's probably usually a case of "counting up to zero from negative", as TsviBT calls it), and it's already a known technique. If nobody else in the world meditated or took notes or got enough sleep, you could probably stack those techniques and have a big advantage. But alas, a lot of CEOs and other top performers already know to do this ...

Viliam

To compare to the obvious alternative, is the evidence for meditation stronger than the evidence for prayer? I assume there are also some religious billionaires and other successful people who would attribute their success to praying every day or something like that.

Maybe other people have a very different image of meditation than I do, such that they imagine it as something much more delusional and hyperreligious? Eg, some religious people do stuff like chanting mantras, or visualizing specific images of Buddhist deities, which indeed seems pretty crazy to me.

But the kind of meditation taught by popular secular sources like Sam Harris's Waking Up app, (or that I talk about in my "Examining The Witness" youtube series about the videogame The Witness), seems to me obviously much closer to basic psychology or rationality techniques than to religious practices. Compare Sam Harris's instructions about paying attention to the contents of one's experiences, to Gendlin's idea of "Circling", or Yudkowsky's concept of "sit down and actually try to think of solutions for five minutes", or the art of "noticing confusion", or the original Feynman essay where he describes holding off on proposing solutions. So it's weird to me when people seem really skeptical of meditation and set a very high burden of proof that they wouldn't apply for other mental habits like, say, CFAR techniques.

I'm not like a meditation fanatic -- personally I don't even meditate ...

Viliam
Thanks for answering my question directly in the second half.

I find the testimonies of rationalists who experimented with meditation less convincing than perhaps I should, simply because of selection bias. People who have pre-existing affinity towards "woo" will presumably be more likely to try meditation. And they will be more likely to report that it works, whether it does or not. I am not sure how much I should discount for this, perhaps I overdo it. I don't know.

A proper experiment would require a control group -- some people who were originally skeptical about meditation and Buddhism in general, and only agreed to do some exactly defined exercises, and preferably the reported differences should be measurable somehow. Otherwise, we have another selection bias, that if there are people for whom meditation does nothing, or is even harmful, they will stop trying. So at the end, 100% of people who tried will report success (whether real or imaginary), because those who didn't see any success have selected themselves out.

I approve of making the "secular version of Buddhism", but in a similar way, we could make a "secular version of Christianity". (For example, how is gratitude journaling significantly different from thanking God for all his blessings before you go to sleep?) And yet, I assume that the objection against "secular Christianity" on Less Wrong would be much greater than against "secular Buddhism". Maybe I am wrong, but the fact that no one is currently promoting "secular Christianity" on LW sounds like weak evidence. I suspect the relevant difference is that for an American atheist, Christianity is outgroup, and Buddhism is fargroup. Meditation is culturally acceptable among contrarians, because our neighbors don't do it. But that is unrelated to whether it works or not.

Also, I am not sure how secular the "secular Buddhism" actually is, given that people still go to retreats organized by religious people, etc. It feels too much for me to trust that s
4MondSemmel
Re: successful people who meditate, IIRC in Tim Ferriss' book Tools of Titans, meditation was one of the most commonly mentioned habits of the interviewees.
2TsviBT
Are these generally CEO-ish-types? Obviously "sustainably coping with very high pressure contexts" is an important and useful skill, and plausibly meditation can help a lot with that. But it seems pretty different from and not that related to increasing philosophical problem solving ability.
2MondSemmel
This random article I found repeats the Tim Ferriss claim re: successful people who meditate, but I haven't checked where it appears in the book Tools of Titans. Other than that, I don't see why you'd relate meditation just to high-pressure contexts, rather than also conscientiousness, goal-directedness, etc. To me, it does also seem directly related to increasing philosophical problem-solving ability, particularly when it comes to reasoning about consciousness and other stuff where improved introspection helps most. Sam Harris would be kind of a posterchild for this, right? What I can't see meditation doing is providing the kind of multiple-SD intelligence amplification you're interested in; plus it has other issues, like taking a lot of time (though a "meditation pill" would resolve that) and potential value drift.
6TsviBT
Got any evidence?

Not really.

1Alex K. Chen (parrot)
How about TMS/tFUS/tACS => "meditation"/reducing neural noise? Drastic improvements in mental health/reducing neural noise & rumination are way more feasible than increasing human intelligence (and still have huge potential for very high impact when applied on a population-wide scale [1]), and are possible to do at mass scale (there are some experimental TMS protocols like SAINT/accelerated TMS which aim to capture the benefits of TMS on a 1-2 week timeline). [There's also Wave Neuroscience, which uses mERT and works in conjunction with qEEG, but I'm not sure if it's "ready enough" yet -- it seems to involve some guesswork, and there are a few negative reviews on Reddit.]

There are a few accelerated TMS centers, and they're not FDA-approved for much more than depression, but if we have fast AGI timelines, the money matters less. [Speeding up feedback loops is also important for mass adoption -- which is what both accelerated TMS/SAINT and the "intense tACS program" that people like the NeuroField people (Nicholas Dogris/Tiffany Thompson) and James Croall try to do.] Ideally, the TMS/SAINT or tACS should be done in conjunction with regular monitoring of brainwaves with qEEG or fMRI throughout.

Effect sizes of tFUS are said to be small relative to certain medications/drugs [this is true for neurofeedback/TMS/tACS in general], but part of this may be that people tend to be conservative with tFUS. Leo Zaroff has created an approachable tFUS community in the Bay Area. Still worth trying b/c the opportunity cost of trying them (with the right people) is very low (and very few people in our communities have heard of them). There are some, like Jeff Tarrant and the NeuroField people (I got to meet many of them at ISNR2024 => many are coming to the Suisun Summit now), who explore these montages.

Making EEG (or EEG+fNIRS) much easier to get can be high impact relative to the amount of effort invested [with minimal opportunity cost]. I was pretty impressed with the convenience of

I don't understand. The hard problem of alignment/CEV/etc. is that it's not obvious how to scale intelligence while "maintaining" utility function/preferences, and this still applies for human intelligence amplification.

I suppose this is fine if the only improvement you can expect beyond human-level intelligence is "processing speed", but I would expect superhuman AI to be more intelligent in a variety of ways.

8TsviBT
Yeah, there's a value-drift column in the table of made-up numbers. Values matter and are always at stake, and are relatively more at stake here; and we should think about how to do these things in a way that avoids core value drift. You have major advantages when creating humans but tweaked somehow, compared to creating de novo AGI.

* The main thing is that you're starting with a human. You start with all the stuff that determines human values--a childhood, basal ganglia giving their opinions about stuff, a stomach, a human body with human sensations, hardware empathy, etc. Then you're tweaking things--but not that much. (Except for in brain emulation, which is why it gets the highest value drift rating.)
* Another thing is that there's a strong built-in limit on the strength of one human: skull size. (Also other hardware limits: one pair of eyes and hands, one voicebox, probably 1 or 1.5 threads of attention, etc.) One human just can't do that much--at least not without interfacing with many other humans. (This doesn't apply for brain emulation, and potentially applies less for some brain-brain connectivity enhancements.)
* Another key hardware limit is that there's a limit on how much you can reprogram your thinking, just by introspection and thinking. You can definitely reprogram the high-level protocols you follow, e.g. heuristics like "investigate border cases"; you can maybe influence lower-level processes such as concept-formation by, e.g., getting really good at making new words; but you maybe can't, IDK, tell your brain to allocate microcolumns to analyzing commonalities between the top 1000 best current candidate microcolumns for doing some task; and you definitely can't reprogram neuronal behavior (except through the extremely blunt-force method of drugs).
* A third thing is that there's a more plausible way to actually throttle the rate of intelligence increase, compared to AI. With AI, there's a huge compute overhang, and you have no idea what dia
8Nathan Helm-Burger
Ok, just want to make a very small neuroscience note here: skull size isn't literally the limiting factor alone; it's more like 'resources devoted to brain, and a cost function involving fetal head volume at time of birth'.

Why not literally skull size? Well, because infant skulls are quite malleable, and if the brain continued growing significantly in the period after birth, it would have several months to expand without a physical blocker from the skull. You can see this quite clearly in the sad case of an infant who has overproduction of fluid in the brain (hydrocephalus). This increased pressure within the skull damages and shrinks the brain, but at the same time significantly increases skull size. If it were neural tissue overproduction instead that was causing the increase, the infant skull would similarly expand, making room for more brain tissue. You could induce an increase in skull size by putting a special helmet on the infant that kept a slight negative air pressure outside the skull. This wouldn't affect brain size, though; brain size is controlled by the fetal development genes which tell the neural stem cells how many times to divide.

When I was looking into human intelligence enhancement, this was one of the things I researched. Experiments in mice with increasing the number of times their neural stem cells divide to give them more neurons resulted in... bigger brains, but some combination of the neurons getting crammed tightly into a skull, and reproducing more than expected and thus messing up the timing patterns of various regulatory genes that help set up the repeated cortical motifs (e.g. microcolumns), left the brain-expanded mice with highly disordered brains. This showed up behaviorally as the mice being unusually anti-social and aggressive (mice usually like living in groups). So, it's certainly possible to engineer a fetal brain to have extra neurons and an overall bigger brain, but it'd take more subtle adjustments to a larger nu
5TsviBT
Thanks, this is helpful info. I think when I'm saying skull size, I'm bundling together several things:

* As you put it, "a cost function involving fetal head volume at time of birth".
* The fact that adults, without surgery, have a fixed skull size to work with (so that, for example, any sort of drug, mental technique, or in vivo editing would have some ceiling on the results).
* The fact -- as you give some color to -- that trying to do engineering to the brain to get around this barrier potentially puts you up against very thorny bioengineering problems, because then you're trying to go from zero to one on a problem that evolution didn't solve for you. Namely, evolution didn't solve "how to have 1.5x as many neurons, and set up the surrounding support structures appropriately". I agree brain implants are the most plausible way around this.
* The fact that evolution didn't solve the much-bigger-brain problem, so applying all the info that evolution did work out regarding building capable brains would still result in something with a comparable skull size limit, which would require some other breakthrough technology to get around.

(And I'm not especially saying that there's a hard evolutionary constraint with skull size, which you might have been responding to; I think we'd agree that there's a strong evolutionary pressure on natal skull size.)

Actually I'd expect this to be quite bad, though I'm wildly guessing. One of the main reasons I say the target is +7SDs, maybe +8, rather than "however much we can get", is that the extreme version seems much less confidently safe. We know humans can be +6SDs. It would be pretty surprising if you couldn't push out a bit from that, if you're leveraging the full adaptability of the human ontogenetic program. But going +15SDs or whatever would probably be more like the mice you mentioned. Some ways things could go wrong, from "Downsides ...": You wrote: Can you give more detail on what might actually work? If it involves add
3TsviBT
BTW, do not maim children in the name of X-risk reduction (or in any other name).
2Nathan Helm-Burger
Yes, another good reason to focus on interventions for consenting adults, rather than fetuses or infants.

Is "give the human a calculator and a scratchpad" not allowed in this list?  i.e. if you give a human brain the ability to instantly recall any fact and solve any math problem (by connecting the human brain to a computer via neuralink) seems like this would make us smarter.

We already see this effect in part. For example, having access to chatGPT allows me to program more complicated projects because I can offload sub-problems to the AI (thereby freeing up working-memory to focus on the remaining complexity).  Even just having a piece of paper I c... (read more)

5TsviBT
You mean, recall any fact that has previously been put into text-searchable form (and by you), and solve any calculation problem that's in a reasonably common form. I'm saying that the effect on philosophical problem-solving is just not very large. Yeah, if you've been spending 80% of your time on manually calculating things and 20% on "leaps of logic", and you could just as well spend 90% on the leaps, then calculators help a lot. But it's not by making you able to do significantly better leaps. Maybe you can become better by getting more practice or something? But generally skills tend to plateau pretty sharply--there are always new bottlenecks, like a clicker game. If an improvement only addresses some smallish subset of the difficulty involved in some overall challenge, the overall challenge isn't addressed that much. Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ...? To put it a different way, I don't think Gödel's lack of a big fast calculator mattered too much?
4Logan Zoellner
No, I do not mean that at all. An ideal system would store every piece of information its user has ever seen or heard, in addition to every book/article/program ever written or recorded, and be able to translate problems given in "common English" into objective mathematical proofs, then give an explanation of the answer in English again.

This is an empirical question, but based on my own experience I would speculate the gain is quite significant. Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.

Would I "solve alignment"? Yes. "Get AGI banned"? No, because I solved alignment. "Make sure everyone gets food, or fix the housing crisis"? Both of these are political problems that have nothing to do with "intelligence". If everyone was 10x smarter, maybe they would stop voting for retarded self-destructive policies. Idk, though.
5TsviBT
That's what I said. It excludes, for example, a fact that the human thinks of, unless ze speaks or writes it. It makes you better at calculation, which is relevant for some kinds of math. It doesn't make you better at math in general, though, no. If you're not familiar with higher math (the sort of thing that grad students and professors do), you might not be aware: most of the stuff that most of them do involves very little that one would plug into a calculator. What calculations would you plug into your fast-easy-calculator that would result in you solving alignment?
4Logan Zoellner
  Already wrote an essay about this.
5TsviBT
I don't think most of those proposals make sense, but anyway, the ones that do make sense only make sense with a pretty extreme math oracle--not something that leaves the human to fill in the "leaps of logic". It's just talking about AGI, basically. Which defeats the purpose.
5Logan Zoellner
  A "math proof only" AGI avoids most alignment problems.  There's no need to worry about paperclip maximizing or instrumental convergence.
6TsviBT
Not true. This isn't the place for this debate, but if you want to know:

1. To get an AGI that can solve problems that require lots of genuinely novel thinking, you're probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
2. Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
5Logan Zoellner
  An agent that only thinks about math problems isn't going to take over the real world (it doesn't even have to know the real world exists, as this isn't a thing you can deduce from first principles). We're going to get compute anyway.  Mundane uses of deep learning already use a lot of compute.
3Nathan Helm-Burger
I think whether or not the math-proof-AI accidentally becomes an agent is indeed a moot point if you can successfully create the agent in a censored environment (aka using censored data sets and simulations, with careful obfuscation of the true nature of the substrate on which it is being simulated). I agree with Logan that in such a scenario, the pure math the agent has been trained on doesn't plausibly give even a superhumanly smart agent enough clues about our physical universe to figure out that it could hack its way out, or manipulate some hypothetical hidden operator into letting it out. An agent specializing in human biology could figure this sort of thing out, but one specializing in math? No, I think you can keep the data clean enough to avoid tells. Is there some weird theoretical super-powerful agent who could figure out how to escape its containment despite having no direct knowledge about its substrate or operators? Perhaps, but I don't think you need to create an agent anywhere near that powerful in order to satisfy the specified use case of 'slightly superhuman proof assistant'.
6TsviBT
First of all, we don't know how to keep a computer system secure against humans, let alone superhumans running on the fucking computer. The AI doesn't need to know the color of your shoes or how to snowboard before it breaks its software context, pwns its compute rack, and infects the dev's non-airgapped machine when ze logs in to debug the AI. (Or were you expecting AGI researchers to develop AIs by just... making up what code they thought would be cool, and then spending $10 mill on a run while not measuring anything about the AI except for some proof tasks...?) Second, how do you get world-saving work out of a superhuman proof assistant? Third, if you're not doing sponge alignment, the humans have to communicate with the AI. I would guess that if there's an answer to the second point, it involves not just getting yes/no answers to math questions, but also understanding proofs--in which case you're asking for much higher bandwidth communication.
4Nathan Helm-Burger
Yeah, I think we're not so far apart here. I'm not arguing for a math-proof-AI as a solution because I don't believe that it enables world-saving actions. I'm just trying to say you could have a narrow math-assistant AI at a higher level of math-specific competence relatively safely compared to a general AI which knew facts about computers and humans and such.
3Logan Zoellner
  What data?  Why not just train it on literally 0 data (muZero style)? You think it's going to derive the existence of the physical world from the Peano Axioms? 
3Nathan Helm-Burger
Math data! [Edit: to be clear, I'm not arguing with Logan here, I'm agreeing with Logan. I think it's clear to most people who might read this comment thread that training a model on nothing but pure math data is unlikely to result in something which could hack its way out of computer systems while still anywhere near the ballpark of human genius level. There's just too much missing info that isn't implied by pure math. A more challenging, but I think still feasible, training set would be math and programming. To do this in a safe way for this hypothetical extremely powerful future model architecture, you'd need to 'dehumanize' the code: get rid of all details, like variable names, that could give clues about the real physical universe.]

Questions I have:

  1. Why do you think the potential capability improvement of human-human interface is that high? Can you say more on how you imagine that working?
  2. For WBE, my current (not amazingly informed) model thinks the bottleneck is finding a way to run it that wouldn't result in huge value drift. Is the 2% your guess that we could run it successfully without value drift, or that we can run it at all in a way that fooms even if it breaks alignment and potentially causes s-risk? For the latter case I'd have higher probability on that we could get that withi
... (read more)
2TsviBT
These are both addressed in the post.
4Towards_Keeperhood
Well, for (1) I don't see how what's written in the post matches your 2-20 std estimate. You said yourself "But it's not clear that there should be much qualitative increase in philosophical problem-solving ability.". Like, higher communication bandwidth would be nice, but it's not like more than 30 people can do significantly useful alignment research, and even within those who can, there's a huge heavytail IMO. It would help if you could write more detail -- e.g., do you imagine a smart person effectively getting something like a bigger brain by recruiting areas from some other person (though then it'd presumably require a decent amount of artificial connections again)? Or do you imagine many people turning into something like a hivemind (and how, more precisely, might the hivemind operate, and why would they be able to be much smarter together than individually)? Such details would be helpful.

For (2) I just want to ask for clarification whether your 2% estimate in the table includes mitigating the value drift problems you mentioned. (Which would then seem reasonable to me. But one might also read the table as "2% that it works at all, and even then there would probably be significant value drift".) Like, with a few billion dollars we could manufacture enough electron microscopes to get a human connectome, and I'd unfortunately expect that it's not too hard to guess some of the important learning rules and simulate a bunch until the connectome seems like a plausible equilibrium given the firing and learning rules, and then it can sorta run and bootstrap even if there's significant divergence from the original human.
5TsviBT
Ok some examples:

* Multiple attention heads.
* One person solves a problem that induces genuine creative thinking; the other person watches this, and learns how genuine creative thinking works. Not very feasible with current setup, maybe feasible with low-cost hardware access.
* One person works on a difficult, high-context question; the other person remembers the stack trace, notices and remembers paths [noticed, but not taken, and then forgotten], debugs including subtle shifts, etc. Not very feasible currently without a bunch of distracting exposition. See TAP.
* More direct (hence faster, deeper) implicit knowledge/skill sharing.

But a lot of the point is that there are thoughtforms I'm not aware of, which would be created by networked people. The general idea is as I stated: you've genuinely moved somewhat away from several siloed human minds, toward something more integrated.
4TsviBT
(1): Do you think that one person with 2 or more brains would be 2-20 SDs? I have no idea, that's why the range is so high. (2): The .02 is, as the table says, "as described"; so it should be plausibly a realistic emulation of the human brain. That would include getting slower dynamics right-ish, but wouldn't exclude getting value drift anyway. Maybe. Why do you think this?
1Towards_Keeperhood
If I had another copy of my brain, I'd guess that might give me like +1std or possibly +2std, but it's very hard to predict. If a +6std person got another brain from a +5std person, the effect would be much lower I'd guess, maybe yielding overall +6.4std or possibly +6.8std. But idk, the counterfactual seems hard to predict because I cannot imagine it that concretely. Could be totally wrong.

This was maybe not that well expressed. I mostly don't know, but it doesn't seem all that unlikely it could work. (I might read your timelines post within a week or so, and maybe then I'll have a better model of your model to better locate cruxes, idk.)
5TsviBT
My main evidence is:

1. It's much easier to see the coarse electrical activity, compared to 5-second / 5-minute / 5-hour processes. The former, you just measure voltage or whatever. The latter, you have to do some complicated bio stuff (transcriptomics or other *omics).
2. I've asked something like 8ish people associated with brain emulation stuff about slow processes, and they never have an answer (either they hadn't thought about it, or they're confused and think it won't matter, which I just think they're wrong about, or they're like "yeah totally but we've already got plenty of problems just understanding the fast electrical stuff").
3. We have very little understanding of how the algorithms actually do their magic, so we're relying on just copying all the details well enough that we get the whole thing to work.
1Towards_Keeperhood
I mean, you can look at neurons in vitro and see how they adapt to different stimuli. Idk, I'd weakly guess that the neuron-level learning rules are relatively simple; that they construct more complex learning rules for e.g. cortical minicolumns and eventually cortical columns or something; that we might be able to infer from the connectome what kind of function cortical columns perhaps implement; that this can give us a strong hint for what kind of cortical-column-level learning rules might select for the kind of algorithms implemented there abstractly; and that we can trace rules back to lower levels given the connectome. Tbc, I don't think it must look exactly like that -- just saying something roughly like that, where maybe it's actually some common circuit loops instead of cortical columns which are interesting, or whatever.

Thanks for writing this amazing overview!

Some comments:

  • I think different people might imagine quite different intelligence levels under the label "+7std thinkoompf".
    • E.g. I think that from around +6.3std the heavytail becomes a lot stronger, because those people can bootstrap themselves extremely good mental software. (My rough guess is that Tsvi::+7std = me::+6.5std, though I'd guess many readers would need to correct in the other direction (aka they might imagine +7std as less impressive than Tsvi).)
  • I think one me::Tsvi::+7std person would probably be enough t
... (read more)
4TsviBT
I agree something like this happens, I just don't think it's that strong of an effect.

* A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don't scale (hard to share with other people).
* Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
* It doesn't look to me like we're even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
* There's a kind of power that comes from having many geniuses--think Manhattan project.

Not sure what you're referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.

Plausibly, though I don't know of strong evidence for this. For example, my impression is that modern proof assistants still aren't in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth--but I could imagine this being created soon. Do you have other evidence in mind?
1Towards_Keeperhood
Basically agree, but I think alignment is the kind of problem where one supergenius might matter more. E.g. Einstein basically found general relativity something like 3 times faster than the rest of physics would have. I don't think a Manhattan project would've helped there, because even after Einstein published GR, only relatively few people understood it (if I am informed correctly), and I don't think they could've made progress in the same way Einstein did, but would've needed more experimental evidence. It's plausible to me that there are other potentially pivotal problems that have something of this character, but idk.

Well, not very legible evidence, and I could be wrong, but some of my thoughts on mental software: It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:

* Find a better ontology for modelling what is happening in my mind.
* Train to relatively-effortlessly model my thoughts in the new, better ontology that compresses observations more and thus lets me notice a bit more of what's happening in my mind (and notice pieces where the ontology doesn't seem to fit well).
* Repeat.

The "relatively-effortlessly model well what is happening in my mind" part might help significantly in getting much faster and richer feedback loops for learning thinking skills. When you have a good model of what happened in your mind to produce some output, you can better see the parts that were useless and the parts that were important, see how you want your cognitive algorithms to look, and plan how to train yourself to shape them that way. When you master this kind of review-and-improving really well, you might be able to apply the skill to itself and bootstrap your review process. It's generally hard to predict what someone smarter might figure out, so I wouldn't be confident it's not possible.
2TsviBT
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just... so far I'm either not understanding, or else you're completely making up some big transition between 6 and 6.5?
2Towards_Keeperhood
Yeah, I sorta am. I feel like that's what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it's very little data and maybe I'm wrong.
3TsviBT
My guess would be that you're seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 -> 6.5 transition. See my other comment.
-2Sweetgum
I think you're massively overestimating Eliezer Yudkowsky's intelligence. I would guess it's somewhere between +2 and +3 SD.
7Mo Putera
Seems way underestimated. While I don't think he's at "the largest supergeniuses" level either, even +3 SD implies just top 1 in ~700 i.e. millions of Eliezer-level people worldwide. I've been part of more quantitatively-selected groups talent-wise (e.g. for national scholarships awarded on academic merit) and I've never met anyone like him.
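(A quick check of that arithmetic under a Gaussian assumption -- a sketch, taking world population as roughly 8 billion:)

```python
# Tail probability of +3 SD under a normal distribution, and the implied
# number of people worldwide at or above that level.
from scipy.stats import norm

p = norm.sf(3)                                   # one-tailed, ~0.00135
print(f"1 in {1/p:.0f}")                         # -> 1 in 741
print(f"{8e9 * p / 1e6:.1f} million worldwide")  # -> ~10.8 million
```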
2Sweetgum
But are you sure the way in which he is unique among people you've met is mostly about intelligence rather than intelligence along with other traits?
2TsviBT
Wait are you saying it's illegible, or just bad? I mean are you saying that you've done something impressive and attribute that to doing this--or that you believe someone else has done so--but you can't share why you think so?
1Towards_Keeperhood
Maybe bad would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don't have (though Eliezer does), and I cannot really describe it all that well, and I think it makes sensible predictions, but yeah, idk, I'd stay sceptical given that I'm not that great at saying why I believe what I believe there.

No, I don't know of anyone who did that. It's sorta what I've been aiming for since very recently, and I don't particularly expect a high chance of success, but I'm also not quite +6.3std I think (though I'm only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I'm wrong, but I'd be pretty surprised if something like that wouldn't work for someone with +7std.
4TsviBT
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower... I guess I'm not sure we're disagreeing about much here, except that:

1. I don't know why you're putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be "transitions"; I just expect there to be enough such things that you wouldn't see some major transition at one point. I do think there's an important difference between 5.5 SD and 7.5 SD, which is that now you've created a human who's probably smarter than any human who's ever lived, so you've gone from 0 to 1 on some difficult thoughts; but I don't think that's special about this range -- it would happen at any range.
2. I think that adding more 6 SD or 7 SD people is really important, but you maybe don't as much? Not sure what you think.
1Towards_Keeperhood
First, tbc, I'm always talking about thinkoompf -- not just what's measured by IQ tests, but also sanity and even drive. Idk, I'm not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I'm aware of. So maybe for the current era (by which I mostly mean "after the sequences were published") it's like 1 person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std. (EDIT: Retracted because the evidence is too shaky. It still seems to me like the heavytail of intelligence gets very far very quickly though.) Like, I'd guess before the sequences, and without having the strong motivator of needing to save humanity, the transition might rather have been +6.4std -- +6.8std. Idk.

Though tbc, I don't really expect it to be like "yeah, maybe from 6.3std it enters a faster improvement curve which is then not changing that much", but more like the curve just getting steeper and steeper very fast without there being a visible kink.

I feel like if we now created someone with +6.3std, the person would already become smarter than any person who ever lived, because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).
1Morpheus
Such high diminishing returns in g based on genes seem quite implausible to me, but I would be happy if you could point to evidence to the contrary. If it works well for people with average intelligence, I'd expect it to work at most half as well at +6sd.
1Towards_Keeperhood
Idk, I'd be intuitively surprised if adult augmentation would get someone from +6 to +7. I feel like going from +0 to +3 is a big difference, and from +6 to +6.3 is an almost as big difference too. But idk, maybe not. Maybe it's partially that I think intelligence augmentation interventions get harder once you get into higher intelligence levels: where there were previously easy improvement possibilities, later you might need more entangled groups of genes that are good together, and it's harder to tune those. And it's hard to get very good data on what genes, working together, actually result in very high intelligence, because we don't have that many very smart people.
1Rob Lucas
Is there a reason you are thinking of to expect that transition to happen at exactly the tail end of the distribution of modern human intelligence? There don't seem, as far as I'm aware, to have been any similar transitions in the evolution of modern humans from our chimp-like ancestors. If you look at proxies, like stone tools from Homo habilis to modern humans, you see very slow improvements that slowly, but exponentially, accelerate in the rate of development.

I suspect that most of that improvement, once cultural transmission took off at all, happens because of the ways in which cultural/technological advancements feed into each other (in part due to economic gains meaning higher populations with better networks, which means accelerated discovery, which means more economic gains and higher, better connected populations), and that is hard to disentangle from actual intelligence improvements. So I suppose it's still possible that you could have this exponential progress in technology feeding itself while, at the same time, actual intelligence is hitting a transition to a regime of diminishing returns, and it would be hard to see the latter in the record.

Another decent proxy for intelligence is brain size, though. If intelligence wasn't actually improving, the investment in larger brains just wouldn't pay off evolutionarily, so I expect that when we see brain size increases in the fossil record, we are also seeing intelligence increasing at at least a similar rate. Are there transitions in the fossil record from fast to slow changes in brain size in our lineage? That wouldn't demonstrate diminishing returns to intelligence (it could be diminishing returns in the use of intelligence relative to the other metabolic costs, which is different from particular changes to genes just not impacting intelligence as much as in the past), but it would at least be consistent with it.

Anyway, I'm not entirely sure where to look for evidence of the transition you seem to expec
1Towards_Keeperhood
I mostly expect you start getting more and more into sub-critical intelligence explosion dynamics as you exceed +6std more and more. (E.g. see the second half of this other comment I wrote.) I also expect very smart people will be able to better set up computer-augmented note-organizing systems, or maybe code narrow aligned AIs that might help them with their tasks (in a way that's a lot more useful than current LLMs, but hard for other people to use). But idk. I'm not sure how big the difference between +6 and +6.3std actually is. I also might've confused the actual-competence vs genetic-potential scale. On the scale I used, drive/"how hard one is trying" also plays a big role.

I actually mostly expect this from seeing that intelligence is pretty heavy-tailed. E.g. alignment research capability seems incredibly heavy-tailed to me, though it might be hard to judge the differences in capability there if you're not already one of the relatively few people who are good at alignment research. Another example is how Einstein managed to find general relativity, where the rest of the world combined wouldn't have been able to do it that way without more experimental evidence. I do not know why this is the case. It is (very?) surprising to me. Einstein didn't even work on understanding and optimizing his mind. But yeah, that's how I guess.
[-]Foyle2-6

I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ). So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4 sd sperm (around 1 in 30000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells). +4sd mom+dad = +2sd kids on average. This is the reality that allows ultra-wealthy dynasties to maintain ~1.3sd IQ average advantage over genera... (read more)
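(A quick arithmetic check of the quoted heuristic -- a sketch; the 0.25 coefficient is as recalled above, not a vetted constant:)

```python
# child_IQ ~ 0.25 * (mom_IQ + dad_IQ + 2 * population_mean), per the
# regression-to-the-mean heuristic quoted above.
def expected_child_iq(mom_iq: float, dad_iq: float, pop_mean: float = 100) -> float:
    return 0.25 * (mom_iq + dad_iq + 2 * pop_mean)

SD = 15
# +4 SD parents (IQ 160 each) -> 130.0, i.e. +2 SD kids on average
print(expected_child_iq(100 + 4 * SD, 100 + 4 * SD))
```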

I think I'm more optimistic about starting with relatively weak intelligence augmentation. For now, I test my fluid intelligence at various times throughout the day. (I'm working on better tests justified by algorithmic information theory in the style of Prof Hernandez-Orallo -- like this one, which sucks to take: https://github.com/mathemajician/AIQ -- but for now I use my own, here: https://github.com/ColeWyeth/Brain-Training-Game.) And I correlate the results with everything else I track about my lifestyle using reflect: https://apps.apple.com/ca/app/reflect-tr... (read more)
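(A minimal sketch of the kind of lifestyle-vs-score correlation described here; the file name and column names are hypothetical placeholders, not from either linked repo:)

```python
# Correlate daily fluid-intelligence test scores against lifestyle variables.
# "daily_log.csv" and all column names below are hypothetical.
import pandas as pd

log = pd.read_csv("daily_log.csv")  # one row per day
# e.g. columns: fluid_test_score, hours_slept, caffeine_mg, minutes_exercised
print(log.corr(numeric_only=True)["fluid_test_score"].sort_values())
```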

[-]TsviBT2926

I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter)

No, this doesn't make sense.

I think the stuff you're doing is probably fun / cool / interesting / helpful / something you like. That's great! You don't need to make an excuse for doing it, in terms of something about something else.

But no, that's not the right way to make really smart humans. The right way is to directly create the science and tech. You're saying something like "it stands to reason that if we can get a 5% boost on general intelligence, we should do that first, and then apply that to the tech". But

  • It's not a 5% boost to the cognitive capabilities that are the actual bottlenecks to creating the more powerful tech. It's less than that.
  • What you're actually doing is doing the 5% boost, and never doing the other stuff. Doing the other stuff is better for the purposes of making a bunch of supergeniuses. (Which, again, doesn't have to be your goal!)
1Cole Wyeth
I think there's a reasonable chance everything you said is true, except: I intend to do the other stuff after finishing my PhD -- though it's not guaranteed I'll follow through.

The next paragraph is low confidence because it is outside of my area of expertise (I work on agent foundations, not neuroscience): The problem with Neuralink etc. is that they're trying to solve the bandwidth problem, which is not currently the bottleneck, and will take too long to yield any benefits. A full neural lace is maybe similar to a technical solution to alignment, in the sense that we won't get either within 20 years at our current intelligence levels. Also, I am not in a position where I have enough confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff.

On the other hand, even a minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable, and profits could be reinvested in more hardcore augmentation down the line. I'd be interested to hear where you disagree with this.

It almost goes without saying that if you can make substantial progress on the hardcore approaches, that would be much, much more valuable than what I am suggesting, and I encourage you to try.
3TsviBT
My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I'm curious if you have more specific info. Why is it not the bottleneck though? That's fair. Germline engineering is the best approach and mostly doesn't have this problem--you're piggybacking off of human-evolution's knowledge about how to grow a healthy human. You're talking about a handful of people, so the benefit can't be that large. A repeatable method to make new supergeniuses is vastly more valuable.
1Cole Wyeth
I'm not a neuroscientist / cognitive scientist, but my impression is that rapid eye movements are already much faster than my conscious deliberation. Intuitively, this means there's already a lot of potential communication / control / measurement bandwidth left on the table. There is definitely a point beyond which you can't increase human intelligence without effectively adding more densely connected neurons or uploading and increasing clock speed. Honestly I don't think I'm equipped to go deeper into the details here.  I'm not sure I agree with either part of this sentence. If we had some really excellent intelligence augmentation software built into AR glasses we might boost on the order of thousands of people. Also I think the top 0.1% of people contribute a large chunk of economic productivity - say on the order of >5%.  
2TsviBT
I'm talking about neuron-neuron bandwith. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html I agree that neuron-computer bandwidth has easier ways to improve it--but I don't think that bandwidth matters very much.
1Cole Wyeth
Personally I'm unlikely to increase my neuron-neuron bandwidth anytime soon, sounds like a very risky intervention even if possible.

As I understand it, the AI capabilities necessary for intelligence amplification via BCI already exist, and we simply need to show/encourage people how to start using them.

Suppose a person were to provide a state-of-the-art model with a month's worth of the data typically collected by our eyes and ears, plus the ability to interject in real time in conversations via earbuds or a speaker.

Such an intervention wouldn't be the superhuman "team of geniuses in your datacenter", but it would be more helpful than even some of the best personal assistants (and 10... (read more)

2TsviBT
Are you claiming that this would help significantly with conceptual thinking? E.g., doing original higher math research, or solving difficult philosophical problems? If so, how would it help significantly? (Keep in mind that you should be able to explain how it brings something that you can't already basically get. So, something that just regular old Gippity use doesn't get you.)

On human-computer interfaces: Working memory, knowledge reservoirs, and raw calculation power seem like the easiest pieces, while fundamentally making people better at critical thinking or philosophy, or speeding up actual comprehension, would be much more difficult.

The difference being upgrading the core vs plug-ins.

Curated reservoirs of practical and theoretical information, well indexed, would be very useful to super geniuses.

On human-human: You don't actually need to hook them up physically. Having multiple people working on different parts of a problem let... (read more)

2TsviBT
But both of these things are basically available currently, so apparently our current level isn't enough. LLMs + google (i.e. what Perplexity is trying to be) are already a pretty good index; what would a BCI add? I commented on a similar topic here: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods?commentId=uZg9s2FfP7E7TMTcD
1Purplehermann
I'm not sure how well curated and indexed most information is. A bigger working memory allows for looking at the whole picture at once with the full might of human intelligence (which is better at many things than LLMs), while removing the frictions that come from delays and effort expended in searching for data and making calculations. Of course we have smart people together now, but getting multiple 7+SD people together would have many further benefits beyond having them work solo.

We probably have at least a generation (we're probably going to slow down before we hit SAGI due to the data wall, limited production of new compute, and regulation). The focus should be on moving quickly to get a group eclipsing current human capabilities ASAP, not on going much further.
2TsviBT
How specifically would you use BCIs to improve this situation?
1Purplehermann
I can only describe the product, not the tech. The idea would be to plug a bigger working memory into the area of the brain currently holding working memory. This is the piece I think matters most. On reflection, something like Wolfram Alpha should be enough for calculations, and a well-indexed reservoir of knowledge, with an LLM pulling up relevant links with summaries, should be good enough for the rest.
[-][anonymous]10
[This comment is no longer endorsed by its author]
4TsviBT
I wouldn't exactly say "counter balance". It's more like we, as humans, want to get ahead of the AI intelligence explosion. Also, I wouldn't advocate for a human intelligence explosion that looks like what an AI explosion would probably look like. An explosion sounds like gaining capability as fast as possible, seizing any new mental technology that's invented and immediately overclocking it to invent the next and the next mental technology. That sort of thing would shred values in the process. We would want to go about increasing the strength of humanity slowly, taking our time to not fuck it up. (But we wouldn't want to drag our feet either--there are, after all, still people starving and suffering, decaying and dying, by the thousands and tens of thousands every day.) But yes, the situation with AI is very dire. I'm not following. Why would augmented humans have worse judgement about what is good for what we care about? Or why would they care about different things?

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world’s population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It will greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as... (read more)

6TsviBT
Ok, I added some links to "Downside risks of genomic selection".

Not true! This consideration is the main reason I included a "unit price" column. Germline engineering should be roughly comparable to IVF, i.e. available to middle class and up; and maybe cheaper given more scale; and it certainly ought to be subsidized, given the decreased lifetime healthcare costs alone.

Eh, unless you can explain this more, I think you've been brainwashed by Gattaca or something. Gattaca conflates class with genetic endowment, which is fine because it's a movie about class via a genetics metaphor, but don't be confused that it's about genetics. Did the invention of smartphones increase or decrease the distance? In general, some technologies scale with money, and other technologies scale by bodycount. Each person only gets one brain to receive implants and stuff. Elon Musk, famously extremely rich and baby-obsessed, has what... 12 kids? A peasant could have 12 kids if they wanted to! Germline engineering would therefore be extremely democratic, at least for middle class and up. The solution, of course, is to make the tech even cheaper and more widely available, not to inflict preventable disease and disempowerment on everyone's kids.

Stats or GTFO. First, the two specific things you listed are quite genetically heritable. Second, 7 SDs -- which is the most extreme form that I advocate for -- is only a little bit outside the Gaussian human distribution. It's just not that extreme of a change. It seems quite strange to postulate that a highly polygenic trait, if pushed to 5350 out of 10000 trait-positive variants, would suddenly cause major psychological problems, whereas natural-born people with 5250 or 5300 out of 10000 trait-positive variants are fine.
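(For context on those variant counts -- a toy additive model, not from the post, assuming 10000 independent variants of equal effect at 50% frequency:)

```python
# Under a binomial model, one trait SD corresponds to sqrt(n*p*(1-p)) variants.
import math

n, p = 10000, 0.5
mean = n * p                     # 5000 trait-positive variants on average
sd = math.sqrt(n * p * (1 - p))  # = 50 variants per SD
for k in (5, 6, 7):
    print(f"+{k} SD ~ {mean + k * sd:.0f} variants")
# -> 5250, 5300, 5350: the numbers quoted above
```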
4Raemon
I think the terror reaction is honestly pretty reasonable. ([Edit: Not, like, necessarily meaning one shouldn't pursue this sort of direction on balance. I think the risks of doing this badly are real, and I think the risks of not doing anything are also quite real and probably great, for a variety of reasons.]) One reason I nonetheless think this is very important to pursue is that we're probably going to end up with superintelligent AI this century, and it's going to be dramatically more alien and scary than the tail-risk outcomes here. I do think the piece would be improved if it acknowledged and grappled with that more.
2TsviBT
The essay is just about the methods. But I added a line or two linking to https://tsvibt.blogspot.com/2022/08/downside-risks-of-genomic-selection.html

The genetic portions of this seem like a manifesto for creating highly intelligent, highly depressed, and thus highly unproductive people.

5TsviBT
What do you mean? Why would they be depressed? Do you mean because they'd be pressured into working on AGI alignment, or something? Yeah, don't do that. Same as with any other kids, you teach them to be curious and good and kind and strong and free and responsible and so on.

I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers.

These hallucinated outputs are really getting out of hand

I think you're underestimating meditation.

Since I started meditating, I've realised that I've become much more sensitive to vibes.

There are a lot of folks who would be scarily capable if they were strong in System 1, in addition to being strong in System 2.

Then there are all the other benefits that meditation can provide if done properly: additional motivation, and being better able to break out of narratives/notice patterns.

Thanks for the detailed writeup. I would personally be against basically all of the suggested methods that could create a significant improvement because the hard problem of consciousness remains hard and it seems very possible that an unconscious human race could result. I was a bit surprised to see no mention of this in the essay.

6TsviBT
I guess that falls under "value drift" in the table. But yeah, I think that's extremely unlikely to happen without warning, except in the case of brain emulations. I do think any of these methods would be world-changing, and therefore extremely dangerous, and would demand lots of care and caution.
1notfnofn
Could you explain what sort of warnings we'd get with, for instance, the interfaces approach? I don't see how that's possible. Also this is semantics I guess, but I wouldn't classify this under "value drift". If there is such a thing as the hard problem of consciousness and these post-modified humans don't have whatever that is, I wouldn't care whether or not their behaviors and value functions resemble those of today's humans
5TsviBT
Someone gets some kind of interface, and then they stop being conscious. So they act weird, and people are like "hey they're acting super weird, they seem not conscious anymore, this seems bad". https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies
2notfnofn
Yudkowsky's essay is explaining why he believes there is no hard problem of consciousness.

I think you're making this more complicated than it has to be. Why try to move a river to you when you can move to the river? Social engineering is the way, I think. The same way that flat-surfaced guardrails on stairs encourage people to leave trash/drinks/whatever there, so does everything else shape us in our public life (the life that we have when interacting with others: going to the store, filling up with gas, waiting in lines, shopping, etc.). Combining microhabits with de-atrophication of our brains is the easiest and most widely available solution. Maybe create a pr... (read more)

Somewhat surprised that this list doesn't include something along the lines of "punt this problem to a sufficiently advanced AI of the near future." This could potentially dramatically decrease the amount of time required to implement some of these proposals, or otherwise yield (and proceed to implement) new promising proposals. 

It seems to me in general that human intelligence augmentation is often framed in a vaguely-zero-sum way with getting AGI ("we have to all get a lot smarter before AGI, or else..."), but it seems quite possible that AGI or near-AGI could itself help with the problem of human intelligence augmentation.

3TsviBT
So your suggestion for accelerating strong human intelligence amplification is ...checks notes... "don't do anything"? Or are you suggesting accelerating AI research in order to use the improved AI faster? I guess technically that would accelerate amplification, but it seems bad to do. Maybe AI could help with some parts of the research. But 1. we probably don't need AI to do it, so we should do it now, and 2. if we're not all dead, there will still be a bunch of research that has to be done by humans. On a psychologizing note, your comment seems like part of a pattern of trying to wriggle out of doing things in the way that is hard but will work. Looking for such cheat codes is good, but not if you don't aggressively prune the ones that don't actually work -- hard+works is better than easy+not-works.
1Cameron Berg
I am not suggesting either of those things. You enumerated a bunch of ways we might use cutting-edge technologies to facilitate intelligence amplification, and I am simply noting that frontier AI seems like it will inevitably become one such technology in the near future.

Completely unsure what you are referring to, or what the other datapoints in this supposed pattern are. Strikes me as somewhat ad-hominem-y, unless I am misunderstanding what you are saying. AI helping to do good science wouldn't make the work any less hard -- it would just cause the same hard work to happen faster.

"hard+works is better than easy+not-works" seems trivially true. I think the full picture is something like: efficient+effective > inefficient+effective > efficient+ineffective > inefficient+ineffective. Of course I agree that if AI-assisted science is not effective, it would be worse to do than something that is slower but effective. Whether this sort of system could be effective seems like an empirical question that will be largely settled in the next few years.

I don't think you mentioned "nootropic drugs" (unless "signaling molecules" is meant to cover that, though it seems more specific).  I don't think there's anything known to give a significant enhancement beyond alertness, but in a list of speculative technologies I think it belongs.

[This comment is no longer endorsed by its author]
5Mateusz Bagiński
mentioned in the FAQ