
TL;DR of the article:

This piece explains why Elon Musk wanted to start Neuralink, how Brain-Computer Interfaces (BCIs) currently work, and how they might be implemented in the future. It's a really, really broad article, aiming for breadth while still having enough depth to be useful. If you already have a grasp of the evolution of the brain, Dual Process Theory, the parts of the brain, how neurons fire, etc., you can skip those parts, as I have below.

AI is dangerous, because it could achieve superhuman abilities and operate at superhuman speeds. The intelligence gap would be much smaller if we also had access to such abilities. Therefore, we should attempt this if possible.

This might be possible, despite how extremely limited and highly invasive existing BCIs are. Opening the skull is obviously way too invasive for most people, but the blood vessels offer a possible minimally invasive route. They are essentially a highway which goes directly to every neuron in the brain. Current methods monitor at most ~100 neurons, or have low temporal resolution. 1,000,000 neurons is probably the tipping point, where a BCI would stop being a mere alternative to keyboard/screen input/output, and start being transformative.

Neuralink is exploring many possibilities, and probably won't narrow to just one any time soon. However, options might include "neural dust", or stents in the blood vessels. Just as dyes have made fine cell structures visible under microscopes, and genetically engineering bioluminescent genes into living animals has made cells glow when active, Neuralink would need a way for such a device to detect individual neuron firings on a large scale.

To do this, the inserts themselves only need to be able to:

  1. React differently to the electrical discharge associated with a nearby neuron's firing, or to other changes associated with neurons firing, like sodium and potassium levels.

  2. Have that difference be detectable from outside the skull. (I'd divide this into active methods, like emitting light at a wavelength which penetrates the skull, and passive changes in properties detectable from the outside, like radioactive isotopes which cluster together based on variables in blood flow.)

(The piece doesn't make this distinction, but I thought it would be useful for better discussion and understanding.)

Neuralink, of course, hasn't narrowed the specifics down very much (and will probably pivot several times, in my opinion). However, they will start out offering something largely similar to the sorts of BCIs available to people with paralysis or sensory problems. Elon hopes that if everything goes smoothly, in a decade they would have something which could provide a useful feature to someone without such disabilities, if the FDA would allow it.

They also hope to eventually be able to influence neural firings, so that we could supply information to the brain, rather than just reading information out. This would require something which could be influenced from the outside, and then influence nearby neurons. We can already put an electric field through the whole brain to minimize seizures, but for meaningful inputs this would also have to be done at the neuron level.

Why you should read it anyway:

It's >35,000 words. (For comparison, the cutoff for "short novel" is 40,000.) That said, it's a good read, and I recommend it if you want to understand why Elon Musk might think a BCI might increase our odds of surviving an AI takeoff scenario.

A lot of it is still hand-waving, and it doesn't make clear that we don't necessarily need full self-replicating autonomous nanobots or anything so exotic. Since it doesn't provide a specific architecture, but just surveys what might be possible, I think it's easy to give it an uncharitable reading. I've tried to steel-man the phrasing here, but I think if we focus on tangible, near-term concepts, it can be illustrative of what is possible.

If you read this with a critical eye, you'll just note that they haven't narrowed down to one architecture yet, and complain that their lack-of-an-architecture can't possibly work. The point is to convince lay people that this might even be possible, not to convince them that Neuralink will succeed, but the comments I've seen so far have just been skepticism of Neuralink.

Instead, I'd encourage you to read with an eye toward what could be done with a stent or neural dust, and then critically examine the more tangible challenge of how small each of those possible capabilities could be made. What could be done passively? What could be done if inductively powered? How small a blood vessel could various devices fit through? Will those devices shrink with Moore's law, or are they physics-constrained?

Such questions will generate the possible concrete architectures which you can then apply a critical lens to. Don't bother reading if you just want to be critical of the exploratory activity itself. It won't even put up a fight.


Thanks for the summary and overview!


Honestly, there are a bunch of links I don't click, because the 2 or 3 word titles aren't descriptive enough. I'm a big fan of the community norm on more technically minded subreddits, where you can usually find a summary in one of the top couple comments.

So, I'm doing what I can to encourage this here. But mostly, I thought it was important on the AI front, and wanted to give a summary which more people would actually read and discuss.

Here are some thoughts on the viability of Brain Computer Interfaces. I know nothing, and am just doing my usual reality checks and initial exploration of random ideas, so please let me know if I'm making any dumb assumptions.

They seem to prefer devices in the blood vessels, due to the low invasiveness. The two specific form factors mentioned are stents and neural dust. Whatever was chosen would have to fit in the larger blood vessels, or flow freely through all of them. Just for fun, let's choose the second, much narrower constraint, and play with some numbers.

Wikipedia says white blood cells can be up to 30 μm in diameter. (Also, apparently there are multiple kinds of white blood cells. TIL.) I'd guess we wouldn't want our neural dust to be any larger than that if we want to be able to give it to someone and reverse the procedure later without surgery. The injection should be fine, but if you wanted to filter these things back out of your blood, you'd have to do something like giving blood, but with a magnet or similar to pull out the neural dust. So, what could we cram into 30 μm?

Well, my first hit when searching "transistors per square mm" is an article titled "Intel Now Packs 100 Million Transistors in Each Square Millimeter", so let's go with that. I realize Elon's ~10 year time horizon would give us another ~6 Moore's law doublings, but if they did an entire run of a special chip just for this, then maybe they wouldn't want to pay top dollar for state of the art equipment, so let's stick with 100M/mm^2. That'd give us on the order of 10k-100k transistors to work with, if we filled the entire area with transistors and nothing else.
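The arithmetic above can be sanity-checked in a few lines (the 30 μm diameter and 100M transistors/mm^2 figures are just the rough numbers quoted above, not device specs):

```python
import math

# Rough figures from the discussion above: a dust mote no larger than a
# 30 um white blood cell, and Intel's ~100M transistors per square mm.
diameter_um = 30.0
transistors_per_mm2 = 100e6

radius_mm = (diameter_um / 2) / 1000.0   # 0.015 mm
area_mm2 = math.pi * radius_mm ** 2      # ~7.1e-4 mm^2 of usable area
transistor_budget = transistors_per_mm2 * area_mm2

print(f"{transistor_budget:,.0f} transistors")  # ~70,000: within the 10k-100k range
```

So a circular die the size of a white blood cell lands near the top of the 10k-100k range, before setting aside any area for power or communication.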

But, looking at most electronics, they are more than just a chip. Arduinos and cellphones and motherboards may be built around a chip, but the chip itself has a relatively small footprint on the larger PCB. So, I'm probably missing something which would be incredibly obvious to someone with more hardware experience. (Is all the other stuff just for interfacing with other components and power supplies? In principle, could most of it be done within the chip, if you were willing to do a dedicated manufacturing run just for that one device, rather than making more modular and flexible chips which can be incorporated into a range of devices?)

If we assume it'd be powered and would transmit data electromagnetically, it'd also need an antenna and an induction coil. I have a hunch that both of these suffer from issues with the square-cube law, so maybe that's a bad idea. The neural dust article mentioned that the (mm scale) devices both reported information and received power ultrasonically, so maybe the square-cube law is the reason. (If not, we might also run into the diffraction limit, and not have any wavelengths of light which are short enough to couple to antennas that size, but still long enough to penetrate the skull without ionizing atoms.)
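That diffraction-limit hunch can be put into rough numbers. Assuming, purely for illustration, that an efficient antenna needs to be comparable in size to one wavelength, a 30 μm antenna wants absurdly high frequencies, while skull-penetrating frequencies have wavelengths thousands of times larger than the device:

```python
c = 3.0e8  # speed of light, m/s

# Hypothetical constraint: the antenna can be at most 30 um long.
antenna_m = 30e-6
f_matched = c / antenna_m           # frequency whose wavelength equals the antenna
print(f"{f_matched/1e12:.0f} THz")  # ~10 THz: far infrared, absorbed by tissue

# Conversely, a skull-penetrating frequency like 2.4 GHz has wavelength:
wavelength_24ghz = c / 2.4e9
print(f"{wavelength_24ghz*100:.1f} cm")  # ~12.5 cm, thousands of times the device size
```

So under that (simplistic) matching assumption, EM coupling at this scale falls into a gap: frequencies the antenna could resonate with won't cross the skull, and frequencies that cross the skull dwarf the antenna.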

I like the idea of ultrasonic stuff because acoustic waves travel through tissue without depositing much energy. So, you get around the absorption problem photons have, and don't have to literally x-ray anyone's brain. Also, cranial ultrasounds are already a thing for infants, although they have to switch to transcranial Doppler for adults, because our skulls have hardened. Nearby pieces of neural dust would be monitoring the same neurons, and so would give off their signals at about the same time, boosting the signal but maybe smearing it out a little in time.
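For a feel of the scales involved in that smearing: taking the commonly quoted ~1540 m/s speed of sound in soft tissue, ultrasonic wavelengths at plausible imaging frequencies are several times larger than a 30 μm mote, so nearby motes would sit within a wavelength of each other:

```python
v_tissue = 1540.0  # m/s, commonly quoted speed of sound in soft tissue

# Wavelength at a few plausible ultrasound frequencies (illustrative only):
for f_mhz in (1, 5, 10):
    wavelength_um = v_tissue / (f_mhz * 1e6) * 1e6
    print(f"{f_mhz} MHz -> {wavelength_um:.0f} um")
# 1 MHz -> 1540 um, 5 MHz -> 308 um, 10 MHz -> 154 um
```

Even at 10 MHz the wavelength is ~5x a 30 μm mote, so individual motes are below the resolution limit and their echoes would overlap, which fits the signal-boosting-but-smearing picture above.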

So, let's play with some numbers for piezoelectric devices instead. (I assume that's what their ultrasonic neural dust must be using, at least. They are switching between electricity and motion somehow, and piezoelectrics are the name for the solid state way of doing that. I can't picture them having tiny speakers with electromagnets on flexible speaker cones. The Wikipedia page on transducers doesn't mention other options.)

Quartz crystals are already used for timing in electronics, so maybe the semiconductor industry already has the ability to make transducers if they wanted to. (I'd be surprised if they didn't, since quartz is just crystalline silicon dioxide. Maybe they can't get the atomic lattice into the right orientation consistently, though.) If you couldn't transmit and receive simultaneously without interfering, you'd need a tiny capacitor to store energy for at least 1 cycle. I don't know how small quartz crystals can be made, or whether size is even the limiting factor. Maybe sufficiently small piezoelectrics can't put out strong enough pulses to be detectable on an ultrasound, or require too much power to be safely delivered ultrasonically? I don't know, but I'd have to play with a bunch of numbers to get a good feel.
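To get a feel for the "store energy for 1 cycle" requirement, here's a toy calculation; the 1 MHz carrier, 1 μW power draw, and 1 V supply are all made-up round numbers for illustration, not anything from the article:

```python
# Assumed (not sourced) operating point for a neural-dust mote:
f_ultrasound = 1e6   # 1 MHz carrier, so one cycle lasts 1 microsecond
power_w = 1e-6       # assume the mote draws ~1 uW while active
v_supply = 1.0       # assume a 1 V supply rail

cycle_s = 1.0 / f_ultrasound
energy_j = power_w * cycle_s          # 1 pJ to ride out one cycle
cap_f = 2 * energy_j / v_supply ** 2  # rearranged from E = (1/2) C V^2

print(f"{cap_f*1e12:.1f} pF")  # 2.0 pF
```

Under those assumptions, a couple of picofarads would do, which is an ordinary on-chip capacitance, so at least this part doesn't look like the bottleneck.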

I don't really know where to start when discussing monitoring neuron firings. Could it be done electromagnetically, since firings should produce an instantaneous electromagnetic field? Or would the signal be too weak near a blood vessel? Apparently each neuron firing changes the concentration of Na, K, Cl, and Ca in the surrounding blood. Could one of these be monitored? Maybe spectrally, with a tiny LED of the appropriate wavelength and a photodetector? I think such things are miniaturizable in principle, but I'm not sure we can make them with existing semiconductor manufacturing techniques, so the R&D would be expensive. We probably don't have anything which emits at the exact wavelength we'd need for spectroscopy, though, and even if we did, I bet the LED would need voltage levels which would be hard to deliver without adding a voltage transformer, or whatever the DC equivalent is.

Or, can we dump all the fancy electronics altogether? Could we do something as simple as a clay particle (a tiny rock) coated with a dispersant or other surfactant, so that changes in the surrounding chemistry cause the collapse of the double layer, making the clay particles flocculate together? Would such clumps of clay particles be large enough, and have high enough density, to show up on an ultrasound or other device? Obviously this wouldn't let us force a neuron to fire, but it might be a cheap way of detecting firings.

Maybe the electronics could be added later, if modifying surface charge and chemistry is enough to make a neuron fire. Neurotransmitters affect neuron firings somehow, if I understand correctly, so maybe chain a bunch of neurotransmitters to some neural dust as functional groups on the ends of polymer chains, then change the surface charge to make the chains scrunch up or fan out?

I only know just enough about any of this to get myself into trouble, so if it doesn't look like I know what I'm talking about, I probably don't.

(Sorry to spam comments. I'm separating questions out to keep the discussion tidy.)

I would do it using genetically modified human cells, like macrophages, which would sit inside blood vessels and register the electrical activity of their surroundings. Each cell might send information by dumping its log, as a DNA chain, back into the bloodstream. Downstream, such DNA chains would be sorted and read, but this would create time delays.

This way of converting cells into DNA machines would eventually lead to bio-nanorobots, which would be able to do everything the original nanobots were intended to do, including acting as neural dust.

Another option is to deliver genetic vectors into some astrocytes, and create inside them some small transmission element, like a fluorescent protein that reacts to changes in the surrounding electric field.

The best solution would be a receptor-binding drug, like an antidepressant (which it is legal to deliver into the brain), which is also able to transmit information about where and how it has bound, perhaps enabling high-resolution non-invasive scans.

The article only touches on it briefly, but it suggests faster AI takeoffs are worse, where "fast" is only relative to the fastest human minds.

Has there been much examination of the benefits of slow takeoff scenarios, or takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of Em takeoff, but I have no idea if they got to that funding goal.

Personally, I don't see Brain-Computer Interfaces as useful for AI takeoffs, at least in the near term. We can type ~100 words per minute, but it takes far more than 400 minutes to write a 40,000 word novel. So, we aren't actually I/O bound, contrary to what Elon believes. We're limited by the number of neurons devoted to a given task.
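The typing arithmetic, spelled out (the three-months-of-drafting figure at the end is my own made-up assumption for illustration):

```python
words = 40_000    # the "short novel" cutoff mentioned earlier
typing_wpm = 100  # rough upper-end typing speed

raw_typing_min = words / typing_wpm
print(f"{raw_typing_min:.0f} minutes, ~{raw_typing_min/60:.1f} hours of pure typing")

# Hypothetically, suppose an author drafts such a novel over 3 months
# at 2 focused hours per day:
actual_min = 90 * 2 * 60
print(f"composition takes ~{actual_min/raw_typing_min:.0f}x the raw typing time")
```

Under that assumption, actual composition takes dozens of times longer than raw typing, which is the sense in which the bottleneck is thinking, not I/O.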

Early BCIs might make some tasks much faster, like long division. Since some other tasks really are I/O bound, they'd help some with those. But, we wouldn't be able to fully keep up with AI unless we had full-fledged upgrades to all of our cognitive architecture.

So, is almost keeping up with AI likely to be useful, or are slow takeoffs just as bad? Are the odds of throwing together a FAI in the equivalent of a month any better than in a day? What % of those panicked emergency FAI activities could be sped up by better computer user interfaces/text editors, personal assistants, a device that zapped your brain every time it detected akrasia setting in, or by a RAM upgrade to the brain's working memory?

(Sorry to spam. I'm separating questions out to keep the discussion tidy.)

Perhaps Elon doesn't believe we are I/O bound, but that he is I/O bound. ;]

There's a more serious problem which I've not seen most of the Neuralink-related articles talk about* - which is that layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff.

  • haven't read the linked article through yet

I think most people interested in IA want to ensure that there will be a large number of humans using IA at once, taking off as a group and policing each other, so that in aggregate things go okay. It would be madness to rely on one human, or a small set of humans, to take off and rule us.

So the question becomes whether this scenario is better or worse than having a single AI control the future, using a goal system based on highly abstract theorizing by overclocked baboon brains?

For this tactic to be effectual, it requires that a society of augmented human brains will converge on a pattern of aggregate behaviours that maximizes some idea of humanity's collective values, or at least doesn't optimize anything counter to such an idea. If the degree to which human values can vary between _un_augmented brains reflects some difference between them that would be infeasible to change, then it's not likely that a society of augmented minds would be any more coordinated in values than a society of unaugmented ones.

In one sense I do believe a designed AI is better - the theorems a human being devises can stand or fall independently of the person who devised them. The risk varies inversely with our ability to follow trustworthy inference procedures in reasoning about designing AIs. With brain-augmentation, the risk varies inversely with our aggregate ability to avoid the temptation of power. Humanity has produced many examples of great mathematicians. Trustworthy but powerful men are rarer.

We have been gradually getting more peaceful, even with increasing power. So I think there is an argument that brain augmentation is like literacy and so could increase that trend.

A lot depends on how hard a take off is possible.

I like maths. I like maths safely in the theoretical world, occasionally brought out to bear on select problems that have proven amenable to it. Also, I've worked with computers enough to know that maths is not enough. They are imperfectly modeled physical systems.

I really don't like maths trying to be in charge of everything in the world, dealing with knotty problems of philosophy. Questions like what is a human, what is life, what are a human's values; these do not seem the correct things for maths to be trying to tackle.

even with increasing power

At the individual level? By what metric?

these do not seem the correct things for maths to be trying to tackle

Is that a result of mathematics or of philosophy? :P

At the individual level? By what metric?

Knowledge and the ability to direct energy. There are a lot more people who could probably put together a half-decent fertilizer bomb nowadays, but we are not in a continual state of trying to assassinate leaders and overthrow governments.

Privately manufactured bombs are common enough to be a problem - and there is a very plausible threat of life imprisonment (or possibly execution) for anyone who engages in such behaviour. That an augmented brain with the inclination to do something analogous would be effectively punishable is open to doubt - they may well find ways of either evading the law or raising the cost of any attempted punishment to a prohibitive level.

I'd say it's more useful to think of power in terms of things you can do with a reasonable chance of getting away with it rather than just things you can do. Looking at the former class of things - there are many things that people do that are harmful to others that they do nevertheless because they can get away with it easily: littering, lying, petty theft, deliberately encouraging pathological interpersonal relationship dynamics, going on the internet and getting into an argument and trying to bully the other guy into feeling stupid... ( no hint intended to be dropped here, just for clarity's sake ).
Many, in my estimation probably most, human beings do in fact have at least some consequence-free power over others and do choose to abuse that minute level of power.

The more intelligence augmentation is equitably spread the more likely that there will be less consequence free power over others. Intelligence augmentation would allow you to collect more data and be able to communicate with more people about the actions you see other people taking.

There are worlds where IA is a lot easier than standalone AI; I think that is what Elon is optimizing for. He has publicly stated he wants to spread it around when it is created (which is probably why he is investing in OpenAI as well).

This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.

The more intelligence augmentation is equitably spread the more likely that there will be less consequence free power over others.

That is not apparent to me, though. It seems like it would lead to a MAD-style situation where no agent is able to take any action that might be construed as malicious without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malintent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.

It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics ( e.g. situations involving mutual-policing should be formalizable in game-theoretic terms ).

I think I am unsure what properties of future tech you think will lead to more MAD style situations than we have currently. Is it hard takeoff?

The key ingredient for a MAD situation, as far as I can tell, is some technology with high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.

I think there is a whole long discussion about whether individual or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations. And also discussions around how much smarter pure AIs would be compared to normal augments.

societies of brain augments that are all working together

Even whether this presupposition holds is questionable. Mutual distrust and the associated risk might make cooperative development an exceptional scenario rather than the default one.