johnswentworth — Comments (sorted by newest)

johnswentworth's Shortform
johnswentworth · 7h

How can biochemical interventions be spatially localized, and why is that problem important?

High vs low voltage has very different semantics at different places on a computer chip. In one spot, a high voltage might indicate a number is odd rather than even. In another spot, a high voltage might indicate a number is positive rather than negative. In another spot, it might indicate a jump instruction rather than an add.

Likewise, the same chemical species have very different semantics at different places in the human body. For example, high serotonin concentration along the digestive tract is a signal to digest, whereas high serotonin concentration in various parts of the brain signals... uh... other stuff. Similarly, acetylcholine is used as a neurotransmitter both at neuromuscular junctions and in the brain, and these have different semantics. More generally, IIUC neurotransmitters like dopamine, norepinephrine, or serotonin are released by neurons originating at multiple anatomically distinct little sub-organs in the brain. Each sub-organ projects to different places, and the same neurotransmitter probably has different semantics depending on which sub-organ is projecting to which target.

Yet most pharmaceutical interventions target one type of molecule, or one receptor, or what have you, approximately everywhere. Such an intervention is analogous to e.g. attempting to make every float in a computer's memory positive by clearing the first bit of every word, but then as a side-effect also changing a bunch of jump instructions into add instructions, because there was no way to localize the change to the memory locations which actually contain floats.
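To make the analogy concrete, here's a toy Python sketch (the "instruction encoding" is entirely made up for illustration) of how the same bit-level operation means different things depending on what the word represents:

```python
import struct

def clear_top_bit(word: int) -> int:
    """Clear the top bit of a 32-bit word."""
    return word & 0x7FFFFFFF

# Interpreted as an IEEE-754 float, clearing the top (sign) bit makes the value non-negative:
bits = struct.unpack("<I", struct.pack("<f", -3.5))[0]
print(struct.unpack("<f", struct.pack("<I", clear_top_bit(bits)))[0])  # 3.5

# But the very same operation on a word that happens to encode an instruction
# (toy encoding: top bit set = "jump", top bit clear = "add") silently rewrites the program:
JUMP_FLAG = 0x80000000
instruction = JUMP_FLAG | 0x00001234    # "jump to 0x1234" in the toy encoding
print(hex(clear_top_bit(instruction)))  # 0x1234 -- now reads as an "add"
```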

Thus the question: how can biochemical interventions be localized, especially in general-purpose ways? I'll throw out some ideas off the top of my head, but I'm interested to hear other people's thoughts as well.

Some Methods

Natural Barriers

The blood-brain barrier springs to mind as one example. If a chemical has different semantics in the brain and outside, and one wishes to target outside the brain, then just use a drug which can't cross the barrier.

Implant + Slow Transport/Fast Breakdown

One could put an implant in the right spot to release a drug, and then choose a drug which either isn't transported quickly or breaks down before it can get very far (or both).

Notably, making some random molecule diffuse less quickly seems relatively tractable: one can just attach a bigger molecule to it. And there's an absolutely enormous space of possibilities for what that bigger molecule could be, so a workable option seems especially likely to exist.
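For a rough sense of the tradeoff (my own back-of-envelope, not from the original post): in a simple 1-D steady-state model where the molecule diffuses with coefficient D and is degraded at first-order rate k, its concentration falls off exponentially from the release point with characteristic length λ = sqrt(D/k). Slower transport (smaller D) or faster breakdown (larger k) both shrink the region of effect. A quick sketch with made-up parameter values:

```python
import math

def localization_length_um(D_um2_per_s: float, k_per_s: float) -> float:
    """Characteristic decay length (micrometers) for steady-state diffusion
    with first-order degradation: lambda = sqrt(D / k)."""
    return math.sqrt(D_um2_per_s / k_per_s)

# Illustrative, made-up parameter values:
#   free small molecule: D ~ 300 um^2/s;  bulky conjugate: D ~ 10 um^2/s
for name, D in [("free small molecule", 300.0), ("bulky conjugate", 10.0)]:
    for k in (0.01, 1.0):  # slow vs fast breakdown, per second
        lam = localization_length_um(D, k)
        print(f"{name:20s} k = {k:5.2f}/s  lambda = {lam:7.1f} um")
```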

Genetic Modification

Cells already need the ability to tell "where they are" in order for us to have anatomically distinct regions at all. So in principle, it should be possible to genetically modify cells to do something different, but gate the change on the cell being in a particular distinct anatomical region, so cells everywhere else do the same thing as before.

For adult genetic modifications, one would probably want to combine this method with something similar to the implant + slow transport/fast breakdown method above. Adult genetic modifications usually don't hit every cell or even a majority of them, so an ideal use would be modifying some small percentage of cells to release a molecule which influences all the others. Slow diffusion/fast breakdown could then localize that molecule.
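A purely illustrative toy (made-up numbers, reusing the exponential decay length from the sketch above): even if only a sparse handful of cells in the target region carry the edit, their combined secreted signal can cover that region while staying negligible a few decay lengths away.

```python
import math

def profile(producers, positions, lam=50.0):
    """Toy 1-D steady state: each modified producer cell contributes an
    exponentially decaying plume with decay length lam (micrometers)."""
    return [sum(math.exp(-abs(x - p) / lam) for p in producers) for x in positions]

# Suppose only a few cells in the target region (0-300 um) took up the edit.
producers = [0, 60, 120, 180, 240]   # sparse producer cells in the target region
tissue = range(0, 1001, 100)         # sample points across the whole 1 mm strip
for x, c in zip(tissue, profile(producers, tissue)):
    print(f"x = {x:4d} um   signal = {c:.3f}")
```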

What Else?

I'm curious about other methods to localize biochemical interventions in the body, both speculative and already-existing.

Embedded Altruism [slides]
johnswentworth · 11h

when we're a part of a world which is too complex for us to fully understand?

Sounds like a skill issue. The obvious answer would be "get good".

Like, look, people talk all the time about how the economy, or the human body, or what have you, is just too darn complex for one human to fully understand. But, like... humans have on the order of 30k genes. The functions of many of those have not yet been sorted out, but once they are all sorted, that's not an intractable number of functional chemical species for a single human to understand well enough to grasp the whole picture. Especially since so many of them are part of multistep modular synthesis pathways, or modular signalling circuits, or some kind of standard pattern, so whole chunks can be understood at once.
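As a rough back-of-envelope (my own illustrative numbers, not from the comment) for why 30k genes is a big but not hopeless number, assuming they cluster into learnable modules:

```python
# Back-of-envelope, with made-up but plausible study parameters:
genes = 30_000
genes_per_module = 15          # assume pathways / signalling circuits are learned in chunks
hours_per_module = 2           # assume a couple of hours to grok each chunk
study_hours_per_year = 1_500   # roughly full-time study

modules = genes / genes_per_module
years = modules * hours_per_module / study_hours_per_year
print(f"~{modules:.0f} modules, ~{years:.1f} years of full-time study")  # ~2000 modules, ~2.7 years
```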

Or, the economy. There's a decently large number of functionally distinct kinds of economic actors, but like... not that many. It's not like one needs to know the particulars of each different hair salon or small trucking firm in order to basically understand the entire economy. There's only so many importantly-different types of business, and it sure seems like a human could basically understand all of them well enough to grasp the whole picture.

Of course one would need to create an entirely different kind of education program, in some cases figure out currently-unknown things (e.g. the functions of some genes), and then spend several years of study, in order to understand these things. But it does not seem intractable in principle.

johnswentworth's Shortform
johnswentworth · 2d

Agreed, that's basically how I use them.

johnswentworth's Shortform
johnswentworth · 2d

I was a relatively late adopter of the smartphone. I was still using a flip phone until around 2015 or 2016-ish. From 2013 to early 2015, I worked as a data scientist at a startup whose product was a mobile social media app; my determination to avoid smartphones became somewhat of a joke there.

Even back then, developers talked about UI design for smartphones in terms of attention. Like, the core "advantages" of the smartphone were the "ability to present timely information" (i.e. interrupt/distract you) and always being on hand. Also it was small, so anything too complicated to fit in like three words and one icon was not going to fly.

... and, like, man, that sure did not make me want to buy a smartphone. Even today, I view my phone as a demon which will try to suck away my attention if I let my guard down. I have zero social media apps on there, and no app ever gets push notif permissions when not open except vanilla phone calls and SMS.

People would sometimes say something like "John, you should really get a smartphone, you'll fall behind without one" and my gut response was roughly "No, I'm staying in place, and the rest of you are moving backwards".

And in hindsight, boy howdy do I endorse that attitude! Past John's gut was right on the money with that one.

I notice that I have an extremely similar gut feeling about LLMs today. Like, when I look at the people who are relatively early adopters, making relatively heavy use of LLMs... I do not feel like I'll fall behind if I don't leverage them more. I feel like the people using them a lot are mostly moving backwards, and I'm staying in place.

Habryka's Shortform Feed
johnswentworth · 2d

We are absolutely, with no ambiguity, in the "most rapid adoptions of any technology in US history branch". Every single corporation in the world is trying to adopt AI into their products.

Disagree with your judgement on this one. Agree that everyone is trying to adopt AI into their products, but that's extremely and importantly different from actual successful adoption. It's especially importantly different because part of the core value proposition of general AI is that you're not supposed to need to retool the environment around it in order to use it.

Habryka's Shortform Feed
johnswentworth · 2d

reasoning models [...] seem like a bigger deal than GPT-5 to me.

Strong disagree. Reasoning models do not make every other trick work better, the way a better foundation model does. (Also I'm somewhat skeptical that reasoning models are actually importantly better at all; for the sorts of things we've tried they seem shit in basically the same ways and to roughly the same extent as non-reasoning models. But not sure how cruxy that is.)

Qualitatively, my own update from OpenAI releasing o1/o3 was (and still is) "Altman realized he couldn't get a non-disappointing new base model out by December 2024, so he needed something splashy and distracting to keep the investor money fueling his unsustainable spend. So he decided to release the reasoning models, along with the usual talking points of mostly-bullshit evals improving, and hope nobody notices for a while that reasoning models are just not that big a deal in the long run."

Also, I don't believe you that anyone was talking in late 2023 that GPT-5 was coming out in a few months [...] End of 2024 would have been a quite aggressive prediction even just on reference class forecasting grounds

When David and I were doing some planning in May 2024, we checked the prediction markets, and at that time the median estimate for GPT-5's release was December 2024.

johnswentworth's Shortform
johnswentworth · 3d

I've been working on getting more out of lower percentile conversations. The explanation is fairly woo-ey but might also relate to your interest around flirting.

I'd be interested to hear that.

johnswentworth's Shortform
johnswentworth · 4d

I have, a few times; I found it quite interesting, and would happily do it again. It feels like the sort of thing which is interesting mainly because I learned a lot, but marginal learnings would likely fall off quickly, and I don't know how interesting it would be after doing it a few more times.

johnswentworth's Shortform
johnswentworth · 5d

Like, when I hear you say "your instinctive plan-evaluator may end up with a global negative bias" I'm like, hm, why not just say "if you notice everything feels subtly heavier and like the world has metaphorically lost color"

Because everything did not feel subtly heavier or like the world had metaphorically lost color. It was just, specifically, that most nontrivial things I considered doing felt like they'd suck somehow, or maybe that my attention was disproportionately drawn to the ways in which they might suck.

And to be clear, "plan predictor predicts failure" was not a pattern of verbal thought I noticed, it's my verbal description of the things I felt on a non-verbal level. Like, there is a non-verbal part of my mind which spits out various feelings when I consider doing different things, and that part had a global negative bias in the feelings it spit out.

I use this sort of semitechnical language because it allows more accurate description of my underlying feelings and mental motions, not as a crutch in lieu of vague poetry.

johnswentworth's Shortform
johnswentworth · 6d

Do group conversations count?

Yes.
