It's fine, I wouldn't expect you to read the whole comment.
There are only a few hundred IQ-related genes, and they were found through a correlation study of over 240k people, so it doesn't follow that you can just edit all these genes to their "more IQ" versions in a specific genome and get maximum IQ. www.southampton.ac.uk/news/2018/03/genes-intelligence.page
There have been quite a few more genetic variants linked to IQ since this study was published in 2018. Herasight has the best IQ predictor I know of, which can explain over twice as much variance as the best predictors we had in 2018.
Of course it's correct that in most cases we aren't highly certain which one of a cluster of variants is actually causing the effect we observe.
But we don't need to know. We just need reasonably good odds. Then we can edit the variants with the highest combined probability of causality × effect size. That's how we produced the graph in the post: we didn't falsely assume we know which of the variants are causal.
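As a toy sketch of that prioritization, here is what ranking by probability of causality × effect size looks like. Every variant ID, probability, and effect size below is invented purely for illustration:

```python
# Hypothetical candidate edits: (variant id, posterior probability
# that the variant is causal, effect size if it is causal).
# All numbers here are made up for the sketch.
variants = [
    ("rs0001", 0.90, 0.20),
    ("rs0002", 0.35, 0.80),
    ("rs0003", 0.60, 0.10),
    ("rs0004", 0.10, 1.50),
]

# Expected gain from editing a variant = P(causal) * effect size.
ranked = sorted(variants, key=lambda v: v[1] * v[2], reverse=True)

# With a limited edit budget, take the top-k variants by expected gain.
budget = 2
chosen = ranked[:budget]
expected_gain = sum(p * e for _, p, e in chosen)
print([v[0] for v in chosen], round(expected_gain, 3))
```

Note that under this scoring, a variant with a modest effect but high probability of causality ("rs0001") can outrank one with a large effect but weak evidence ("rs0004"), which is the whole point of not needing certainty about causality.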
By editing more nucleotide sequences, you will only increase the risk of breaking the genome's reading, replication, expression, and other mechanisms, killing the cell. DNA isn't merely a line of text that encodes proteins; it's full of commands for the enzymes that work with it. As just the tip of the iceberg: there are ~20k genes that encode somewhere between 100k and 1,000k proteins in the human body, because one gene contains several exons (and introns), which during transcription are combined in different ways to encode several proteins.
Yes, that's true if you're doing single-shot editing, which is one of the limitations of the kind of germline engineering we're currently pursuing. But the more advanced editing protocols don't rely on single-shot editing. They use iterated CRISPR, as explained in the post, which performs multiple rounds of editing. In each round, you select cells that not only received many edits but also didn't pick up any deal-breaker mutations.
So at least in theory, there is no reason why adding more edits would break essential genomic functions.
In practice, there are a lot of headaches here: base editors result in bystander edits, prime editors result in indels, and each cell division carries a certain risk of copying errors and chromosomal abnormalities. None of these are insurmountable, but overcoming them will require a decent bit of engineering.
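To make the edit-then-select logic concrete, here is a toy simulation of the iterated loop. All the rates and round counts are invented; the only point is that selecting error-free cells each round lets edits accumulate without deal-breaker mutations accumulating alongside them:

```python
import random

random.seed(0)

# Invented parameters for the sketch.
EDITS_PER_ROUND = 5
ERROR_RATE = 0.02   # chance a single edit causes a deal-breaker mutation
ROUNDS = 4
POPULATION = 200

cells = [0] * POPULATION  # accumulated good edits per cell

for _ in range(ROUNDS):
    survivors = []
    for edits in cells:
        # Apply one round of edits; discard the cell on any deal-breaker.
        if all(random.random() > ERROR_RATE for _ in range(EDITS_PER_ROUND)):
            survivors.append(edits + EDITS_PER_ROUND)
    # Re-expand the pool from the best surviving cells
    # (cells divide between rounds).
    best = max(survivors)
    cells = [best] * POPULATION

print(best)
```

Even though roughly 10% of cells are discarded each round under these made-up rates, the selected lineage ends up with all 20 intended edits and zero deal-breakers, which is the theoretical reason more edits don't have to mean more breakage.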
Do you have a link to the Habr article? I'm curious to read what people are saying.
Subtle dig at Balaji from Bannon? Interesting.
Anyone have insights into whether this is a genuine offer that could be taken up by members of the administration if they have the right attitude vs a simple power play by China to try to get more support from potential allies?
Trying to gauge how cynical to be here.
As someone who works in genetics and has been told for years he is a "eugenicist" who doesn't care about minorities, I understand your pain.
It's just part of the tax we have to pay for doing something that isn't the same as everyone else.
If you continue down this path, it will get easier to deal with these sorts of criticisms over time. You'll develop little mental techniques that make these interactions less painful. You'll find friends who go through the same thing. And the sheer repetitiveness will make these criticisms less emotionally difficult.
And I hope you do continue, because the work you're doing is very important. When new technology causes some kind of change, people look around for the nearest narrative that suits their biases. The narratives in leftist spaces right now are insane. AI is not a concern because it uses too much water, nor because it is biased against minorities (if anything it's slightly biased in favor of them!)
There is one narrative that I think would play well in leftist spaces which comes pretty close to the truth, and isn't yet popular:
AI companies are risking all of our lives in a race for profits
Simply getting this idea out there and more broadly known in leftist spaces is incredibly valuable work.
So I hope you keep going.
On the topic of predictability and engineering – sure, we can influence predispositions, but the point I was trying to make is epistemological: the level of uncertainty and interdependence in human development makes the engineering metaphor fragile. Medicine, to your point, does aim to “figure out” complex systems – but it’s also deeply aware of its limitations, unintended consequences, and historical hubris. That humility didn’t come through strongly in your piece, at least to me.
Perhaps so. But the default assumption, seemingly made by just about everyone, is that there is nothing we can do about any of this stuff.
And that's just wrong. The human genome is not a hopelessly complex web of entangled interactions. Most of the variance in common traits is linear in nature, meaning we can make reasonably accurate predictions of traits simply by adding up the effects of all the variants involved. And thus by extension, if we could flip enough of these variants, we could actually change people's genetic predispositions.
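That additive claim can be sketched in a few lines. The variant IDs, effect sizes, and allele counts below are all invented; the point is just that a polygenic score is a weighted sum, nothing more exotic:

```python
# Toy additive model: a polygenic score is the sum of per-variant
# effects, weighted by how many effect alleles (0, 1, or 2) a
# person carries. Effect sizes here are made up for illustration.
effect_sizes = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.30}

def polygenic_score(genotype):
    """genotype maps variant id -> effect-allele count (0, 1, or 2)."""
    return sum(effect_sizes[v] * n for v, n in genotype.items())

person = {"rs_a": 2, "rs_b": 1, "rs_c": 0}
print(round(polygenic_score(person), 3))  # 0.24 - 0.05 + 0.0 = 0.19
```

The model also makes the editing claim transparent: flipping "rs_c" to two effect alleles would raise the score by exactly 2 × 0.30, because in a linear model each variant contributes independently.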
Furthermore, nature has given us the best dataset in genetics we could ask for: billions of siblings that act as literal randomized controlled trials for the effect of genes on life outcomes.
If I felt that the world was suffering from excess genetic engineering hubris, then I might be more cautious in my language. But that is not in fact what is happening! What is actually happening is that humanity is being far too cautious, mostly because people hold a lot of false assumptions about how complex the genome is.
We have this insane situation in reproductive genetics right now where tens of thousands of children are being born every year with much higher genetic predispositions towards disease than they should have because doctors don't understand polygenic risk scores and would rather implant embryos that look nice under a microscope.
do we really know enough about gene–environment interactions to be confident in the long-term effects of shifting polygenic profiles at scale?
It depends what your standard is: if the bar we need to meet is "we can't make any changes that might result in unpredictable effects", then of course we can't be confident.
But if the bar is "we know enough to make changes that will, with high confidence, improve the life of the child", then we are already there for small changes, and can get there relatively soon for much larger ones.
Hi Nabokos,
I appreciate the comment. I think many academically inclined folks probably have similar views to yours. Let me explain my thinking here:
What troubles me most is how little attention is paid to emotional attachment, which is arguably the cornerstone of healthy development. This reads more like a plan for growing babies in vitro than raising actual children.
If I were to go into the ins and outs of emotional attachment, this already long post would have been at least twice the length. And seeing as I am not an expert in the area, I hardly think it would have been useful to the average reader.
Of course emotional attachment is important. It's one of the most important things for happy, healthy childhood development.
But there are many good books on that topic and I don't think everyone who writes about any aspect of childhood or babies needs to include a section on the topic. If you think there are good resources people here should read, please post them!
You can’t predict or engineer how a baby will turn out.
It's certainly true you can't predict EXACTLY how a baby will turn out, but you CAN influence predispositions. In fact, most of parenting is about exactly this! How to change your child's environment to influence the kinds of things they do and the sort of person they become.
Honest question: do you have kids?
Sadly I do not have kids yet! I hope to have them in the next few years.
Also, much of the terminology you use feels superficial or misapplied. Science and education aren’t just about memorizing buzzwords – they require deep understanding, and that takes time, context, and mentorship.
I don't see how this is at odds with genetic engineering.
I'm a medical doctor, and what strikes me again and again is how people assume that complex systems – like human beings – can be "figured out" with enough reading or clever design
I think it's fair to say that the entire field of medicine is one big attempt to do exactly this. I don't see how gene editing differs from what we try to do with drugs.
(e.g. hypertension isn’t caused by a single gene)
Where exactly did I say this?
But did it ever occur to you that these 'optimized' new people might come with new problems and diseases? Biology tends to work like that: you push on one part, something else breaks.
Yes, I have in fact considered this. There are several different ways to assess how big of a problem this could be:
Together these imply that it should in fact be possible to significantly improve health, intelligence, and other aspects of what makes life good without necessarily making that many difficult tradeoffs.
Also, just based on what we know about evolution it shouldn't actually surprise us that much that we can increase overall performance, especially when there has been as big of a shift in the environment as what we've experienced in the last few hundred years.
It would be nice if your critique actually addressed some specific concrete issues you have with the post or its ideas. The one specific example you gave (me thinking hypertension is caused by one gene) isn't even something I said. I'm not even sure where you're getting that idea from.
I think you make a reasonably compelling case, but when I think about the practicality of this in my own life it's pretty hard to imagine not spending any time talking to chatbots. ChatGPT, Claude and others are extremely useful.
Inducing psychosis in your users seems like a bad business strategy, so I view the current cases as accidental collateral damage, mostly born of the tendency of some users to go down weird self-reinforcing rabbit holes. I haven't had any such experiences because this is not the way I use chatbots, but I guess I can see some extra caution being warranted for safety researchers if these bots get more powerful and are actually adversarial to them.
I think this threat model is only applicable in a pretty narrow set of scenarios: one where powerful AI is agentic enough to decide to induce psychosis if you're chatting with it, but not agentic enough to make this happen on its own despite likely being given ample opportunities to do so outside of contexts in which you're chatting with it. And also one where it actually views safety researchers as pertinent to its safety rather than as irrelevant.
I guess I could see that happening but it doesn't seem like such circumstances would last long.
I would share your concern if TurnTrout or others were replying to everything Nate published in this way. But well... the original comment seemed reasonably relevant to the topic of the post and TurnTrout's reply seemed relevant to the comment. So it seems like there's likely a limiting principle here that would prevent your concern from being realized.
It never really got any traction. And I think you're right about the similarity to eugenics somewhat defeating the purpose.
I think terms like "reproductive freedom" or "reproductive choice" actually get the idea across better anyways since you don't have to stop and explain the meaning of the word.
I'd appreciate if you could provide links to "clear evidence of its writing style across all of these surfaces, and the entire.. vibe of the campaign feels like it was completely synthesized by 4o"
I understand it may be hard to definitively show this but anything you can show would be helpful.