How important is it that the cell and nucleus remain intact for your application?
There's a dichotomy in chromosome selection methods, where either you're manipulating chromosomes a bunch while they're still in cells, or else you're extracting them and manipulating them individually. See https://berkeleygenomics.org/articles/Chromosome_identification_methods.html#cell-culturing-vs.-isolating-ensembling-methods . For reasons mentioned there, I'm inclined towards isolating-ensembling methods.
For cell-culturing methods, we want the cell intact and alive. In this context, identification is less of a problem, because you can always do selection after the fact. See https://en.wikipedia.org/wiki/Microcell-mediated_chromosome_transfer ; it's fine if many of your microcells contain the wrong chromosome and then you transmit the wrong chromosome, because you can select in your cell culture after doing the transmission. See e.g. Petris, Gianluca, Simona Grazioli, Linda van Bijsterveldt, et al. ‘High-Fidelity Human Chromosome Transfer and Elimination’. Science 390, no. 6777 (2025): 1038–43. https://doi.org/10.1126/science.adv9797
Can other chromosomes be genetically engineered?
Not sure what you mean. You're asking, do we engineer chromosomes, e.g. via CRISPR editing? We could, but that's not necessary. You could get quite a lot of mileage just selecting from easily-obtainable ordinary cells. See https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html#method-chromosome-selection
Do you need to be able to identify chromosomes during M phase, or is interphase OK?
For isolating-ensembling methods, we're presumably destroying the cell and nuclear membrane, and dissociating the nucleus. Since we're handling naked chromosomes, we want them to be M-phase or otherwise compact (e.g. sperm chromatin). Interphase is probably too spread out and too vulnerable; the chromosomes would likely literally break. Though I'm not 100% sure of that.
How many chromosomes do you need to identify and extract?
If it's a cell-culture method, you could do any number. The more you can do, the better, because that means more selection power (i.e. more ability to vector traits of the resulting kid).
If it's an isolating-ensembling method, then you must produce either a full euploid haploid or a full euploid diploid genome, depending on context (e.g. are you making a paternal genome or a zygote genome). So you have to do 23 or 46 chromosomes. (You don't necessarily have to do them each individually, as singletons; see https://berkeleygenomics.org/articles/Chromosome_identification_methods.html#setwise-identification )
Proposed name: Butterfly Conservatory (https://www.lesswrong.com/posts/imnfJ9Ris7GgjkZbT/the-bughouse-effect-1#My_stag_is_best_stag)
2 feels meaningfully stronger/[less likely] than 1 to me
Well, I agree it's different and, depending on the interpretation, logically strictly stronger. But I think it's still quite likely, because you should go back on your commitments to Baby-Eaters. Probably.
aren't basically all your commitments a lot like this though...
I would keep commitments to humans, generally. But it's not absolute, and I don't think it's because of much fancy decision theory (not sure). In the past decade, on one major occasion, I have gone back on one significant blob of commitment, after consideration. I think this was correct to do, even at the cost of being the sort of guy who has ever done that. I felt that--with the revisions I made to my understanding of commitment, what it's for, what humans are, what cooperation is, etc.--[the people who I would want to cooperate with / commit to things] would, given enough info, still be open to such things with me.
even if 2 is true, the plan might be fine, because you might not need to become that smart to ban AI.
I think this could be cruxy for me, and I could be convinced it's not totally implausible, but then we're putting even more pressure on getting human-level AI. I didn't bring this up before, but yeah, I think getting specifically human-level AI is far from easy, perhaps extremely difficult. Cf. https://tsvibt.blogspot.com/2023/01/a-strong-mind-continues-its-trajectory.html
I think one would like to broadcast to the broader world "when you come to me with an offer, I will be honorable to you even if you can't mindread/predict me", so that others make offers to you even when they can't mindread/predict you. I think there are reasons not to broadcast this falsely, e.g. because doing so would hurt your ability to think and plan together with others (for example, if the two of us weren't honest about our own policies, it would make the present discussion cursed). If one accepts these two points, then one wants to be the sort of guy who can truthfully broadcast "when you come to me with an offer, I will be honorable to you even if you can't mindread/predict me", and so one wants to be the sort of guy who in fact would be honorable even to someone who comes to them with an offer but can't mindread/predict them.
Yeah, I suspect I'm not following and/or not agreeing with your background assumptions here. E.g. is the AI supposed to be wanting to "think and plan together with others (humans)"? Isn't it substantively super-humanly smart? My weak guess is that you're conflating [a bunch of stuff that humans do, which breaks down into general very-bounded-agent stuff and human-values stuff] with [general open-source game theory for mildly-bounded agents]. Not sure. Cf. https://www.lesswrong.com/w/agent-simulates-predictor . If you're a mildly-bounded agent in an OSGT context, you do want to be transparent so you can make deals, but that's a different thing.
Now we've turned Parfit's hitchhiker into something really close to our situations with humans and aliens appearing in simulated big evolutions, right?
I feel I'm not tracking some assumptions you're making or disagreements between our background assumptions.... E.g. the getting smarter thing. What I'm saying is that it's quite plausibly correct for me to
E.g. because I really want to minimize the amount of baby-eating that happens.
[I feel like I may have a basic misunderstanding of what you're saying.]
I haven't thought deeply enough about it, but one guess: The version of honorability/honesty that humans do is only [kinda natural for very bounded minds].
There's a more complex boundary where you're honest with minds who can tell if you're being honest, and not honest with those who can't. This is a more natural boundary to use because it's more advantageous.
You mention wanting to see someone's essays about Parfit's hitchhiker... But that situation requires Ekman to be very good at telling what you'll do. We're not very good at telling what an alien will do.
I think there are humans who, even for weird aliens, would make this promise and stick to it, with this going basically well for the aliens.
Would you guess I have this property? At a quick check, I'm not sure I do. Which is to say, I'm not sure I should. If a Baby-Eater is trying to get a promise like this from me, AND it would totally work to trick them, shouldn't I trick them?
What you say makes perfect sense; yet, somehow something still feels bad about "AI 2027". I'm not sure what, so I'm not sure if my sense is good/true/fair. Maybe my sense is about the piece rather than the title. At a vague guess, it's something about "hype". Like, "AI 2027" is somehow in accordance with hype--using it, or adding to it, or something. But maybe the crux is just that I think the timelines are overconfident, or that it's just bad to describe stuff like this in detail (because it's pumping in narrativium without adding enough info), or something. I'm not sure.
(IMU[ninformed]O, "What superintelligence looks like" is a significantly less epistemically toxic title for that piece than "AI 2027".)
This does quantitatively decrease my objection, yeah. My objection would still be there, somewhat, also quantitatively.
Maybe it's difficult to write the condensed version that's just the parts you added while getting most of the same effect, so there's not a better option. That's certainly the case with Gwern's images (and I use image generators for the same reason).
At a wild guess, I'd say that if the useful artifact is literally a paragraph or less, and you've gone over it several times, then it could be "ok" as testimony according to me. Like, if the LLM drafted a few sentences, and then you read them and deeply checked "is this really the right way to say this? does this really match my idea / felt sense?", and then you asked for a bunch of rewrites / rewordings, and did this several times, then plausibly that's just good.
If it's longer than a paragraph, then I'd suspect there's substantial slop that's slopping in, at various levels of abstraction. IDK.
(Again, not trying to excuse pointlessly being a dick. Plausibly Eliezer is not infrequently a big pointless dick, I do not know, no strong opinion.)
Another hypothesis: It's possible that he thinks some people should be treated with public contempt.
As an intuition pump for how it might be hypothetically possible that someone should be treated with public contempt, consider a car salesman who defrauds desperate people. He just straightforwardly lies about the quality of the cars he sells; he picks vulnerable people desperate for a cheap way to juggle too many transport needs; he charms them, burning goodwill. He has been confronted about this, and just pettily attacks accusers, or if necessary moves towns. He has no excuse, he's not desperate himself, he just likes making money.
How should you treat him? Plausibly contempt is correct--or rather, contempt in reference to anything to do with his car sales business. IDK. Maybe you can think of a better response, but contempt does seem to serve some kind of function here: a very strong signal of "this stuff is just awful; anyone who learns much about it will agree; join in on contempt for this stuff; this way people will know to avoid this stuff".
(This is not a defense of poor behavior; people are responsible for not pointlessly being dicks.) A hypothesis I keep in mind, which might explain some instances of this, is The Bughouse Effect.
To give this hypothesis a bit more color, I think people get invested in hope. Often, hope is predicated on a guess / leap of faith. It takes the form: "Maybe we [some group of people] are on the same page enough that we could hunt the same stag; I will proceed as though this is the case."
By investing in hope that way, you are opening up ports in your mind/soul, and plugging other people into those ports. It hurts extra when they don't live up to the supposed shared hope.
An added wrinkle is that the decision to invest in hope like this is often not cleanly separated out mentally. You don't easily, cleanly separate out your guesses about other people, your wishes, your plans, your "just try things and find out" gambles, and so on. Instead, you do a more compressed thing, which often works well. It bundles up several of these elements (plans, hopes, expectations, action-stances, etc.) into one stance. (Compare: https://www.lesswrong.com/posts/isSBwfgRY6zD6mycc/eliezer-s-unteachable-methods-of-sanity?commentId=Hhti6oNe3uk8weiFL and https://tsvibt.blogspot.com/2025/11/constructing-and-coordinating-around.html#flattening-levels-of-recursive-knowledge-into-base-level-percepts ) It's not desirable, in the sense that any specific instance would probably be better to eventually factor out; but that can take a lot of effort, and it's often worth it to do the bundled thing compared to doing nothing at all (e.g. never taking a chance on any hopes), and it might be that, even in theory, you always have some of this "mixed-up-ness".
Because of this added wrinkle, doing better is not just a matter of easily learning to invest appropriately and not getting mad. In other words, you might literally not know how to both act in accordance with having hope in things you care about, while also not getting hurt when the hope-plans get messed up--such as by others being unreliable allies. It's not an available action. Maybe.
Right, for isolating-ensembling methods, that's an important and nontrivial step. I think with light microscopy it shouldn't be too hard to tell when you've succeeded. I think there are standard tools for processing many cells in parallel in microwells, so that aspect should be ok. Assuming most of your cells are euploid in the first place, it shouldn't be too hard to at least collect a euploid set of DNA. The chromosomes might be prone to breaking, depending on a bunch of factors. However, it's fine if some chromosomes break, as long as you still have all the DNA and your identification method (e.g. standard sequencing) can deal with broken DNA. The complementation still works.
I'm not sure I follow. It's true that you need all confident calls for isolating-ensembling methods; see https://berkeleygenomics.org/articles/Chromosome_identification_methods.html#isolating-ensembling-methods-require-high-confidence-number-identification .
Assuming you have plenty of source cells, you can independently and in parallel get a known chromosome 1, a known chromosome 2, etc. It's fine if the identification protocol fails sometimes. The only unacceptable failure is if it says "yep we definitely got chromosome 4!" but is wrong too often (say, more than 1% or 2% of the time).
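To illustrate why the confident-call error rate, rather than the overall failure rate, is the binding constraint, here's a back-of-the-envelope sketch (my own illustration, not from the linked article; it assumes wrong calls on different chromosomes are independent) of how a per-chromosome wrong-call rate propagates to a full 23-chromosome haploid set:

```python
# Minimal sketch: probability that every confident identification in an
# assembled haploid set is actually correct, given each confident call is
# wrong with probability p_wrong_call (independence assumed).

def p_full_set_correct(p_wrong_call: float, n_chromosomes: int = 23) -> float:
    """Chance that all n_chromosomes confident calls are correct."""
    return (1.0 - p_wrong_call) ** n_chromosomes

for p in (0.001, 0.01, 0.02):
    print(f"wrong-call rate {p:.1%}: "
          f"P(all 23 correct) = {p_full_set_correct(p):.1%}")

# Illustrative output:
#   wrong-call rate 0.1%: P(all 23 correct) = 97.7%
#   wrong-call rate 1.0%: P(all 23 correct) = 79.4%
#   wrong-call rate 2.0%: P(all 23 correct) = 62.8%
```

So at a 1% wrong-call rate, roughly one in five assembled sets would contain at least one misidentified chromosome; at 0.1% it's only a couple of percent. Protocols that merely fail to return an answer don't hurt this way, since you can just retry with another cell.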