[Sinclair's razor is a helluva drug haha]
(Remember that I only want to defend "worst form of timelines prediction except all the other approaches". I agree this is kind of a crazy argument in some absolute sense.)
So, just so we're on the same page abstractly: Would you agree that updating on / investing "a lot" in an argument that's kind of crazy in some absolute sense would be an epistemic / strategic mistake, even if that argument is the best available specific argument in a relative sense?
If you insist on my being Bayesian and providing a direction of predictable error when I claim predictable error, then fine: your timelines are too long.
That doesn't sound like the correct response though. You should just say "I predict this isn't the reason AGI will come late, if AGI comes late". It's much less legible / operationalized, but if that's what you think you know in the context, why add on extra stuff?
Added clarification. (This seems to be a quite general problem of mismatched discourse expectations, where a commenter and a reader presume different things about how context-specific the comment is meant to be, or other things like that.)
(That's a reasonable interest, but not wanting to take that time is part of why I don't want to give an overall opinion about the specific situation with Palisade; I don't have an opinion about Palisade, and my comment is just meant to discuss general principles.)
I don't want to give an overall opinion [ETA: because I haven't looked into Palisade specifically at all, and am not necessarily wanting to invest the requisite effort into having a long-term worked-out opinion on the whole topic], but some considerations:
You don't need to be smarter in every possible way to get a radical increase in the speed of solving illnesses.
You need the scientific and technological creativity part, and the rest would probably flow, is my guess.
I think part of the motive for making AGI is to solve all illnesses for everyone, not just for people who aren't yet born.
What I mean is that giving humanity more brainpower also gets these benefits. See https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html It may take longer than AGI, but also it doesn't pose a (huge) risk of killing everyone.
Does this basically mean not believing in AGI happening within the next two decades?
It means not being very confident that AGI happens within two decades, yeah. Cf. https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
Aren't we talking mostly about diseases that come with age
Yes.
Someone could do a research project to guesstimate the impact more precisely. As one touchpoint, here's 2021 US causes of death, per the CDC:
(From https://wisqars.cdc.gov/pdfs/leading-causes-of-death-by-age-group_2021_508.pdf )
The total number of deaths among young people in the US is small in relative terms, so there's not much room for impact. There would still be some impact; we can't tell from this graph of course, but many of the diseases listed could probably be quite substantially derisked (cardio, neoplasms, respiratory).
This is only deaths, so there's more impact if you include non-lethal cases of illness. IDK how much of this you can impact with reprogenetics, especially since uptake would take a long time.
where we will have radically different medical capabilities if AGI happens in the next two decades?
Well, on my view, if actual AGI (general intelligence that's smarter than humans in every way including deep things like scientific and technological creativity) happens, we're quite likely to all die very soon after. But yeah, if you don't think that, then on your view AGI would plausibly obsolete any current scientific work including reprogenetics, IDK.
Another thing to point out is that, if this is a motive for making AGI, then reprogenetics could (legitimately!) demotivate AGI capabilities research, which would decrease X-risk.
Thanks!
We easily agree that this depends on further details, but just at this abstract level, I want to record the case that these "probably, usually" are mistakes. (I'm avoiding the object level because I'm not very invested in that discussion--I have opinions but they aren't the result of lots of investigation on the specific topic of bioanchors or OP's behavior; totally fair for you to therefore bow out / etc.)
The case is like this:
Suppose you have a Bayesian uninformed prior. Then someone makes an argument that's "kinda crazy but the best we have". What should happen? How does a "kinda crazy argument" cash out in terms of likelihood ratios? I'm not sure, and actually I don't want to think of it in a simple Bayesian context; but one way would be to say: On the hypothesis that AGI comes at year X, we should see more good arguments for AGI coming at year X. When we see an argument for that, we update to thinking AGI comes at year X. How much? Well, if it's a really good argument, we update a lot. If it's a crazy argument, we don't update much, because any Y predicts that there are plenty of crazy arguments for AGI coming at year Y.
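(To make that concrete with a toy odds-form calculation, where E stands for "someone produced this crazy-but-best-available argument for year X"; E and the whole setup are just illustrative, not a real model:

$$\frac{P(\text{AGI at } X \mid E)}{P(\text{AGI at } Y \mid E)} = \frac{P(E \mid \text{AGI at } X)}{P(E \mid \text{AGI at } Y)} \cdot \frac{P(\text{AGI at } X)}{P(\text{AGI at } Y)}$$

If crazy arguments for a given year show up about as often whether or not that year is the true one, the likelihood ratio is near 1 and the prior odds barely move; a really good argument is exactly one where that ratio is large.)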
The way I actually want to think about the situation would be in terms of bounded rationality and abduction. The situation is more like, we start with pure "model uncertainty", or in other words "we haven't thought of most of the relevant hypotheses for how things actually work; we're going off of a mush of weak guesses, analogies, and high-entropy priors over spaces that seem reasonable". What happens when we think of a crazy model? It's helpful, e.g. to stimulate further thinking which might lead to good hypotheses. But does it update our distributions much? I think in terms of probabilities, it looks like fleshing out one very unlikely hypothesis. Saying it's "crazy" means it's low probability of being (part of) the right world-description. Saying it's "the best we have" means it's the clearest model we have--the most fleshed-out hypothesis. Both of these can be true. But if you add an unlikely hypothesis, you don't update the overall distribution much at all.
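(Toy version of that last point: if the newly fleshed-out hypothesis gets prior weight $\varepsilon$, the updated distribution over outcomes is the mixture

$$p_{\text{new}} = (1-\varepsilon)\, p_{\text{old}} + \varepsilon\, p_{\text{crazy}},$$

so the probability of any event, e.g. "AGI by year X", shifts by at most $\varepsilon$. Calling the hypothesis "crazy" roughly means $\varepsilon$ is small, so the overall distribution is nearly unchanged, even though the crazy hypothesis is now the most legible / fleshed-out piece of it. The weight $\varepsilon$ here is a placeholder, not a number I'm claiming.)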