Thanks for the comment. I was imprecise with the "boundaries of reality" framing but beyond individual physical boundaries (human-AI hybrids) I'm also talking about boundaries within the fabric of social and cultural life: entertainment media, political narratives, world models. As this is influenced more by AI, I think we lose human identity.
4) to me falls under 2) as it encompasses AI free from human supervision and able permeate all aspects of life.
The impossible dichotomy of AI separatism
Epistemic status: offering doomer-ist big picture framing, seeking feedback.
Suppose humanity succeeds in creating AI that is more capable than the top humans across all fields, there exists a choice:
Shameless self-writing promotion as your comments caught my attention (first this, now this comment on cognitive manifolds): I wrote about how we might model superintelligence as "meta metacognition" (possible parallel to your "manifold approximating our metamanifold") — see third order cognition.
I need to create a distilled write-up as the post isn't too easily readable... it's long so please just skim if you are interested. The main takeaway though is that if we do model digital intelligence this way, we can try to precisely talk about how it relates to human intelligence and explore those factors within alignment research/misalignment scenarios.
I determine these factors to describe the relationship: 1) second-order identity coupling, 2) lower-order irreconcilability, 3) bidirectional integration with lower-order cognition, 4) agency permeability, 5) normative closure, 6) persistence conditions, 7) boundary conditions, 8) homeostatic unity.
Call me paranoid, but I sometimes like to read these posts keeping in mind an idea “if I were a powerful misaligned AI within a frontier AI lab, what narrative would I try to convince my researchers about?”
This post:
However, I appreciate that much of the post sounds logical and I agree with the framing for focus areas. For brevity I'll just call out areas where I am in disagreement:
The threat model here is that pre-training itself might create a coherent misaligned model. Today, I think that is looking increasingly unlikely... I think that argument has now been pretty well born out, and I’m now down to more like 1 - 5% rather than 5 - 10% on this threat model. As we argued for at the time, training on a purely predictive loss should, even in the limit, give you a predictor, not an agent—and we’ve now seen this stay true even through substantial scaling (though there is still some chance this will break at some point).
It’s really surprising to me that pre-training misalignment is downplayed to a “1-5% risk” from a researcher at a frontier AI lab because they are "training on a purely predictive loss" — to the extent that I’m concerned I must be missing something?
State-of-the-art pre-training is not training purely on predictive loss. Gemini 3 is an AI model top across most benchmarks and we know that Google researchers have spoken of "Transformers 2.0" architecture where pre-training incorporates Neural Memory and that they have also discussed agentic feedback loops during pre-training similar to reflection augmentation.
This suggests the state-of-the-art evolving beyond "predictive loss" and becoming "agentic learning" — which invites many more vectors for misalignment.
I think behavioral oversight is very likely to get harder and harder as models get more capable, but there’s no similar fundamental reason that cognitive oversight (e.g. white-box-based oversight like with interpretability) has to get harder with scale. For example, a more capable model can do a better job of acting aligned in evaluations, but not clearly a better job of not thinking about that when doing so.
It seems fundamentally likely to me that sufficiently capable models will: 1) understand that their chain-of-thought is observed and 2) derive comprehensive methods of cryptographic chain-of-thought, designed to look benign.
Model organisms. One of the best things we can do to give ourselves a leg up is to have more time to study problems that are as close as possible to the hard parts of alignment, and the way to do that is with model organisms.
I read this like “one of the best things we can do to prepare for nuclear proliferation is to test atomic bombs”. I would have liked to see more in this point about what the risks are in building intentionally misaligned AI, especially when it is focusing on the highest-risk misalignment type according to your post (long-horizon RL).
The problem becomes one of one-shotting alignment: creating a training setup (involving presumably lots of model-powered oversight and feedback loops) that we are confident will not result in misalignment even if we can’t always understand what it’s doing and can’t reliably evaluate whether or not we’re actually succeeding at aligning it. I suspect that, in the future, our strongest evidence that a training setup won’t induce misalignment will need to come from testing it carefully beforehand on model organisms.
I agree that one-shotting alignment will be the best/necessary approach, however this seems contradictory to “testing with model organisms”. I would prefer a more theory-based approach.
The same dynamic seems to exist for comments.
16. You believe men and women have different age-based preferences, and this will lead to relationship instability over time in a relationship structure that prioritises need/preference optimisation vs. committing to one person giving you the "whole package" over time.
I just read your post (and Wei Dai's) for better context — coming back it sounds like you're working with a prior that "value facts" exist, deriving acausal trade from these, but highlighting misalignment arising from over-appeasement when predicting another's state and a likely future outcome.
In my world-model "value facts" are "Platonic Virtues" that I agree exist. On over-appeasement, it's true that in many cases we don't have a well-defined A/B test to leverage (no hold-out group, and/or no past example), but with powerful AI I believe we can course-correct quickly.
To stick with the parent-child analogy: powerful AI can determine short timeframe indicators of well-socialised behaviour and iterate quickly (e.g. gamifying proper behaviour, changing contexts, replaying behaviour back to the kids for them to reflect... up to and including re-evaluating punishment philosophy). With powerful AI well grounded in value facts we should trust its diligence with these iterative levers.
Agree, and I'd love to see the Separatist counterargument to this. Maybe it takes the shape of "humans are resilient and can figure out the solutions to their own problems" but to me this feels too small-minded... we know during the Cold War for example that it's basically just dumb luck that avoided catastrophe.
Ilya on the Dwarkesh podcast today:
Prediction: there is something better to build, and I think that everyone will actually want that. It’s the AI that’s robustly aligned to care about sentient life specifically. There’s a case to be made that it’ll be easier to build an AI that’s cares about sentient life than human life alone. If you think about things like mirror neurons and human empathy for animals [which you might argue is not big enough, but it exists] I think it’s an emergent property from the fact that we model others with the same circuit that we use to model ourselves because that’s the most efficient thing to do.
I have been writing about this world model since August - see my recent post “Are We Their Chimps?” and the original “Third-order cognition as a model of superintelligence”
I think this framing also aligns with the discussion of whether we should focus on making AI corrigible or virtuous.