TL;DR: I ran the most comprehensive stress-test to date of mechanistic interpretability for single-cell foundation models (scGPT, Geneformer): 37 analyses, 153 statistical tests, 4 cell types. Attention-based gene regulatory network extraction fails at every level that matters, mostly because trivial gene-level baselines already explain the signal and the heads most aligned with known regulation turn out to be the most dispensable for the model's actual computation. But the models do learn real layer-organized biological structure, and I found that activation patching in these models has a large, formally quantifiable non-additivity bias that undermines standard component rankings, which is likely relevant for LLM interpretability too. I urge you: if you like mechanistic interpretability, consider working on biological foundation models. They offer external ground truth for validating your methods, more tractable model scales, and direct biomedical payoff with lower dual-use risk than frontier LLM interpretability. Full research is available here.
1. Why I Work on Mechanistic Interpretability of Biological Models, Not LLMs
It is well accepted that mechanistic interpretability is one of the most naturally attractive research directions for technically oriented people who care about AI safety. It feels like science in the most satisfying sense: you have a complex system, you poke at it with carefully designed experiments, and you try to figure out what it's actually doing inside. It rewards exactly the kind of careful, detail-oriented thinking that draws people into alignment research in the first place, and the dream of understanding what happens between a model's inputs and outputs is compelling enough to sustain years of difficult work.
I want to honestly say that I believe, based both on my own reasoning and on arguments made by people whose judgment I take seriously, that mechanistic interpretability of general-purpose models carries risks that are insufficiently appreciated. The concern is relatively straightforward: deep mechanistic understanding of how capable models work can advance their capabilities (by revealing which circuits to scale, optimize, or compose), and perhaps more critically, early weak superintelligences could leverage interpretability tools and knowledge as a substrate for recursive self-improvement. However, this point is here only to explain my motivation; you do not need to agree with it to follow the rest of this article.
At the same time, none of this means that mechanistic interpretability knowledge must remain unused and unapplied across the board. What it means is that we should think about where the risk-benefit calculus is most favorable, and I believe biological foundation models are an unusually good answer to that question, for three reasons that I think are individually sufficient and collectively quite strong.
First, advancing the capabilities of narrow biological models is likely to be locally beneficial. A single-cell foundation model that gets better at predicting gene regulatory responses to perturbations is not going to help anyone build a more capable language model or a more dangerous autonomous agent. These models process transcriptomic profiles, not natural language or general world-knowledge, and making them more capable means making biology research faster, not making general AI systems more dangerous. I mean, eventually it will also probably kill you, but general models will kill you much earlier, so the doom from biological models is "screened off". I do acknowledge that there are still some risks here, but I think it is still net positive because of the reasons I explain below.
Second, biological models are far more tractable as subjects for mechanistic study than LLMs. Geneformer V2, the largest model in my study, has 316 million parameters and 18 transformer layers. This is large enough to be interesting (it clearly learns non-trivial structure) but small enough to be, at least in principle, exhaustively analyzed with current tools. More importantly, biological models can be validated against experimental ground truth in ways that LLM interpretability simply cannot: we have CRISPR perturbation data that tells us what actually happens when you intervene on specific genes, we have curated databases of known regulatory relationships, and we can design targeted experiments to test specific mechanistic claims. This makes biology a better laboratory for developing and stress-testing interpretability methods, because when something looks like a mechanistic discovery, you can check whether it actually is one.
Third, and this is the motivation I care about most, I think biological foundation models have a genuine chance of radically advancing our understanding of human biology at the systems level. We have largely resolved the genomics level (sequencing is cheap and comprehensive) and made enormous progress on the structural level (AlphaFold and its successors). What remains is fundamentally the systems level: understanding how genes, proteins, cell states, tissues, and organisms interact as integrated wholes to produce the phenotypes we observe. Single-cell foundation models, trained on tens of millions of individual cellular transcriptomes, are plausible candidates for learning aspects of this systems-level organization. If we can extract that knowledge mechanistically, rather than treating these models as black boxes, the payoff for biomedicine and for our understanding of human biology could be substantial. I also believe, as I've argued previously, that advancing our understanding of human biology at the systems level is one of the most important things we can do for human intelligence augmentation, which in turn is one of the most important things we can do for alignment, but I will not try to carry that argument here and instead point the interested reader to that earlier post.
So the question becomes practical: can we actually extract meaningful biological knowledge from these models using mechanistic interpretability tools? That is what I spent the last months trying to find out, and the answer is more nuanced than either the optimists or the skeptics would prefer.
2. Brief Note: What Are Single-Cell Foundation Models, and Why Should You Care?
For readers who come from the LLM interpretability side and have not worked with biological data, here is the minimum context you need to follow the rest of this post.
The data. Single-cell RNA sequencing (scRNA-seq) measures the expression levels of thousands of genes in individual cells. Unlike bulk sequencing, which averages over millions of cells and hides all the interesting heterogeneity, single-cell data lets you see that a tissue is composed of distinct cell types and cell states, each with its own gene expression program. Modern datasets contain tens of millions of individually profiled cells across dozens of human tissues.
The models. Single-cell foundation models are transformer architectures trained on these large scRNA-seq corpora using self-supervised objectives, analogous to how LLMs are trained on text. The two main model families I studied are:
scGPT treats each gene as a token and its expression value as the token's "identity," then trains with masked expression prediction: hide some genes' expression values, ask the model to predict them from the remaining context. This is conceptually very close to masked language modeling, with genes playing the role of words and expression levels playing the role of token IDs.
Geneformer takes a different approach: it ranks genes within each cell by their expression level (most expressed first) and then uses the rank-ordered gene sequence as input, training with masked gene prediction. The tokenization is fundamentally different from scGPT (ranks vs. expression values), the training objective is different, and the model scale differs (Geneformer V2-316M vs. scGPT's smaller variants), but both architectures learn to predict gene expression patterns from cellular context.
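To make the contrast concrete, here is a schematic sketch of the two input formats for the same toy expression vector. This is illustrative only, not the models' actual preprocessing code: the binning scheme, masking rate, and all variable names are assumptions made for the example.

```python
import numpy as np

# Toy expression profile for one cell: gene symbol -> expression value.
gene_names = np.array(["GATA1", "TP53", "MYC", "ACTB", "NANOG"])
expression = np.array([0.0, 3.2, 7.5, 12.1, 0.4])

# --- scGPT-style input (schematic): genes are tokens, expression values are
# binned and attached to each gene token; some values are masked for the
# masked-expression-prediction objective.
bins = np.digitize(expression, bins=[0.5, 2.0, 5.0, 10.0])   # value -> discrete bin
mask = np.random.default_rng(0).random(len(expression)) < 0.3
scgpt_input = [
    {"gene": g, "expr_bin": int(b) if not m else "MASK"}
    for g, b, m in zip(gene_names, bins, mask)
]

# --- Geneformer-style input (schematic): genes are sorted by expression and
# only the rank-ordered gene sequence is fed to the model; the masked
# objective then asks which gene belongs at a masked position.
rank_order = np.argsort(-expression)             # most expressed first
geneformer_input = list(gene_names[rank_order])  # sequence of gene tokens

print(scgpt_input)
print(geneformer_input)
```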
What people claim these models can do. The published literature (see, for example, here and here) suggests that these models achieve useful performance on several downstream tasks: classifying cell types, predicting how cells respond to genetic perturbations, and, most relevant for this post, inferring gene regulatory networks (GRNs) from their attention patterns. This last claim is the one I tested most thoroughly, because it is the most mechanistically interpretable claim and the one with the most direct implications for biological knowledge extraction. The idea is simple and appealing: if the model has learned that gene A regulates gene B, then the attention weight from gene A to gene B should be high, and by extracting the full attention matrix, you can recover the regulatory network the model has learned.
3. What I Did: The Most Comprehensive Stress-Test of Single-Cell Model Interpretability To Date
The paper I am summarizing here reports, to my knowledge, the most thorough systematic evaluation of mechanistic interpretability for single-cell foundation models published so far. It spans 37 distinct analyses, 153 pre-registered statistical tests, 4 cell types (K562, RPE1, T cells, iPSC neurons), 2 perturbation modalities (CRISPRi gene silencing and CRISPRa gene activation), and 2 model families (scGPT and Geneformer). The full details are on arXiv; here I will focus on the findings that I think are most relevant for the community.
3.1. The evaluation philosophy
A core design principle was that no single test is sufficient to validate or invalidate a mechanistic interpretability claim, because each test addresses a different failure mode and any one of them can miss problems that another catches. I built five interlocking families of tests, and the logic of how they complement each other is worth spelling out, because I think this framework is reusable well beyond my specific setting:
Trivial-baseline comparison asks: "Can a method that requires no model at all achieve the same performance?" If gene-level variance (a property you can compute with a pocket calculator) predicts perturbation responses as well as your fancy attention-derived network, you have not demonstrated that your interpretability method captures anything beyond trivial gene properties. This test catches overconfidence from neglecting simple alternatives. A minimal code sketch of this test appears below.
Conditional incremental-value testing asks: "Given the best simple features, does your interpretability output add anything?" This is more demanding than the first test because it conditions on the simple features rather than just comparing to them. A method can be "significantly above chance" and still add zero incremental value once you control for what was already available.
Expression residualisation and propensity matching asks: "Is your signal actually coming from the thing you think it's coming from, or is it a confound proxy?" This is the biological equivalent of discovering that your "sentiment circuit" is actually a "sentence length detector."
Causal ablation with fidelity diagnostics asks: "Does the model actually use the components that your interpretability method identifies as important?" If your method says "these attention heads encode regulatory knowledge," then removing those heads should degrade the model's performance on tasks that require regulatory knowledge. This is the closest to standard NLP activation patching, but with a critical addition: intervention-fidelity diagnostics that verify the ablation actually changed the model's internal representations. Concretely, this means measuring how much the model's logits or hidden states shift when you zero out a head, because if a head's output was near-zero to begin with, ablating it tells you nothing about whether the model relies on it. A null result from ablation is only informative if you can show the intervention was materially disruptive to the computation passing through that component, and the fidelity check is what separates "the model doesn't need this head" from "your ablation didn't actually do anything."
Cross-context replication asks: "Does this hold up in a different cell type, a different perturbation modality, or a different model?" A result that appears in K562 CRISPRi but vanishes in RPE1 or T cells is a dataset-specific observation.
A result that survives all five families is genuinely robust. A result that fails any one of them has a specific, identifiable weakness. And the convergence of multiple independent tests pointing in the same direction provides stronger evidence than any single test can offer, regardless of how well-powered it is.
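To make the trivial-baseline test concrete, here is a minimal sketch on synthetic data. Everything here is illustrative rather than the paper's actual pipeline: the per-gene statistic, the edge-score construction, and the labels are stand-ins. The point is only the shape of the comparison: a model-free gene statistic versus a model-derived edge score, evaluated against the same labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_genes = 2000

# Synthetic stand-ins: per-gene expression variance, an attention-derived edge
# score for each gene (e.g. strongest incoming edge from the perturbed TF),
# and a binary label: did the gene respond to the perturbation?
gene_variance = rng.gamma(shape=2.0, scale=1.0, size=n_genes)
responds = rng.random(n_genes) < 1 / (1 + np.exp(-(gene_variance - 2.0)))  # variance-driven labels
attention_edge = 0.3 * gene_variance + rng.normal(0, 1.0, n_genes)         # edge score partly tracks variance

# Trivial-baseline comparison: does the model-free statistic already match
# the model-derived score on the same prediction task?
print("AUROC, variance only      :", roc_auc_score(responds, gene_variance))
print("AUROC, attention edge only:", roc_auc_score(responds, attention_edge))
```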
3.2. A note on the cautionary nature of these results
I want to be upfront about something: I tried a lot of ideas, and many of the simple ones did not work. The field's implicit narrative has been that attention patterns in biological transformers straightforwardly encode regulatory networks (again, here and here, but also in many other places), and that extracting this information is primarily an engineering challenge (find the right layer, the right aggregation, the right thresholding). What I found instead is that the relationship between attention patterns and biological regulation is far more complex and confound-laden than this narrative suggests.
But I think this negative result is itself highly informative, for two reasons. The first is that it tells the field where not to look, which saves everyone the effort of independently discovering the same dead ends. The second, which I think is more important, is that the systematic framework I built means that when new biological foundation models emerge (and they will, with better architectures, more data, and potentially different training objectives), testing them against this battery of analyses is straightforward rather than requiring reinvention from scratch. The framework accelerates the entire mechanistic interpretability pipeline for this model class, even though many of its current outputs are negative.
3.3. Connections to NLP mechanistic interpretability
Before presenting the specific findings, it is worth noting that several of the phenomena I document have clear parallels in the NLP mechanistic interpretability literature, though the biological setting allows me to push certain questions further than is currently possible with language models. The finding that attention patterns do not reliably indicate computationally important features echoes long-standing results on attention and explanation, but my causal ablation findings go beyond showing that many heads are prunable: I show that the heads most aligned with known ground truth are the most dispensable, which is a qualitatively stronger negative result. The layer-structured biological representations I find are reminiscent of the classical layer-specialized circuits documented in LLMs (Olsson et al. 2022 on induction heads, Elhage et al. on superposition), but in biology we can validate the content of each layer against independently curated databases of protein interactions and transcriptional regulation, which is a luxury that NLP interpretability researchers do not currently have. So the methodological tools developed here, particularly the incremental-value framework, the non-additivity diagnostics for activation patching, and the confound decomposition battery, can prove useful to people working on interpretability in general.
4. What Works: Positive and Constructive Findings
The negative results get the headlines (and they should, because the "attention as GRN" claim is the one the field has been banking on), but the positive findings are where the constructive path forward begins. These are the things that survived the full stress-testing battery, and I think each of them points toward something real about what these models have learned.
4.1. Layer-organized biological structure: protein interactions early, transcriptional regulation late
When I benchmarked Geneformer attention edges against multiple biological reference databases across all 18 layers, protein-protein interaction signal (measured against the STRING database) was strongest at the earliest transformer layer and decreased monotonically with depth. Transcriptional regulation signal (measured against TRRUST, a curated database of transcription factor targets) showed the opposite pattern: it increased with depth and peaked around L15. The cross-layer profiles for these two types of biological signal are anti-correlated, and functional co-annotation signals from pathway databases showed their own distinct depth profiles.
This is interesting, and not just as a biological finding. It means the model has self-organized its layers into a hierarchy that separates different types of biological relationship: physical protein interactions in the early layers, transcriptional regulation in the late layers, with functional pathway associations distributed in between. This is not something the training objective directly incentivizes (the model is just predicting masked gene identities from context), so the layer specialization reflects structure the model discovered on its own.
Critically, this signal survives expression residualisation. When I controlled for pairwise expression similarity (which would remove any signal that was just "these genes are co-expressed, therefore they look related"), 97% of the TRRUST regulatory signal at L15 was retained. So the layer-organized structure is not just a re-encoding of pairwise co-expression in attention-matrix form; it indeed captures something beyond what simple correlation between gene pairs would give you.
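The layer-profiling analysis underlying this result is conceptually simple. Here is a schematic sketch with toy data and illustrative names; the real pipeline additionally handles gene-ID matching, head aggregation, directionality, and the expression-residualisation control.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def layer_edge_auroc(attn_per_layer, reference_edges, n_genes):
    """attn_per_layer: list of (n_genes, n_genes) arrays, one aggregated
    attention map per layer (already averaged over heads and cells).
    reference_edges: set of (i, j) index pairs from a database such as STRING
    or TRRUST. Returns AUROC of attention scores against the reference, per layer."""
    labels = np.zeros((n_genes, n_genes), dtype=int)
    for i, j in reference_edges:
        labels[i, j] = 1
    off_diag = ~np.eye(n_genes, dtype=bool)   # self-edges are excluded

    return [roc_auc_score(labels[off_diag], attn[off_diag]) for attn in attn_per_layer]

# Toy example: 3 layers, 50 genes, a random "reference" of ~100 edges.
rng = np.random.default_rng(1)
n_genes = 50
attn_per_layer = [rng.random((n_genes, n_genes)) for _ in range(3)]
reference = {(int(i), int(j)) for i, j in rng.integers(0, n_genes, size=(100, 2)) if i != j}
print(layer_edge_auroc(attn_per_layer, reference, n_genes))
```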
4.2. Cell-State Stratified Interpretability (CSSI) as a constructive methodological tool
One of the things I discovered while investigating why attention-based GRN recovery seemed to get worse as you added more cells (which is the opposite of what you would naively expect) is that the problem is not really about "more data makes models worse." The problem is about heterogeneity dilution: when you pool attention patterns across cells in different states (different cell types, different stages of differentiation, different activation states), you average together cell-state-specific regulatory signals that may point in different directions, and the result is a washed-out mess that retains only the regulatory relationships that are universal across all included states.
The solution I developed, Cell-State Stratified Interpretability (CSSI), is conceptually simple: instead of computing attention-derived edge scores across all cells at once, you first cluster cells into relatively homogeneous cell-state groups (using Leiden clustering on the model's own embeddings, so the stratification is informed by what the model itself has learned), compute edge scores within each stratum separately, and then aggregate across strata using max or mean operations. The optimal number of strata in the datasets I tested was around 5-7, which roughly corresponds to the major cell-state subdivisions present in the data.
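A minimal sketch of the CSSI procedure, assuming you already have per-cell attention-derived edge matrices and per-cell model embeddings. The paper stratifies with Leiden clustering on the model's own embeddings; the sketch below substitutes KMeans purely to stay dependency-light, and the function and argument names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cssi_edge_scores(per_cell_edges, embeddings, n_strata=6, agg="max"):
    """Cell-State Stratified Interpretability, schematically.
    per_cell_edges: (n_cells, n_genes, n_genes) attention-derived edge scores.
    embeddings:     (n_cells, d) model embeddings used to define cell states.
    Stratify cells, compute edges within each stratum, then aggregate."""
    strata = KMeans(n_clusters=n_strata, n_init=10, random_state=0).fit_predict(embeddings)

    # Edge matrix within each stratum separately (mean over that stratum's cells).
    per_stratum = np.stack([
        per_cell_edges[strata == s].mean(axis=0) for s in range(n_strata)
    ])

    # Aggregate across strata: max keeps edges that are strong in any one state.
    return per_stratum.max(axis=0) if agg == "max" else per_stratum.mean(axis=0)

# Toy usage: 300 cells, 40 genes, 16-dimensional embeddings.
rng = np.random.default_rng(2)
edges = rng.random((300, 40, 40))
emb = rng.normal(size=(300, 16))
print(cssi_edge_scores(edges, emb).shape)  # (40, 40)
```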
The results are substantial: CSSI improves TRRUST regulatory edge recovery by up to 1.85-fold compared to unstratified computation. Null tests with random strata assignments confirm that the improvement is not an artifact of the stratification procedure inflating false positives; it specifically requires biologically meaningful strata. In synthetic experiments where I controlled the ground truth, CSSI with oracle labels maintained F1 ≥ 0.90 across all cell count configurations, while pooled inference dropped from ~0.85 at 200 cells to ~0.51 at 1,000 cells.
4.3. Attention diverges from correlation in context-dependent ways
One of the strongest pieces of evidence that these models have learned something real, rather than just repackaging correlation statistics in a more expensive way, comes from comparing how attention edges and correlation edges perform across different cell types and perturbation modalities:
In K562 cells under CRISPRi (gene silencing), attention and correlation are statistically indistinguishable for predicting perturbation targets. In K562 cells under CRISPRa (gene activation), attention actually performs worse than correlation. In RPE1 cells under CRISPRi, attention significantly outperforms correlation. In iPSC-derived neurons, attention trends better than correlation but the sample is smaller.
If attention were simply a re-encoding of co-expression, you would expect a uniform relationship across contexts: attention and correlation would always perform similarly. The fact that the relationship is context-dependent, and that it flips direction depending on cell type and perturbation modality, means the models have learned something that varies between biological contexts in a way that simple co-expression does not. Whether that something is causal regulatory structure, more complex statistical dependencies, or some other biologically meaningful feature is a question the current evidence cannot fully resolve, but the context-dependence itself is a signal that the models are doing more than just memorizing gene-gene correlations.
(I should note that the RPE1 advantage, despite being statistically robust, turns out to decompose into confound structure when subjected to the full battery, as I discuss in Section 5. But the existence of context-dependence across all four systems is not explained by confounding, and remains a genuine positive finding about the models' representational capacity.)
4.4. Some transcription factors show robust pairwise regulatory signal in attention edges
The aggregate picture (which I discuss more in Section 5) is that attention-derived edges add zero incremental value over gene-level features for predicting perturbation responses. But this aggregate hides real heterogeneity at the level of individual transcription factors. When I performed per-TF bootstrap analyses, 7 out of 18 evaluable transcription factors showed robust edge-level signal, with a global AUROC 95% confidence interval of [0.71, 0.77]. There was also a suggestive trend that "master regulators" (transcription factors known to control broad developmental programs) showed higher AUROC than other TF categories, though this trend did not survive multiple testing correction given the small sample of evaluable TFs.
This matters because it suggests the blanket conclusion "attention edges are useless for regulatory inference" is too strong as a claim about all regulatory relationships. For some transcription factors, operating in some contexts, attention-derived edges may genuinely capture pairwise regulatory information. Identifying which TFs and which contexts is a direction for future work that could turn the current vague hope into a targeted extraction strategy.
4.5. Cross-species conservation reveals biologically meaningful structure in edge scores
As a separate validation axis, I compared correlation-based TF-target edge scores computed independently in human and mouse lung tissue, matched via one-to-one orthologs. The global conservation was striking: Spearman ρ = 0.743 across 25,876 matched edges, p < 10⁻³⁰⁰, with 88.6% sign agreement and top-k overlaps enriched by 8× to 484× over random expectation.
But what makes this finding informative rather than just impressive is that the conservation is not uniform across transcription factors. Lineage-specifying TFs (those that define cell identity, like NKX2-1 for lung epithelium) show near-perfect cross-species transfer, while signaling-responsive TFs (those that respond to environmental stimuli, like STAT1 or HIF1A) transfer poorly. This pattern makes perfect biological sense: lineage specification is deeply conserved across mammalian evolution, while signal-responsive regulation adapts to species-specific environmental niches. The fact that edge scores recapitulate this known biological pattern, and that the recapitulation is TF-class-dependent in the predicted direction, provides converging evidence that these scores capture real biological structure, even though they may not capture it in the causal form that the strongest interpretability claims require.
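Schematically, the cross-species comparison reduces to matching edges through one-to-one orthologs and measuring rank agreement. A minimal sketch with toy scores and an illustrative ortholog map:

```python
import numpy as np
from scipy.stats import spearmanr

def cross_species_conservation(human_scores, mouse_scores, ortholog_map):
    """human_scores / mouse_scores: dicts mapping (tf, target) gene-symbol pairs
    to edge scores in each species. ortholog_map: human symbol -> its one-to-one
    mouse ortholog. Returns Spearman rho, p-value, and sign agreement over
    edges present in both species."""
    h_vals, m_vals = [], []
    for (tf, tgt), h in human_scores.items():
        key = (ortholog_map.get(tf), ortholog_map.get(tgt))
        if key in mouse_scores:
            h_vals.append(h)
            m_vals.append(mouse_scores[key])
    h_vals, m_vals = np.array(h_vals), np.array(m_vals)
    rho, p = spearmanr(h_vals, m_vals)
    sign_agreement = np.mean(np.sign(h_vals) == np.sign(m_vals))
    return rho, p, sign_agreement

# Toy usage with made-up edge scores.
human = {("NKX2-1", "SFTPC"): 0.8, ("STAT1", "IRF1"): 0.2, ("FOXA2", "SFTPB"): 0.6}
mouse = {("Nkx2-1", "Sftpc"): 0.7, ("Stat1", "Irf1"): -0.1, ("Foxa2", "Sftpb"): 0.5}
orthologs = {"NKX2-1": "Nkx2-1", "SFTPC": "Sftpc", "STAT1": "Stat1",
             "IRF1": "Irf1", "FOXA2": "Foxa2", "SFTPB": "Sftpb"}
print(cross_species_conservation(human, mouse, orthologs))
```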
5. What Doesn't Work: The Key Negative Findings and Why They Matter
This is where the stress-testing framework earns its keep. Each negative finding survived multiple robustness checks and cross-context replications, and together they present a coherent picture that is hard to dismiss as artifact or bad luck.
5.1. Gene-level baselines dominate perturbation prediction, and you don't need a foundation model for that
This is the single most important negative finding, and it reframes everything else. When I tested how well different features predict which genes will respond to a CRISPR perturbation, simple gene-level features (expression variance, mean expression, detection frequency) came out on top, with attention-derived and correlation-derived pairwise edge scores trailing behind.
All comparisons with the gene-level baselines are significant at p < 10⁻¹². The implication is that most of what looks like "regulatory signal" in pairwise edge scores, whether derived from attention or from correlation, is actually reflecting univariate gene properties: genes that are highly variable, highly expressed, or frequently detected are more likely to be differentially expressed in response to any perturbation, and pairwise edges are largely tracking this property rather than specific regulatory relationships.
It is the most boring possible explanation for the observed performance, and it explains the bulk of the variance.
5.2. Pairwise edge scores add literally zero incremental value over gene-level features
The gene-level baseline dominance could in principle coexist with genuine incremental value from pairwise edges: maybe edges add a small amount of unique information on top of what gene-level features provide. I tested this with a conditional incremental-value analysis on 559,720 observation pairs, with statistical power exceeding 99% to detect ΔAUROC = 0.005.
The result: adding attention edges to gene-level features yields ΔAUROC = −0.0004. Adding correlation edges yields ΔAUROC = −0.002. These are essentially exact zeros, and they persist across all tested generalisation protocols (cross-gene splits, cross-perturbation splits, joint splits), both linear and nonlinear models (logistic regression and GBDT), and multiple metrics (AUROC, AUPRC, top-k recall). The same pattern replicates independently in RPE1 cells, where gene-level features alone achieve AUROC = 0.942 and adding attention edges yields ΔAUROC = +0.0001.
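The core of the incremental-value analysis is a nested-model comparison under splits that prevent gene-identity leakage. Here is a minimal sketch on synthetic data; the feature names, split rule, and data-generating process are all illustrative, and the real analysis additionally uses GBDT models, cross-perturbation and joint splits, and AUPRC / top-k recall.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_pairs, n_genes = 20000, 500

# Each row is a (perturbed TF, candidate target gene) pair.
target_gene = rng.integers(0, n_genes, n_pairs)
gene_level = rng.normal(size=(n_genes, 3))           # per-gene features: variance, mean, detection
X_gene = gene_level[target_gene]                     # gene-level features for each pair
X_edge = rng.normal(size=(n_pairs, 1))               # attention-derived edge score for each pair
y = (X_gene[:, 0] + 0.5 * rng.normal(size=n_pairs)) > 0.5   # response label driven by gene-level signal

# Cross-gene split: no target gene appears in both train and test.
train = target_gene % 5 != 0
test = ~train

def auroc(features):
    clf = LogisticRegression(max_iter=1000).fit(features[train], y[train])
    return roc_auc_score(y[test], clf.predict_proba(features[test])[:, 1])

base = auroc(X_gene)
full = auroc(np.hstack([X_gene, X_edge]))
print(f"gene-level only: {base:.4f}   + edges: {full:.4f}   delta: {full - base:+.4f}")
```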
The supplement exhaustively tests this null against every objection I could think of: different metrics, different model classes, different split designs, different feature encodings. The biggest improvement found anywhere was ΔAUPRC ≈ +0.009 under one specific parameterization, which is less than 4% relative improvement and does not survive correction. Whatever biological structure attention edges contain, it is completely redundant with gene-level features for predicting what happens when you perturb genes, at least under the evaluation protocols I tested.
5.3. Causal ablation reveals that "regulatory" heads are the most dispensable ones
This result is, in my opinion, the most striking finding in the entire paper from the standpoint of mechanistic interpretability methodology.
Geneformer V2-316M has 324 attention heads across 18 layers. I ranked heads by their alignment with known regulatory relationships (TRRUST database) and then ablated them. If attention patterns at regulatory-aligned heads are where the model stores and uses regulatory knowledge, removing those heads should degrade the model's ability to predict perturbation responses.
What actually happened: ablating the top-5, top-10, top-20, or top-50 TRRUST-ranked heads produced no significant degradation in perturbation prediction. Meanwhile, ablating 20 randomly selected heads caused a significant performance drop. I also tested uniform attention replacement (forcing attention weights to 1/n while preserving value projections) on the TRRUST-ranked heads, with no degradation. I tested MLP pathway ablation in the purported "regulatory" layers: still no degradation, while MLP ablation in random layers could cause significant drops.
Crucially, intervention-fidelity diagnostics confirmed that these ablations were actually changing the model's internal representations: TRRUST-ranked heads produce 23× larger logit perturbation when ablated compared to random heads. The interventions were material; the model just did not rely on those heads for perturbation prediction. The computation that matters for predicting what happens when you knock down a gene appears to live in the value/FFN pathway, distributed across many components in a redundant fashion, rather than in the learnable attention patterns that interpretability pipelines extract.
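For readers who want the shape of the ablation-plus-fidelity procedure in code, here is a hedged sketch built around a toy attention model. This is not Geneformer's API: in the real analysis heads are selected by TRRUST alignment and the downstream metric is perturbation prediction, not raw logits; only the structure of "ablate, check the logit shift, then recompute the task metric" carries over.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyAttnModel(nn.Module):
    """Toy stand-in for a transformer: one multi-head self-attention layer
    whose per-head outputs can be zeroed individually, plus a readout."""
    def __init__(self, d_model=32, n_heads=4, vocab=100):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.emb = nn.Embedding(vocab, d_model)
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.readout = nn.Linear(d_model, vocab)

    def forward(self, tokens, ablate_heads=()):
        x = self.emb(tokens)                                    # (batch, seq, d)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        def split(t):                                           # (batch, heads, seq, d_head)
            return t.view(*t.shape[:2], self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-1, -2) / self.d_head**0.5, dim=-1)
        head_out = attn @ v                                     # (batch, heads, seq, d_head)
        for h in ablate_heads:                                  # zero-ablate selected heads
            head_out[:, h] = 0.0
        merged = head_out.transpose(1, 2).reshape(*x.shape)
        return self.readout(x + self.out(merged))               # logits

model = TinyAttnModel()
tokens = torch.randint(0, 100, (8, 16))

with torch.no_grad():
    clean = model(tokens)
    ablated = model(tokens, ablate_heads=[0, 2])

# Intervention-fidelity diagnostic: how much did the logits actually move?
logit_shift = (ablated - clean).norm() / clean.norm()
print(f"relative logit shift from ablation: {logit_shift:.3f}")
# In the real analysis, the perturbation-prediction metric is then recomputed
# with and without the ablation; a null result only counts as evidence if the
# logit shift shows the intervention was material.
```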
I also tested the obvious "fix": if the relevant computation is in the value pathway rather than the attention pattern, maybe we should extract edge scores from the context layer (softmax(QK^T)·V) using value-weighted cosine similarity. This does not help. Value-weighted scores actually underperform raw attention and correlation, and adding them to gene-level features slightly hurts incremental value. The context vectors appear to represent a blended "information receipt" signal rather than direct pairwise coupling, and whatever perturbation-predictive computation the model performs is distributed in a way that no simple pairwise score extraction can recover.
5.4. Do these models know about gene regulation at all, or did we just fail to extract it?
The negative results above establish that I could not extract meaningful gene regulatory network information from attention patterns using the methods I tested. But this leaves a crucial epistemic question open: are we looking at an extraction failure (the knowledge is in the model somewhere, but not in the attention weights and not in a form our methods can reach), or a knowledge absence (the models simply never learned causal regulatory relationships in the first place)? These are very different claims, and the second is substantially stronger than the first.
One natural way to probe this distinction is through surface capabilities. If a model can accurately predict what happens when you knock down a gene, then it must have learned something about gene regulation internally, regardless of whether that knowledge is accessible through attention pattern analysis. Surface capabilities provide a minimum baseline for internal knowledge: the model knows at least as much as its best task performance implies, even if our interpretability tools cannot locate where that knowledge lives.
Unfortunately, the evidence on surface capabilities of single-cell foundation models is quite conflicting, and the field is in the middle of a heated debate about it. On one hand, the original papers make strong claims: Theodoris et al. (2023) reported that Geneformer's in silico perturbation approach identified a novel transcription factor in cardiomyocytes that was experimentally validated, and scGPT (Cui et al., 2024) claimed state-of-the-art performance on perturbation prediction, cell type annotation, and gene network inference after fine-tuning. These results suggest that the models have learned something biologically meaningful during pretraining.
On the other hand, a growing body of independent benchmarking work paints a much more skeptical picture. Ahlmann-Eltze et al. compared five foundation models against deliberately simple linear baselines for perturbation effect prediction and found that none of the foundation models outperformed the baselines, concluding that pretraining on atlas data provided "only a small benefit over random embeddings." Csendes et al. found that even the simplest baseline of taking the mean of training examples outperformed scGPT and scFoundation. Wenteler et al. showed that both scGPT and Geneformer perform worse than selecting highly variable genes and using established methods like Harmony or scVI in zero-shot cell type clustering. Bendidi et al. ran a comprehensive perturbation-oriented benchmark and concluded that foundation models show competitive performance only in batch effect reduction, where even random embeddings achieve near-optimal results. Perhaps most provocatively, Chen & Zou showed that GenePT, which simply uses ChatGPT text embeddings of gene descriptions from NCBI (containing zero expression data), achieves comparable or better performance than Geneformer and scGPT on many of the same downstream tasks!
A consistent pattern in this debate is that the original model papers evaluate primarily with fine-tuning, while independent benchmarks emphasize zero-shot performance. Fine-tuned models can look strong, but it becomes difficult to disentangle whether the strong performance comes from pretrained representations or from the fine-tuning data itself. Zero-shot evaluation is arguably the fairer test of what pretraining actually learned, and this is precisely where the models tend to struggle.
What does this mean for interpreting my results? The honest answer is that I cannot fully resolve the extraction-vs.-absence question with the data we have. Both model families converge to similar near-random unstratified GRN recovery despite fundamentally different architectures (gene-token vs. rank-based tokenization), different training objectives, and different scales, which suggests this is not a model-specific quirk. But the convergence is consistent with both interpretations: either both architectures fail to learn causal regulation from observational expression data (because co-expression is the dominant signal and the training objectives do not specifically incentivize causal structure), or both architectures learn it but encode it in representations that neither attention-based nor simple pairwise extraction methods can reach. The mixed evidence on surface capabilities does not decisively resolve this in either direction, though the weight of the independent benchmarking evidence leans toward the more pessimistic interpretation for current-generation models. The next obvious question is, will stacking more layers help?
6. What the Biological Setting Reveals About Activation Patching
Most of the findings in Sections 4 and 5 are primarily about biology. This section is instead about a methodological result concerning activation patching itself, one that is, as far as I know, novel and directly relevant to anyone using this technique on any transformer model, biological or otherwise.
6.1. The non-additivity problem is formal, quantifiable, and large
Activation patching (sometimes called causal mediation analysis) is one of the workhorse tools of current mechanistic interpretability. The standard workflow is: intervene on one component at a time (a head, an MLP block, a residual stream position), measure the effect on some downstream behavior, and rank components by their individual effects. The components with the largest effects are declared to be "the circuit" responsible for that behavior.
This workflow implicitly assumes additivity: that the effect of the full model is well-approximated by the sum of individual component effects. When this assumption holds, single-component rankings are meaningful. When it fails, they can be systematically wrong in ways that are not just noisy but structurally biased.
The mech interp community is well aware that interactions can matter in principle. Nanda explicitly notes that attribution patching "will neglect any interaction terms, and so will break when the interaction terms are a significant part of what's going on." Heimersheim & Nanda discuss backup heads and the Hydra effect as specific instances of non-additive behavior, where ablating one component causes others to compensate in ways that confound single-component attribution. Makelov et al. demonstrate a related failure mode at the subspace level, showing that patching can activate dormant parallel pathways that produce illusory interpretability signals. The qualitative concern is not new, and I want to credit the people who have been raising it. What has been missing, to my knowledge, is (a) a formal framework for quantifying how much the standard single-component workflow's rankings are biased by interactions, (b) empirical measurement of how large that bias actually is in a real model rather than a constructed example, and (c) certificates for which pairwise rankings survive the observed non-additivity. That is what I provided.
I formalize the bias using a decomposition involving Möbius interaction coefficients. The key quantity is the relationship between single-component mediation estimates and Shapley values (which are interaction-aware by construction). Single-component estimates equal Shapley values only when all interaction terms vanish; otherwise, the discrepancy is a structured function of the interaction landscape, and it can push the ranking in a consistent wrong direction rather than just adding noise.
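For readers who want the formal shape, here is one standard way to write the decomposition in generic cooperative-game notation; the paper's exact definitions and estimators may differ in detail. Let v(S) denote the behavioral effect of intervening on a set S of components out of the full set N.

```latex
% Set-function framing: v(S) = effect of intervening on component set S.
% Möbius (interaction) coefficients m(T) and their inversion:
v(S) = \sum_{T \subseteq S} m(T), \qquad
m(T) = \sum_{S \subseteq T} (-1)^{|T| - |S|} \, v(S).

% Single-component patching estimate vs. Shapley value of component i:
\Delta_i = v(\{i\}) - v(\varnothing) = m(\{i\}), \qquad
\phi_i = \sum_{T \ni i} \frac{m(T)}{|T|}.

% Their discrepancy is exactly the interaction mass involving i, and the
% aggregate non-additivity is the total higher-order interaction mass:
\phi_i - \Delta_i = \sum_{T \ni i,\; |T| \ge 2} \frac{m(T)}{|T|}, \qquad
v(N) - v(\varnothing) - \sum_{i \in N} \Delta_i = \sum_{|T| \ge 2} m(T).
```

The final quantity is the aggregate non-additivity measured empirically below; it vanishes exactly when all higher-order interaction terms vanish, which is the regime in which single-component rankings can be trusted.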
The empirical question is whether this matters in practice. In the biological transformers I studied, the answer is clearly yes. Using frozen cross-tissue mediation archives, I computed lower bounds on aggregate non-additivity (the residual between total effect and the sum of individual component effects, adjusted for measurement uncertainty). In 10 of 16 run-pairs, this lower bound was positive, meaning the observed non-additivity exceeds what measurement noise alone could explain. The median lower-bound ratio relative to the total effect was 0.725, which means interactions account for a substantial fraction of the overall model behavior in the median case.
6.2. Ranking certificates collapse under structural bias assumptions
The most practically concerning result is not just that non-additivity exists, but what it does to the reliability of component rankings. I introduced "ranking certificates" that ask: given the observed level of non-additivity, what fraction of pairwise comparisons between components (e.g., "head A matters more than head B") can we certify as robust to interaction-induced bias?
Under the structural-bias assumptions informed by the empirical non-additivity measurements, the fraction of certifiably correct pairwise rankings collapses by an order of magnitude or more compared to what the single-component estimates naively suggest. In concrete terms: if you rank 50 heads by their individual activation patching effects and declare the ranking meaningful, the certification analysis suggests that only a small fraction of the pairwise orderings in that ranking are robust to interaction effects. The rest could be wrong, and wrong in a way that is invisible to the standard workflow because the standard workflow does not check for it.
6.3. What this means for mech interp practice
I have demonstrated the non-additivity bias and its consequences in biological transformers with 316 million parameters. I have not demonstrated it in GPT-2, Llama, or any other language model, and the magnitude of the effect could be different in those architectures. The formal framework applies to any transformer (it is architecture-agnostic), but the empirical severity is an open question for LLMs.
That said, I think the results warrant concrete changes to standard practice for anyone doing activation patching or similar single-component mediation analysis:
First, report the residual non-additivity. This is the gap between the total effect of a multi-component intervention and the sum of corresponding single-component effects. It is cheap to compute (you need one additional intervention beyond what you already do) and it directly tells you how much of the model's behavior lives in interactions rather than in individual components. If this residual is large, your single-component rankings are unreliable, and you should know that before you build a mechanistic story on top of them. A code sketch of this computation follows at the end of this list.
Second, compute ranking certificates for your top-ranked components. If you are going to claim "these are the most important heads for behavior X," you should check whether that ranking is robust to the level of non-additivity you actually observe. If only 10% of pairwise orderings survive certification, your "top 5 heads" may not actually be the top 5 heads.
Third, for your most important mechanistic claims, consider using interaction-aware alternatives like Shapley-based decompositions. These are more expensive (combinatorially so in the worst case, though sampling-based approximations exist), but they handle interactions correctly by construction. The synthetic validation in my supplement shows that Shapley-value estimates recover true interaction rankings with approximately 91% improvement in rank correlation compared to single-component estimates, which suggests the additional cost is worth it when the claim matters.
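The first and third recommendations are cheap to illustrate in code. Below is a minimal sketch of both: the residual non-additivity check (one extra joint intervention) and a permutation-sampling Shapley estimator, applied to a toy effect function in which two components only matter together. Everything here is illustrative; `effect` stands in for whatever scalar behavioral metric you already compute after patching.

```python
import random

def residual_nonadditivity(effect, components):
    """Gap between the joint effect of ablating all components together and the
    sum of their individual effects. effect(subset) -> scalar metric change
    relative to the clean run."""
    individual = [effect([c]) for c in components]   # what the standard workflow already computes
    joint = effect(components)                       # the one extra intervention
    return joint - sum(individual)

def sampled_shapley(effect, components, n_permutations=200, seed=0):
    """Monte Carlo Shapley values: average marginal contribution of each
    component over random orderings of the ablation set."""
    rng = random.Random(seed)
    values = {c: 0.0 for c in components}
    for _ in range(n_permutations):
        order = components[:]
        rng.shuffle(order)
        prefix, prev = [], effect([])
        for c in order:
            prefix.append(c)
            cur = effect(prefix)
            values[c] += (cur - prev) / n_permutations   # marginal contribution of c
            prev = cur
    return values

# Toy, deliberately non-additive effect: heads h3 and h7 only matter together.
def toy_effect(ablated):
    s = set(ablated)
    return 0.1 * len(s) + (0.5 if {"h3", "h7"} <= s else 0.0)

heads = ["h3", "h7", "h12"]
print("residual non-additivity:", residual_nonadditivity(toy_effect, heads))
print("sampled Shapley values:", sampled_shapley(toy_effect, heads))
```

In the toy example the single-component effects are identical for all three heads, while the Shapley values correctly attribute the interaction to the pair that carries it, which is exactly the failure mode the ranking certificates are designed to flag.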
The broader methodological point is that "patch one component, measure effect, rank components" feels like a clean experimental design, and it is, as long as additivity holds. But additivity is an empirical property of the specific model and behavior you are studying, not a logical guarantee, and in the systems I studied, it fails badly enough to undermine the rankings it produces. I suspect this is not unique to biological transformers.
6.4. A note on metric sensitivity across scales
One additional observation that may be useful, though it is less novel than the non-additivity result: I found that the same underlying attention scores can show degrading top-K F1 with more data (all 9 tier×seed pairs, sign test p = 0.002) and improving AUROC with more data (mean 0.858 → 0.925 → 0.934) simultaneously. This reflects the difference between evaluating the extreme tail of a ranking under sparse references versus evaluating the full ranking. But it means that claims about how "interpretability quality scales with data/compute/parameters" are only meaningful if you specify which metric you are tracking and why, because different metrics can give exactly opposite answers about the same underlying scores.
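The divergence is easy to reproduce in miniature. The synthetic sketch below constructs two score vectors where the global ranking improves while the extreme top of the ranking gets noisier, so AUROC goes up as top-K F1 goes down; the construction is purely illustrative and is not derived from the real attention scores.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def top_k_f1(y, scores, k):
    top = np.argsort(-scores)[:k]
    pred = np.zeros_like(y)
    pred[top] = 1
    tp = int((pred * y).sum())
    if tp == 0:
        return 0.0
    precision, recall = tp / k, tp / y.sum()
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(4)
n, k = 5000, 50
y = np.zeros(n, dtype=int)
y[:250] = 1                                      # 5% true edges
rng.shuffle(y)
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]

# "Small data" scores: noisy overall ranking, but a clean top of the list.
small = rng.normal(0, 1, n) + 0.5 * y
small[pos[:40]] = 10 + rng.random(40)            # 40 true edges pinned to the top

# "Large data" scores: much better global separation, but the extreme top
# now contains a handful of confidently-scored false edges.
large = rng.normal(0, 1, n) + 2.0 * y
large[neg[:30]] = 12 + rng.random(30)            # 30 false edges pinned above everything

for name, s in [("small", small), ("large", large)]:
    print(name, "AUROC:", round(roc_auc_score(y, s), 3), "top-50 F1:", round(top_k_f1(y, s, k), 3))
```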
7. Next Steps: Toward a Program for Knowledge Extraction from Biological Foundation Models
The negative results in the current paper close off some paths but open others. If you accept the evidence that attention-based GRN extraction does not work, the question becomes: what might? This section outlines what I think are the most promising directions, ordered roughly from most to least concretely specified.
7.1. Intervention-aware pretraining
The most direct response to the training-signal concern raised in Section 5.4 is to change the training data. Current single-cell foundation models are pretrained on observational expression profiles, where co-expression is the dominant statistical signal and causal regulatory relationships are a much weaker, sparser, and noisier signal that the training objective does not specifically incentivize. If you want models that learn causal regulation, the most straightforward path is to train them on data that contains causal information.
Concretely, this means pretraining on (or at least fine-tuning with) perturbation experiments: Perturb-seq, CRISPRi/CRISPRa screens, and similar interventional datasets where you observe what happens when you knock a gene down and can therefore learn which genes are causally upstream of which others.
The challenge is scale. Perturbation datasets are orders of magnitude smaller than the observational atlases used for pretraining (tens of thousands of perturbations versus tens of millions of cells). Whether this is enough data to learn robust regulatory representations, or whether the perturbation signal will be drowned out by the much larger observational pretraining corpus, is an open empirical question, but I think my other research on scaling laws for biological foundation models may shed some light on it.
7.2. Geometric and manifold-based interpretability
One of the most important recent developments in mechanistic interpretability, and one that I did not explore in my paper, is the recognition that models encode complex knowledge not as discrete pairwise relationships but as geometric structure in their representation spaces. This is directly relevant to the failure modes documented in this paper.
The most relevant example comes from Goodfire's work on Evo 2, a DNA foundation model trained on over 9 trillion nucleotides. Using sparse autoencoders on residual stream activations, they discovered that the phylogenetic tree of life is encoded as a curved manifold in the model's learned feature space: species relationships correspond to geodesic distances along this manifold, with the overall structure organized around a roughly 10-dimensional flat representation overlaid with higher-curvature deviations that capture additional biological properties. This is, to my knowledge, one of the most complex natural manifolds yet characterized in a foundation model, and crucially, it is a biological foundation model where the extracted knowledge was validated against known ground truth (established phylogenies). This is exactly the kind of success story that the single-cell interpretability field needs but does not yet have.
The methodological lesson for single-cell models is pointed: if gene regulatory knowledge is encoded geometrically in the residual stream (as manifolds, subspaces, or curved representations) rather than as discrete pairwise relationships in attention matrices, then no amount of sophisticated attention extraction will find it, because you are looking in the wrong representational format entirely.
This connects to a broader trend in the interpretability community. The linear representation hypothesis (that features correspond to directions in activation space) is being supplemented by the recognition that many important features live on nonlinear manifolds: circles for days of the week, hierarchical trees for taxonomic relationships, tori for periodic quantities, and more complex structures. Goodfire's own researchers note that "manifolds seem to be important types of representations, and ones that are not well-captured by current methods like sparse autoencoders," which suggests that even SAEs, the current dominant tool, may need manifold-aware extensions to fully characterize what these models have learned.
A concrete next experiment would be to train SAEs on residual stream activations of scGPT or Geneformer, look for geometric structures that correlate with known regulatory relationships, and test whether regulatory information that is invisible in attention patterns becomes visible in the learned feature space. If it does, the implication would be that the models have learned more about gene regulation than the attention-based methods could reveal. If it does not, that would strengthen the case for intervention-aware pretraining as the necessary next step.
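For concreteness, a generic sparse autoencoder of the kind this experiment would need looks roughly like the sketch below (PyTorch). The width, sparsity penalty, optimizer settings, and the activation-collection step are placeholders rather than recommendations, and production SAE training typically adds further tricks such as decoder-norm constraints and dead-feature resampling.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty on the hidden code,
    trained to reconstruct residual-stream activations."""
    def __init__(self, d_model=512, d_features=8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))    # sparse feature activations
        return self.decoder(f), f

def train_sae(acts, d_model=512, l1_coeff=1e-3, steps=1000, batch=256):
    """acts: (n_tokens, d_model) residual-stream activations collected from a
    chosen layer (e.g. the layer with peak TRRUST alignment)."""
    sae = SparseAutoencoder(d_model)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
    for _ in range(steps):
        x = acts[torch.randint(0, acts.shape[0], (batch,))]
        recon, f = sae(x)
        loss = ((recon - x) ** 2).mean() + l1_coeff * f.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

# Toy usage on random activations standing in for real residual streams.
sae = train_sae(torch.randn(10000, 512), steps=50)
```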
7.3. Probing residual streams: from aggregate statistics to feature-level analysis
My paper's methodology is primarily macro-level: aggregate statistics across many TF-target pairs, summary measures of head importance, average AUROC across perturbation conditions. This was deliberate (I wanted statistically robust claims with controlled multiple testing), but it means the analyses are inherently insensitive to fine-grained structure that might exist at the level of individual features or small groups of components.
The natural next step is to apply the standard NLP probing toolkit to single-cell foundation models. Train linear probes on residual stream representations at each layer to predict specific regulatory relationships (e.g., "is gene A a direct target of transcription factor B?"). If the probe succeeds where attention extraction fails, it would localize regulatory knowledge to specific layers' representations without requiring that it be readable from attention patterns. If the probe also fails, that is much stronger evidence for knowledge absence rather than mere extraction failure.
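A minimal sketch of such a probe is below. It is illustrative only: how pair representations are built from per-layer gene embeddings and where the labels come from are the real design decisions, and a faithful evaluation would need cross-gene splits as in Section 5.2 rather than the random split used here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def probe_layer(gene_embeddings, tf_target_pairs, labels, seed=0):
    """gene_embeddings: (n_genes, d) residual-stream representations of each
    gene at one layer. tf_target_pairs: (n_pairs, 2) index pairs (TF, target).
    labels: 1 if the pair is a known regulatory edge (e.g. in TRRUST), else 0.
    Returns held-out AUROC of a linear probe on concatenated pair embeddings."""
    X = np.hstack([gene_embeddings[tf_target_pairs[:, 0]],
                   gene_embeddings[tf_target_pairs[:, 1]]])
    # NOTE: for a real claim, split by gene rather than randomly to avoid
    # gene-identity leakage between train and test.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              stratify=labels, random_state=seed)
    probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])

# Toy usage: random embeddings and random labels should give AUROC near 0.5.
rng = np.random.default_rng(5)
emb = rng.normal(size=(1000, 64))
pairs = rng.integers(0, 1000, size=(5000, 2))
labs = rng.integers(0, 2, size=5000)
print(probe_layer(emb, pairs, labs))
```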
Beyond linear probes, the SAE-based feature discovery approach discussed in 7.2 could yield individual interpretable features that correspond to specific regulatory programs or pathway activations. If a sparse autoencoder trained on layer 15 residual streams (where my paper found peak TRRUST alignment in attention) produces features whose activation patterns correlate with known regulatory cascades, that would be a concrete positive result pointing toward the kind of mechanistic understanding the field is seeking.
One important caveat from my paper's own findings: the causal ablation results show that perturbation-predictive computation is distributed across many components in a redundant fashion rather than localized in identifiable circuit components. When ablating the heads most aligned with regulatory ground truth produces zero degradation while random ablation causes significant degradation, this suggests there may not be a clean "regulatory circuit" to find. Fine-grained circuit discovery tools work best when the computation is localized and modular; if it is genuinely distributed and redundant, as the evidence suggests, then even sophisticated circuit analysis may not produce the kind of clean mechanistic story we would like. The honest conclusion might be that these models perform regulatory-relevant computation through distributed, redundant representations that resist clean decomposition, which would be an important finding in its own right even if it is less satisfying than a circuit diagram.
7.4. Hybrid architectures, CSSI, and conformal uncertainty
Two shorter-term practical directions deserve mention, both of which build directly on infrastructure from my paper.
First, hybrid architectures that use foundation model embeddings as inputs to dedicated GRN inference modules rather than trying to extract edges from attention. The idea is to take the residual stream representations that the models learn (which clearly contain biological structure, as demonstrated by the layer-organized findings in Section 4) and feed them into purpose-built GRN inference algorithms as enriched gene features, rather than interpreting the attention matrix itself as a gene regulatory network. This sidesteps the attention extraction problem entirely while still leveraging whatever biological knowledge the foundation model has encoded during pretraining. Several GRN inference methods already accept gene embeddings as inputs (GEARS being a prominent example), and foundation model embeddings could serve as a drop-in upgrade over existing gene embedding approaches.
Second, the CSSI framework showed improvements of up to 1.85× in GRN recovery. CSSI could be extended with conformal prediction to provide confidence sets rather than point estimates: instead of extracting a single ranked list of regulatory edges, you would get a set of edges that are certified to contain the true regulatory relationships at a specified confidence level. Conformal prediction is well-suited to this because it provides finite-sample coverage guarantees without distributional assumptions, which is important in a domain where we do not know the distribution of regulatory edge scores. The combination of CSSI (to reduce cell-state heterogeneity) with conformal uncertainty quantification (to provide calibrated confidence) could produce "certified edge sets" that are smaller and more reliable than current approaches, even if the underlying signal is weaker than what the field originally hoped for.
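The conformal component, in its simplest split-conformal form, is only a few lines. The sketch below rests on strong assumptions: a held-out calibration set of known true edges and exchangeability between calibration edges and future true edges; how to combine it with CSSI strata and with the weaker observed signal is the actual research question.

```python
import numpy as np

def conformal_edge_set(cal_scores_true, all_scores, alpha=0.1):
    """Split-conformal edge set: given scores of held-out known-true edges
    (calibration) and scores of all candidate edges, return the candidates kept
    at the threshold that covers a future true edge with prob >= 1 - alpha."""
    n = len(cal_scores_true)
    # Conformal quantile of the calibration nonconformity scores (-score).
    q = np.quantile(-cal_scores_true, np.ceil((n + 1) * (1 - alpha)) / n,
                    method="higher")
    return np.where(-all_scores <= q)[0]   # indices of edges in the certified set

# Toy usage: true edges score higher on average than background edges.
rng = np.random.default_rng(6)
cal = rng.normal(2.0, 1.0, 200)            # scores of known true edges (calibration)
candidates = rng.normal(0.0, 1.0, 10000)   # scores of all candidate edges
kept = conformal_edge_set(cal, candidates, alpha=0.1)
print(f"kept {len(kept)} of {len(candidates)} candidate edges")
```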
7.5. What this suggests for the broader interpretability-for-biology agenda
Stepping back from the specific technical directions, I think the most important lesson from this work is about the value of systematic stress-testing before building on interpretability claims.
The "attention as GRN" idea in single-cell biology was not unreasonable. There were good theoretical reasons to think it might work (attention patterns represent pairwise gene relationships, regulatory networks are pairwise gene relationships, the models clearly learn biological structure). But it failed at every level that matters for actual biological utility. The positive results (layer structure, context dependence, per-TF heterogeneity) survived the same battery, which gives me much more confidence that they point toward something real.
8. Conclusion
This paper started as an attempt to extract gene regulatory networks from single-cell foundation models and ended as a methodological argument about how to do mechanistic interpretability honestly. The specific biological results matter for the computational biology community, but I think the broader lessons are relevant to anyone working on mechanistic interpretability in any domain.
I want to close with a pitch: if you like mechanistic interpretability, consider working on biological foundation models instead.
Beyond the methodological advantages, biological interpretability is, in my view, both more tractable and less dangerous than frontier LLM interpretability. The models are smaller (hundreds of millions of parameters rather than hundreds of billions), the input domain is more constrained (gene expression profiles rather than arbitrary natural language), and the knowledge you are trying to extract is better defined (regulatory networks, pathway activations, cell state transitions). You are not probing a system that might be strategically deceiving you, and the knowledge you extract has direct applications in drug discovery and disease understanding rather than in capability amplification. And I still really believe that there is a non-negligible chance that we can push biology far enough in the remaining time to amplify human intelligence.
TL;DR: I ran the most comprehensive stress-test to date of mechanistic interpretability for single-cell foundation models (scGPT, Geneformer): 37 analyses, 153 statistical tests, 4 cell types. Attention-based gene regulatory network extraction fails at every level that matters, mostly because trivial gene-level baselines already explain the signal and the heads most aligned with known regulation turn out to be the most dispensable for the model's actual computation. But the models do learn real layer-organized biological structure, and I found that activation patching in these models has a large, formally quantifiable non-additivity bias that undermines standard component rankings, which is likely relevant for LLM interpretability too. I urge you: if you like mechanistic interpretability, consider working on biological foundation models. They offer external ground truth for validating your methods, more tractable model scales, and direct biomedical payoff with lower dual-use risk than frontier LLM interpretability. Full research is available here.
1. Why I Work on Mechanistic Interpretability of Biological Models, Not LLMs
It is well accepted that mechanistic interpretability is one of the most naturally attractive research directions for technically oriented people who care about AI safety. It feels like science in the most satisfying sense: you have a complex system, you poke at it with carefully designed experiments, and you try to figure out what it's actually doing inside. It rewards exactly the kind of careful, detail-oriented thinking that draws people into alignment research in the first place, and the dream of understanding what happens between a model's inputs and outputs is compelling enough to sustain years of difficult work.
I want to honestly say that I believe, based both on my own reasoning and on arguments made by people whose judgment I take seriously, that mechanistic interpretability of general-purpose models carries risks that are insufficiently appreciated. The concern is relatively straightforward: deep mechanistic understanding of how capable models work can advance their capabilities (by revealing which circuits to scale, optimize, or compose), and perhaps more critically, early weak superintelligences could leverage interpretability tools and knowledge as a substrate for recursive self-improvement. However, this point is just to explain my motivation - agreeing or disagreeing on it is not important for the comprehension of this article.
At the same time, none of this means that mechanistic interpretability knowledge must remain unused and unapplied across the board. What it means is that we should think about where the risk-benefit calculus is most favorable, and I believe biological foundation models are an unusually good answer to that question, for three reasons that I think are individually sufficient and collectively quite strong.
First, advancing the capabilities of narrow biological models is likely to be locally beneficial. A single-cell foundation model that gets better at predicting gene regulatory responses to perturbations is not going to help anyone build a more capable language model or a more dangerous autonomous agent. These models process transcriptomic profiles, not natural language or general world-knowledge, and making them more capable means making biology research faster, not making general AI systems more dangerous. I mean, eventually it will also probably kill you, but general models will kill you much earlier, so the doom from biological models is "screened off". I do acknowledge that there are still some risks here, but I think it is still net positive because of the reasons I explain below.
Second, biological models are far more tractable as subjects for mechanistic study than LLMs. Geneformer V2, the largest model in my study, has 316 million parameters and 18 transformer layers. This is large enough to be interesting (it clearly learns non-trivial structure) but small enough to be, at least in principle, exhaustively analyzed with current tools. More importantly, biological models can be validated against experimental ground truth in ways that LLM interpretability simply cannot: we have CRISPR perturbation data that tells us what actually happens when you intervene on specific genes, we have curated databases of known regulatory relationships, and we can design targeted experiments to test specific mechanistic claims. This makes biology a better laboratory for developing and stress-testing interpretability methods, because when something looks like a mechanistic discovery, you can check whether it actually is one.
Third, and this is the motivation I care about most, I think biological foundation models have a genuine chance of radically advancing our understanding of human biology at the systems level. We have largely resolved the genomics level (sequencing is cheap and comprehensive) and made enormous progress on the structural level (AlphaFold and its successors). What remains is fundamentally the systems level: understanding how genes, proteins, cell states, tissues, and organisms interact as integrated wholes to produce the phenotypes we observe. Single-cell foundation models, trained on tens of millions of individual cellular transcriptomes, are plausible candidates for learning aspects of this systems-level organization. If we can extract that knowledge mechanistically, rather than treating these models as black boxes, the payoff for biomedicine and for our understanding of human biology could be substantial. I also believe, as I've argued previously, that advancing our understanding of human biology at the systems level is one of the most important things we can do for human intelligence augmentation, which in turn is one of the most important things we can do for alignment, but I will not try to carry that argument here and instead point the interested reader to that earlier post.
So the question becomes practical: can we actually extract meaningful biological knowledge from these models using mechanistic interpretability tools? That is what I spent the last months trying to find out, and the answer is more nuanced than either the optimists or the skeptics would prefer.
2. Brief Note: What Are Single-Cell Foundation Models, and Why Should You Care?
For readers who come from the LLM interpretability side and have not worked with biological data, here is the minimum context you need to follow the rest of this post.
The data. Single-cell RNA sequencing (scRNA-seq) measures the expression levels of thousands of genes in individual cells. Unlike bulk sequencing, which averages over millions of cells and hides all the interesting heterogeneity, single-cell data lets you see that a tissue is composed of distinct cell types and cell states, each with its own gene expression program. Modern datasets contain tens of millions of individually profiled cells across dozens of human tissues.
The models. Single-cell foundation models are transformer architectures trained on these large scRNA-seq corpora using self-supervised objectives, analogous to how LLMs are trained on text. The two main model families I studied are:
scGPT treats each gene as a token and pairs it with an embedding of that gene's expression value, then trains with masked expression prediction: hide some genes' expression values and ask the model to predict them from the remaining context. This is conceptually very close to masked language modeling, with genes playing the role of words and the masked expression values playing the role of the content the model must fill in.
Geneformer takes a different approach: it ranks genes within each cell by their expression level (most expressed first) and then uses the rank-ordered gene sequence as input, training with masked gene prediction. The tokenization is fundamentally different from scGPT (ranks vs. expression values), the training objective is different, and the model scale differs (Geneformer V2-316M vs. scGPT's smaller variants), but both architectures learn to predict gene expression patterns from cellular context.
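To make the difference concrete, here is a toy sketch of the two input formats. This is illustrative code of my own, not the actual preprocessing of either library (both real pipelines do more, e.g. gene filtering and normalization), and the function names are mine.

```python
import numpy as np

def scgpt_style_tokens(gene_ids, expr, n_bins=51):
    """Conceptual scGPT-style input: gene tokens plus binned expression values.
    (Illustrative only; the real scGPT preprocessing has additional steps.)"""
    nonzero = expr[expr > 0]
    edges = np.quantile(nonzero, np.linspace(0, 1, n_bins)) if nonzero.size else np.array([0.0])
    value_tokens = np.digitize(expr, edges)   # discretized expression values
    return gene_ids, value_tokens             # two parallel token streams

def geneformer_style_tokens(gene_ids, expr, max_len=2048):
    """Conceptual Geneformer-style input: genes rank-ordered by expression,
    highest first; the expression magnitudes themselves are discarded."""
    order = np.argsort(-expr)
    ranked = gene_ids[order]
    ranked = ranked[expr[order] > 0]           # keep only detected genes
    return ranked[:max_len]                    # a single sequence of gene tokens

# Toy cell with five genes
genes = np.array([101, 102, 103, 104, 105])
expr = np.array([0.0, 3.2, 0.4, 7.9, 1.1])
print(scgpt_style_tokens(genes, expr))
print(geneformer_style_tokens(genes, expr))
```

The key point is that scGPT keeps expression magnitudes as part of the input, while Geneformer collapses them into a rank ordering before the transformer ever sees the cell.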
What people claim these models can do. The published literature (see, for example, here and here) suggests that these models achieve useful performance on several downstream tasks: classifying cell types, predicting how cells respond to genetic perturbations, and, most relevant for this post, inferring gene regulatory networks (GRNs) from their attention patterns. This last claim is the one I tested most thoroughly, because it is the most mechanistically interpretable claim and the one with the most direct implications for biological knowledge extraction. The idea is simple and appealing: if the model has learned that gene A regulates gene B, then the attention weight from gene A to gene B should be high, and by extracting the full attention matrix, you can recover the regulatory network the model has learned.
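The extraction recipe implied by that claim is simple enough to sketch. Assuming you have already run cells through the model and cached per-cell attention maps for a chosen layer, something like the following (my own illustrative code, not the exact pipeline from any of the cited papers) produces the gene-by-gene edge-score matrix that the rest of this post evaluates:

```python
import numpy as np

def attention_edge_scores(attn_per_cell, gene_index_per_cell):
    """Aggregate per-cell attention maps into a genes x genes edge-score matrix.

    attn_per_cell: list of arrays of shape (heads, seq_len, seq_len),
        one attention map per cell for a chosen layer.
    gene_index_per_cell: list of int arrays mapping sequence positions to
        global gene indices (token order differs between cells).
    """
    n_genes = 1 + max(int(idx.max()) for idx in gene_index_per_cell)
    scores = np.zeros((n_genes, n_genes))
    counts = np.zeros((n_genes, n_genes))
    for attn, idx in zip(attn_per_cell, gene_index_per_cell):
        head_avg = attn.mean(axis=0)                 # average over heads
        scores[np.ix_(idx, idx)] += head_avg
        counts[np.ix_(idx, idx)] += 1.0
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, scores / counts, 0.0)

# edge_scores[i, j] is then read as "evidence that gene i attends to / regulates
# gene j"; thresholding the matrix yields a candidate gene regulatory network.
```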
3. What I Did: The Most Comprehensive Stress-Test of Single-Cell Model Interpretability To Date
The paper I am summarizing here reports, to my knowledge, the most thorough systematic evaluation of mechanistic interpretability for single-cell foundation models published so far. It spans 37 distinct analyses, 153 pre-registered statistical tests, 4 cell types (K562, RPE1, T cells, iPSC neurons), 2 perturbation modalities (CRISPRi gene silencing and CRISPRa gene activation), and 2 model families (scGPT and Geneformer). The full details are on arXiv; here I will focus on the findings that I think are most relevant for the community.
3.1. The evaluation philosophy
A core design principle was that no single test is sufficient to validate or invalidate a mechanistic interpretability claim, because each test addresses a different failure mode and any one of them can miss problems that another catches. I built five interlocking families of tests, and the logic of how they complement each other is worth spelling out, because I think this framework is reusable well beyond my specific setting:
Trivial-baseline comparison asks: "Can a method that requires no model at all achieve the same performance?" If gene-level variance (a property you can compute with a pocket calculator) predicts perturbation responses as well as your fancy attention-derived network, you have not demonstrated that your interpretability method captures anything beyond trivial gene properties. This test catches overconfidence from neglecting simple alternatives.
Conditional incremental-value testing asks: "Given the best simple features, does your interpretability output add anything?" This is more demanding than the first test because it conditions on the simple features rather than just comparing to them. A method can be "significantly above chance" and still add zero incremental value once you control for what was already available.
Expression residualisation and propensity matching asks: "Is your signal actually coming from the thing you think it's coming from, or is it a confound proxy?" This is the biological equivalent of discovering that your "sentiment circuit" is actually a "sentence length detector."
Causal ablation with fidelity diagnostics asks: "Does the model actually use the components that your interpretability method identifies as important?" If your method says "these attention heads encode regulatory knowledge," then removing those heads should degrade the model's performance on tasks that require regulatory knowledge. This is the closest to standard NLP activation patching, but with a critical addition: intervention-fidelity diagnostics that verify the ablation actually changed the model's internal representations. Concretely, this means measuring how much the model's logits or hidden states shift when you zero out a head, because if a head's output was near-zero to begin with, ablating it tells you nothing about whether the model relies on it. A null result from ablation is only informative if you can show the intervention was materially disruptive to the computation passing through that component, and the fidelity check is what separates "the model doesn't need this head" from "your ablation didn't actually do anything."
Cross-context replication asks: "Does this hold up in a different cell type, a different perturbation modality, or a different model?" A result that appears in K562 CRISPRi but vanishes in RPE1 or T cells is a dataset-specific observation.
A result that survives all five families is genuinely robust. A result that fails any one of them has a specific, identifiable weakness. And the convergence of multiple independent tests pointing in the same direction provides stronger evidence than any single test can offer, regardless of how well-powered it is.
3.2. A note on the cautionary nature of these results
I want to be upfront about something: I tried a lot of ideas, and many of the simple ones did not work. The field's implicit narrative has been that attention patterns in biological transformers straightforwardly encode regulatory networks (again, here and here, but also in many other places), and that extracting this information is primarily an engineering challenge (find the right layer, the right aggregation, the right thresholding). What I found instead is that the relationship between attention patterns and biological regulation is far more complex and confound-laden than this narrative suggests.
But I think this negative result is itself highly informative, for two reasons. The first is that it tells the field where not to look, which saves everyone the effort of independently discovering the same dead ends. The second, which I think is more important, is that the systematic framework I built means that when new biological foundation models emerge (and they will, with better architectures, more data, and potentially different training objectives), testing them against this battery of analyses is straightforward rather than requiring reinvention from scratch. The framework accelerates the entire mechanistic interpretability pipeline for this model class, even though many of its current outputs are negative.
3.3. Connections to NLP mechanistic interpretability
Before presenting the specific findings, it is worth noting that several of the phenomena I document have clear parallels in the NLP mechanistic interpretability literature, though the biological setting allows me to push certain questions further than is currently possible with language models. The finding that attention patterns do not reliably indicate computationally important features echoes long-standing results on attention and explanation, but my causal ablation findings go beyond showing that many heads are prunable: I show that the heads most aligned with known ground truth are the most dispensable, which is a qualitatively stronger negative result. The layer-structured biological representations I find are reminiscent of the classical layer-specialized circuits documented in LLMs (Olsson et al. 2022 on induction heads, Elhage et al. on superposition), but in biology we can validate the content of each layer against independently curated databases of protein interactions and transcriptional regulation, which is a luxury that NLP interpretability researchers do not currently have. So the methodological tools developed here, particularly the incremental-value framework, the non-additivity diagnostics for activation patching, and the confound decomposition battery, could prove useful to people working on interpretability in general.
4. What Works: Positive and Constructive Findings
The negative results get the headlines (and they should, because the "attention as GRN" claim is the one the field has been banking on), but the positive findings are where the constructive path forward begins. These are the things that survived the full stress-testing battery, and I think each of them points toward something real about what these models have learned.
4.1. Attention patterns encode layer-organized biological structure
When I benchmarked Geneformer attention edges against multiple biological reference databases across all 18 layers, protein-protein interaction signal (measured against the STRING database) was strongest at the earliest transformer layer and decreased monotonically with depth. Transcriptional regulation signal (measured against TRRUST, a curated database of transcription factor targets) showed the opposite pattern: it increased with depth and peaked around L15. The cross-layer profiles for these two types of biological signal are anti-correlated, and functional co-annotation signals from pathway databases showed their own distinct depth profiles.
This is interesting, and not just as a biological finding. It means the model has self-organized its layers into a hierarchy that separates different types of biological relationship: physical protein interactions in the early layers, transcriptional regulation in the late layers, with functional pathway associations distributed in between. This is not something the training objective directly incentivizes (the model is just predicting masked gene identities from context), so the layer specialization reflects structure the model discovered on its own.
Critically, this signal survives expression residualisation. When I controlled for pairwise expression similarity (which would remove any signal that was just "these genes are co-expressed, therefore they look related"), 97% of the TRRUST regulatory signal at L15 was retained. So the layer-organized structure is not just a re-encoding of pairwise co-expression in attention-matrix form; it indeed captures something beyond what simple correlation between gene pairs would give you.
4.2. Cell-State Stratified Interpretability (CSSI) as a constructive methodological tool
One of the things I discovered while investigating why attention-based GRN recovery seemed to get worse as you added more cells (which is the opposite of what you would naively expect) is that the problem is not really about "more data makes models worse." The problem is about heterogeneity dilution: when you pool attention patterns across cells in different states (different cell types, different stages of differentiation, different activation states), you average together cell-state-specific regulatory signals that may point in different directions, and the result is a washed-out mess that retains only the regulatory relationships that are universal across all included states.
The solution I developed, Cell-State Stratified Interpretability (CSSI), is conceptually simple: instead of computing attention-derived edge scores across all cells at once, you first cluster cells into relatively homogeneous cell-state groups (using Leiden clustering on the model's own embeddings, so the stratification is informed by what the model itself has learned), compute edge scores within each stratum separately, and then aggregate across strata using max or mean operations. The optimal number of strata in the datasets I tested was around 5-7, which roughly corresponds to the major cell-state subdivisions present in the data.
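Schematically, and with the caveat that this is a simplified sketch rather than the exact implementation (the paper stratifies with Leiden clustering on the model's embeddings; the sketch substitutes KMeans to stay self-contained, and `score_edges_for_cells` stands in for whatever edge-scoring procedure you are using, such as the attention aggregation shown earlier restricted to a subset of cells):

```python
import numpy as np
from sklearn.cluster import KMeans

def cssi_edge_scores(cell_embeddings, score_edges_for_cells, n_strata=6, agg="max"):
    """Cell-State Stratified Interpretability, schematically.

    cell_embeddings: (n_cells, d) array of model-derived cell embeddings.
    score_edges_for_cells: callable taking an array of cell indices and
        returning a (genes x genes) edge-score matrix computed on those cells.
    """
    labels = KMeans(n_clusters=n_strata, n_init=10, random_state=0).fit_predict(cell_embeddings)
    per_stratum = []
    for s in range(n_strata):
        cells_in_stratum = np.where(labels == s)[0]
        if cells_in_stratum.size < 20:        # skip strata too small to score reliably
            continue
        per_stratum.append(score_edges_for_cells(cells_in_stratum))
    stack = np.stack(per_stratum)
    return stack.max(axis=0) if agg == "max" else stack.mean(axis=0)
```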
The results are substantial: CSSI improves TRRUST regulatory edge recovery by up to 1.85-fold compared to unstratified computation. Null tests with random strata assignments confirm that the improvement is not an artifact of the stratification procedure inflating false positives; it specifically requires biologically meaningful strata. In synthetic experiments where I controlled the ground truth, CSSI with oracle labels maintained F1 ≥ 0.90 across all cell count configurations, while pooled inference dropped from ~0.85 at 200 cells to ~0.51 at 1,000 cells.
4.3. Context-dependent attention-correlation relationships reveal genuine learning beyond co-expression
One of the strongest pieces of evidence that these models have learned something real, rather than just repackaging correlation statistics in a more expensive way, comes from comparing how attention edges and correlation edges perform across different cell types and perturbation modalities:
In K562 cells under CRISPRi (gene silencing), attention and correlation are statistically indistinguishable for predicting perturbation targets. In K562 cells under CRISPRa (gene activation), attention actually performs worse than correlation. In RPE1 cells under CRISPRi, attention significantly outperforms correlation. In iPSC-derived neurons, attention trends better than correlation but the sample is smaller.
If attention were simply a re-encoding of co-expression, you would expect a uniform relationship across contexts: attention and correlation would always perform similarly. The fact that the relationship is context-dependent, and that it flips direction depending on cell type and perturbation modality, means the models have learned something that varies between biological contexts in a way that simple co-expression does not. Whether that something is causal regulatory structure, more complex statistical dependencies, or some other biologically meaningful feature is a question the current evidence cannot fully resolve, but the context-dependence itself is a signal that the models are doing more than just memorizing gene-gene correlations.
(I should note that the RPE1 advantage, despite being statistically robust, turns out to decompose into confound structure when subjected to the full battery, as I discuss in Section 5. But the existence of context-dependence across all four systems is not explained by confounding, and remains a genuine positive finding about the models' representational capacity.)
4.4. Some transcription factors show robust pairwise regulatory signal in attention edges
The aggregate picture (which I discuss more in Section 5) is that attention-derived edges add zero incremental value over gene-level features for predicting perturbation responses. But this aggregate hides real heterogeneity at the level of individual transcription factors. When I performed per-TF bootstrap analyses, 7 out of 18 evaluable transcription factors showed robust edge-level signal, with a global AUROC 95% confidence interval of [0.71, 0.77]. There was also a suggestive trend that "master regulators" (transcription factors known to control broad developmental programs) showed higher AUROC than other TF categories, though this trend did not survive multiple testing correction given the small sample of evaluable TFs.
This matters because it suggests the blanket conclusion "attention edges are useless for regulatory inference" is too strong as a claim about all regulatory relationships. For some transcription factors, operating in some contexts, attention-derived edges may genuinely capture pairwise regulatory information. Identifying which TFs and which contexts is a direction for future work that could turn the current vague hope into a targeted extraction strategy.
4.5. Cross-species conservation reveals biologically meaningful structure in edge scores
As a separate validation axis, I compared correlation-based TF-target edge scores computed independently in human and mouse lung tissue, matched via one-to-one orthologs. The global conservation was striking: Spearman ρ = 0.743 across 25,876 matched edges, p < 10⁻³⁰⁰, with 88.6% sign agreement and top-k overlaps enriched by 8× to 484× over random expectation.
But what makes this finding informative rather than just impressive is that the conservation is not uniform across transcription factors. Lineage-specifying TFs (those that define cell identity, like NKX2-1 for lung epithelium) show near-perfect cross-species transfer, while signaling-responsive TFs (those that respond to environmental stimuli, like STAT1 or HIF1A) transfer poorly. This pattern makes perfect biological sense: lineage specification is deeply conserved across mammalian evolution, while signal-responsive regulation adapts to species-specific environmental niches. The fact that edge scores recapitulate this known biological pattern, and that the recapitulation is TF-class-dependent in the predicted direction, provides converging evidence that these scores capture real biological structure, even though they may not capture it in the causal form that the strongest interpretability claims require.
5. What Doesn't Work: The Key Negative Findings and Why They Matter
This is where the stress-testing framework earns its keep. Each negative finding survived multiple robustness checks and cross-context replications, and together they present a coherent picture that is hard to dismiss as artifact or bad luck.
5.1. Gene-level baselines dominate perturbation prediction, and you don't need a foundation model for that
This is the single most important negative finding, and it reframes everything else. When I tested how well different features predict which genes will respond to a CRISPR perturbation, the ranking was:
Gene-level variance alone: AUROC = 0.881. Mean expression: 0.841. Dropout rate: 0.808. Attention-derived pairwise edges: ~0.70. Correlation-derived pairwise edges: ~0.70.
All comparisons with the gene-level baselines are significant at p < 10⁻¹². The implication is that most of what looks like "regulatory signal" in pairwise edge scores, whether derived from attention or from correlation, is actually reflecting univariate gene properties: genes that are highly variable, highly expressed, or frequently detected are more likely to be differentially expressed in response to any perturbation, and pairwise edges are largely tracking this property rather than specific regulatory relationships.
It is the most boring possible explanation for the observed performance, and it explains the bulk of the variance.
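For concreteness, the "no model required" baseline is literally this (variable names are illustrative, not the paper's code):

```python
from sklearn.metrics import roc_auc_score

def baseline_auroc(gene_variance, responded):
    """AUROC of a no-model baseline: rank candidate target genes by their
    expression variance alone and score against whether each gene was
    differentially expressed under a given perturbation (binary labels)."""
    return roc_auc_score(responded, gene_variance)
```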
5.2. Pairwise edge scores add literally zero incremental value over gene-level features
The gene-level baseline dominance could in principle coexist with genuine incremental value from pairwise edges: maybe edges add a small amount of unique information on top of what gene-level features provide. I tested this with a conditional incremental-value analysis on 559,720 observation pairs, with statistical power exceeding 99% to detect ΔAUROC = 0.005.
The result: adding attention edges to gene-level features yields ΔAUROC = −0.0004. Adding correlation edges yields ΔAUROC = −0.002. These are essentially exact zeros, and they persist across all tested generalisation protocols (cross-gene splits, cross-perturbation splits, joint splits), both linear and nonlinear models (logistic regression and GBDT), and multiple metrics (AUROC, AUPRC, top-k recall). The same pattern replicates independently in RPE1 cells, where gene-level features alone achieve AUROC = 0.942 and adding attention edges yields ΔAUROC = +0.0001.
The supplement exhaustively tests this null against every objection I could think of: different metrics, different model classes, different split designs, different feature encodings. The biggest improvement found anywhere was ΔAUPRC ≈ +0.009 under one specific parameterization, which is less than 4% relative improvement and does not survive correction. Whatever biological structure attention edges contain, it is completely redundant with gene-level features for predicting what happens when you perturb genes, at least under the evaluation protocols I tested.
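To make the protocol concrete, here is a stripped-down sketch of the incremental-value comparison with cross-gene splits enforced via grouping. It is a simplification of the actual analysis (which also uses GBDT, multiple metrics, and several split designs), and the variable names are mine:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def incremental_auroc(gene_feats, edge_feats, y, target_gene_ids):
    """Delta-AUROC of edge features on top of gene-level features.

    gene_feats: (n_pairs, k) gene-level features of the candidate target gene
        (variance, mean expression, dropout rate, ...).
    edge_feats: (n_pairs, 1) attention- or correlation-derived edge score
        between the perturbed gene and the candidate target.
    y: (n_pairs,) 1 if the target gene responded to the perturbation.
    target_gene_ids: group labels so that splits are cross-gene.
    """
    aucs_base, aucs_full = [], []
    for tr, te in GroupKFold(n_splits=5).split(gene_feats, y, groups=target_gene_ids):
        base = LogisticRegression(max_iter=1000).fit(gene_feats[tr], y[tr])
        full = LogisticRegression(max_iter=1000).fit(
            np.hstack([gene_feats[tr], edge_feats[tr]]), y[tr])
        aucs_base.append(roc_auc_score(y[te], base.predict_proba(gene_feats[te])[:, 1]))
        aucs_full.append(roc_auc_score(
            y[te], full.predict_proba(np.hstack([gene_feats[te], edge_feats[te]]))[:, 1]))
    return np.mean(aucs_full) - np.mean(aucs_base)
```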
5.3. Causal ablation reveals that "regulatory" heads are the most dispensable ones
This result is, in my opinion, the most striking finding in the entire paper from the standpoint of mechanistic interpretability methodology.
Geneformer V2-316M has 324 attention heads across 18 layers. I ranked heads by their alignment with known regulatory relationships (TRRUST database) and then ablated them. If attention patterns at regulatory-aligned heads are where the model stores and uses regulatory knowledge, removing those heads should degrade the model's ability to predict perturbation responses.
What actually happened: ablating the top-5, top-10, top-20, or top-50 TRRUST-ranked heads produced zero significant degradation in perturbation-prediction performance. Meanwhile, ablating 20 randomly selected heads caused a significant performance drop. I also tested uniform attention replacement (forcing attention weights to 1/n while preserving value projections) on the TRRUST-ranked heads, with no degradation. I tested MLP pathway ablation in the purported "regulatory" layers: still no degradation, while MLP ablation in random layers could cause significant drops.
Crucially, intervention-fidelity diagnostics confirmed that these ablations were actually changing the model's internal representations: TRRUST-ranked heads produce 23× larger logit perturbation when ablated compared to random heads. The interventions were material; the model just did not rely on those heads for perturbation prediction. The computation that matters for predicting what happens when you knock down a gene appears to live in the value/FFN pathway, distributed across many components in a redundant fashion, rather than in the learnable attention patterns that interpretability pipelines extract.
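For readers who want to run this kind of check on their own models: the ablation-plus-fidelity logic can be sketched with standard PyTorch forward hooks. The module path below assumes a HuggingFace BERT-style encoder configured with output_hidden_states enabled; the exact attribute names depend on the implementation, so treat this as a template rather than drop-in code for Geneformer or scGPT.

```python
import torch

def ablate_head(attn_output_dense, head, head_dim):
    """Zero one attention head's contribution by masking its slice of the
    concatenated head outputs before the attention output projection."""
    def pre_hook(module, inputs):
        (x,) = inputs                                   # (..., n_heads * head_dim)
        x = x.clone()
        x[..., head * head_dim:(head + 1) * head_dim] = 0.0
        return (x,)
    return attn_output_dense.register_forward_pre_hook(pre_hook)

@torch.no_grad()
def ablation_with_fidelity(model, batch, layer_module, head, head_dim):
    """Returns (logit shift, representation shift) for ablating one head.
    A near-zero representation shift means the 'ablation' did not actually
    change the computation, so a null behavioural effect is uninformative.
    Assumes the model returns .logits and .hidden_states (output_hidden_states=True)."""
    clean = model(**batch)
    handle = ablate_head(layer_module.attention.output.dense, head, head_dim)
    try:
        ablated = model(**batch)
    finally:
        handle.remove()
    logit_shift = (ablated.logits - clean.logits).abs().mean().item()
    hidden_shift = (ablated.hidden_states[-1] - clean.hidden_states[-1]).norm().item()
    return logit_shift, hidden_shift
```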
I also tested the obvious "fix": if the relevant computation is in the value pathway rather than the attention pattern, maybe we should extract edge scores from the context layer (softmax(QK^T)·V) using value-weighted cosine similarity. This does not help. Value-weighted scores actually underperform raw attention and correlation, and adding them to gene-level features slightly hurts incremental value. The context vectors appear to represent a blended "information receipt" signal rather than direct pairwise coupling, and whatever perturbation-predictive computation the model performs is distributed in a way that no simple pairwise score extraction can recover.
5.4. Do these models know about gene regulation at all, or did we just fail to extract it?
The negative results above establish that I could not extract meaningful gene regulatory network information from attention patterns using the methods I tested. But this leaves a crucial epistemic question open: are we looking at an extraction failure (the knowledge is in the model somewhere, but not in the attention weights and not in a form our methods can reach), or a knowledge absence (the models simply never learned causal regulatory relationships in the first place)? These are very different claims, and the second is substantially stronger than the first.
One natural way to probe this distinction is through surface capabilities. If a model can accurately predict what happens when you knock down a gene, then it must have learned something about gene regulation internally, regardless of whether that knowledge is accessible through attention pattern analysis. Surface capabilities provide a minimum baseline for internal knowledge: the model knows at least as much as its best task performance implies, even if our interpretability tools cannot locate where that knowledge lives.
Unfortunately, the evidence on surface capabilities of single-cell foundation models is quite conflicting, and the field is in the middle of a heated debate about it. On one hand, the original papers make strong claims: Theodoris et al. (2023) reported that Geneformer's in silico perturbation approach identified a novel transcription factor in cardiomyocytes that was experimentally validated, and scGPT (Cui et al., 2024) claimed state-of-the-art performance on perturbation prediction, cell type annotation, and gene network inference after fine-tuning. These results suggest that the models have learned something biologically meaningful during pretraining.
On the other hand, a growing body of independent benchmarking work paints a much more skeptical picture. Ahlmann-Eltze et al. compared five foundation models against deliberately simple linear baselines for perturbation effect prediction and found that none of the foundation models outperformed the baselines, concluding that pretraining on atlas data provided "only a small benefit over random embeddings." Csendes et al. found that even the simplest baseline of taking the mean of training examples outperformed scGPT and scFoundation. Wenteler et al. showed that both scGPT and Geneformer perform worse than selecting highly variable genes and using established methods like Harmony or scVI in zero-shot cell type clustering. Bendidi et al. ran a comprehensive perturbation-oriented benchmark and concluded that foundation models show competitive performance only in batch effect reduction, where even random embeddings achieve near-optimal results. Perhaps most provocatively, Chen & Zou showed that GenePT, which simply uses ChatGPT text embeddings of gene descriptions from NCBI (containing zero expression data), achieves comparable or better performance than Geneformer and scGPT on many of the same downstream tasks!
A consistent pattern in this debate is that the original model papers evaluate primarily with fine-tuning, while independent benchmarks emphasize zero-shot performance. Fine-tuned models can look strong, but it becomes difficult to disentangle whether the strong performance comes from pretrained representations or from the fine-tuning data itself. Zero-shot evaluation is arguably the fairer test of what pretraining actually learned, and this is precisely where the models tend to struggle.
What does this mean for interpreting my results? The honest answer is that I cannot fully resolve the extraction-vs.-absence question with the data we have. Both model families converge to similar near-random unstratified GRN recovery despite fundamentally different architectures (gene-token vs. rank-based tokenization), different training objectives, and different scales, which suggests this is not a model-specific quirk. But the convergence is consistent with both interpretations: either both architectures fail to learn causal regulation from observational expression data (because co-expression is the dominant signal and the training objectives do not specifically incentivize causal structure), or both architectures learn it but encode it in representations that neither attention-based nor simple pairwise extraction methods can reach. The mixed evidence on surface capabilities does not decisively resolve this in either direction, though the weight of the independent benchmarking evidence leans toward the more pessimistic interpretation for current-generation models. The next obvious question is, will stacking more layers help?
6. What the Biological Setting Reveals About Activation Patching
Most of the findings in Sections 4 and 5 are primarily about biology. This section is instead about a methodological result concerning activation patching itself, one that is, as far as I know, novel and directly relevant to anyone using this technique on any transformer model, biological or otherwise.
6.1. The non-additivity problem is formal, quantifiable, and large
Activation patching (sometimes called causal mediation analysis) is one of the workhorse tools of current mechanistic interpretability. The standard workflow is: intervene on one component at a time (a head, an MLP block, a residual stream position), measure the effect on some downstream behavior, and rank components by their individual effects. The components with the largest effects are declared to be "the circuit" responsible for that behavior.
This workflow implicitly assumes additivity: that the effect of the full model is well-approximated by the sum of individual component effects. When this assumption holds, single-component rankings are meaningful. When it fails, they can be systematically wrong in ways that are not just noisy but structurally biased.
The mech interp community is well aware that interactions can matter in principle. Nanda explicitly notes that attribution patching "will neglect any interaction terms, and so will break when the interaction terms are a significant part of what's going on." Heimersheim & Nanda discuss backup heads and the Hydra effect as specific instances of non-additive behavior, where ablating one component causes others to compensate in ways that confound single-component attribution. Makelov et al. demonstrate a related failure mode at the subspace level, showing that patching can activate dormant parallel pathways that produce illusory interpretability signals. The qualitative concern is not new, and I want to credit the people who have been raising it. What has been missing, to my knowledge, is (a) a formal framework for quantifying how much the standard single-component workflow's rankings are biased by interactions, (b) empirical measurement of how large that bias actually is in a real model rather than a constructed example, and (c) certificates for which pairwise rankings survive the observed non-additivity. That is what I provided.
I formalize the bias using a decomposition involving Möbius interaction coefficients. The key quantity is the relationship between single-component mediation estimates and Shapley values (which are interaction-aware by construction). Single-component estimates equal Shapley values only when all interaction terms vanish; otherwise, the discrepancy is a structured function of the interaction landscape, and it can push the ranking in a consistent wrong direction rather than just adding noise.
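In standard notation (this is the textbook Möbius/Shapley relationship, written in my own symbols; the paper's exact formalization may differ in detail): let $v(S)$ be the behavioural effect of patching the set of components $S$, with Möbius (Harsanyi) coefficients $m(T)$ defined by

$$v(S) = \sum_{T \subseteq S} m(T), \qquad \phi_i = \sum_{T \ni i} \frac{m(T)}{|T|}.$$

A single-component patching estimate is $\hat{\Delta}_i = v(\{i\}) - v(\varnothing) = m(\{i\})$, so its gap to the interaction-aware Shapley value is

$$\phi_i - \hat{\Delta}_i = \sum_{T \ni i,\; |T| \ge 2} \frac{m(T)}{|T|},$$

which vanishes only when every higher-order interaction coefficient involving $i$ is zero. Because this gap is a structured sum over the interaction landscape rather than zero-mean noise, it can reorder components consistently instead of merely blurring the ranking.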
The empirical question is whether this matters in practice. In the biological transformers I studied, the answer is clearly yes. Using frozen cross-tissue mediation archives, I computed lower bounds on aggregate non-additivity (the residual between total effect and the sum of individual component effects, adjusted for measurement uncertainty). In 10 of 16 run-pairs, this lower bound was positive, meaning the observed non-additivity exceeds what measurement noise alone could explain. The median lower-bound ratio relative to the total effect was 0.725, meaning that in the median case interactions account for at least roughly 72% of the total effect, a substantial fraction of the overall model behavior.
6.2. Ranking certificates collapse under structural bias assumptions
The most practically concerning result is not just that non-additivity exists, but what it does to the reliability of component rankings. I introduced "ranking certificates" that ask: given the observed level of non-additivity, what fraction of pairwise comparisons between components (e.g., "head A matters more than head B") can we certify as robust to interaction-induced bias?
Under the structural-bias assumptions informed by the empirical non-additivity measurements, the fraction of certifiably correct pairwise rankings collapses by an order of magnitude or more compared to what the single-component estimates naively suggest. In concrete terms: if you rank 50 heads by their individual activation patching effects and declare the ranking meaningful, the certification analysis suggests that only a small fraction of the pairwise orderings in that ranking are robust to interaction effects. The rest could be wrong, and wrong in a way that is invisible to the standard workflow because the standard workflow does not check for it.
6.3. What this means for mech interp practice
I have demonstrated the non-additivity bias and its consequences in biological transformers with 316 million parameters. I have not demonstrated it in GPT-2, Llama, or any other language model, and the magnitude of the effect could be different in those architectures. The formal framework applies to any transformer (it is architecture-agnostic), but the empirical severity is an open question for LLMs.
That said, I think the results warrant concrete changes to standard practice for anyone doing activation patching or similar single-component mediation analysis:
First, report the residual non-additivity. This is the gap between the total effect of a multi-component intervention and the sum of corresponding single-component effects. It is cheap to compute (you need one additional intervention beyond what you already do) and it directly tells you how much of the model's behavior lives in interactions rather than in individual components. If this residual is large, your single-component rankings are unreliable, and you should know that before you build a mechanistic story on top of them.
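As a concrete illustration of how cheap this first check is (names are mine; `effect_fn` stands in for whatever behavioural metric you already compute after an intervention, with `effect_fn([])` equal to zero by construction):

```python
import numpy as np

def residual_nonadditivity(effect_fn, components):
    """Gap between the joint effect of ablating a set of components and the
    sum of their single-component effects.

    effect_fn: callable mapping a list of components to a scalar behavioural
        effect (e.g. drop in perturbation-prediction AUROC).
    Returns the absolute residual and its size relative to the joint effect.
    """
    joint = effect_fn(list(components))
    singles = np.array([effect_fn([c]) for c in components])
    residual = joint - singles.sum()
    return residual, residual / (abs(joint) + 1e-12)
```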
Second, compute ranking certificates for your top-ranked components. If you are going to claim "these are the most important heads for behavior X," you should check whether that ranking is robust to the level of non-additivity you actually observe. If only 10% of pairwise orderings survive certification, your "top 5 heads" may not actually be the top 5 heads.
Third, for your most important mechanistic claims, consider using interaction-aware alternatives like Shapley-based decompositions. These are more expensive (combinatorially so in the worst case, though sampling-based approximations exist), but they handle interactions correctly by construction. The synthetic validation in my supplement shows that Shapley-value estimates recover true interaction rankings with approximately 91% improvement in rank correlation compared to single-component estimates, which suggests the additional cost is worth it when the claim matters.
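For this third recommendation, the standard sampling-based approximation is permutation sampling over component orderings. A minimal sketch, again with `effect_fn` as a stand-in for your intervention-plus-metric pipeline:

```python
import numpy as np

def sampled_shapley(effect_fn, components, n_permutations=200, seed=0):
    """Monte-Carlo Shapley estimates via random permutations.

    effect_fn: callable mapping a set of ablated components to a scalar
        behavioural effect. Each component's Shapley value is estimated as
        the average of its marginal contribution over random orderings.
    """
    rng = np.random.default_rng(seed)
    components = list(components)
    values = {c: 0.0 for c in components}
    for _ in range(n_permutations):
        order = rng.permutation(len(components))
        ablated, prev = [], effect_fn([])
        for j in order:
            ablated.append(components[j])
            curr = effect_fn(ablated)
            values[components[j]] += (curr - prev) / n_permutations
            prev = curr
    return values
```

The budget is roughly n_permutations × n_components metric evaluations, which is expensive but embarrassingly parallel, and far cheaper than exact Shapley computation.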
The broader methodological point is that "patch one component, measure effect, rank components" feels like a clean experimental design, and it is, as long as additivity holds. But additivity is an empirical property of the specific model and behavior you are studying, not a logical guarantee, and in the systems I studied, it fails badly enough to undermine the rankings it produces. I suspect this is not unique to biological transformers.
6.4. A note on metric sensitivity across scales
One additional observation that may be useful, though it is less novel than the non-additivity result: I found that the same underlying attention scores can show degrading top-K F1 with more data (all 9 tier×seed pairs, sign test p = 0.002) and improving AUROC with more data (mean 0.858 → 0.925 → 0.934) simultaneously. This reflects the difference between evaluating the extreme tail of a ranking under sparse references versus evaluating the full ranking. But it means that claims about how "interpretability quality scales with data/compute/parameters" are only meaningful if you specify which metric you are tracking and why, because different metrics can give exactly opposite answers about the same underlying scores.
7. Next Steps: Toward a Program for Knowledge Extraction from Biological Foundation Models
The negative results in the current paper close off some paths but open others. If you accept the evidence that attention-based GRN extraction does not work, the question becomes: what might? This section outlines what I think are the most promising directions, ordered roughly from most to least concretely specified.
7.1. Intervention-aware pretraining
The most direct response to the optimization landscape concern raised in Section 5.4 is to change the training data. Current single-cell foundation models are pretrained on observational expression profiles, where co-expression is the dominant statistical signal and causal regulatory relationships are a much weaker, sparser, and noisier signal that the training objective does not specifically incentivize. If you want models that learn causal regulation, the most straightforward path is to train them on data that contains causal information.
Concretely, this means pretraining on (or at least fine-tuning with) perturbation experiments: Perturb-seq, CRISPRi/CRISPRa screens, and similar interventional datasets where you observe what happens when you knock a gene down and can therefore learn which genes are causally upstream of which others.
The challenge is scale. Perturbation datasets are orders of magnitude smaller than the observational atlases used for pretraining (tens of thousands of perturbations versus tens of millions of cells). Whether this is enough data to learn robust regulatory representations, or whether the perturbation signal will be drowned out by the much larger observational pretraining corpus, is an open empirical question, but I think my other research on scaling laws for biological foundation models may shed some light on it.
7.2. Geometric and manifold-based interpretability
One of the most important recent developments in mechanistic interpretability, and one that I did not explore in my paper, is the recognition that models encode complex knowledge not as discrete pairwise relationships but as geometric structure in their representation spaces. This is directly relevant to the failure modes documented in this paper.
The most relevant example comes from Goodfire's work on Evo 2, a DNA foundation model trained on over 9 trillion nucleotides. Using sparse autoencoders on residual stream activations, they discovered that the phylogenetic tree of life is encoded as a curved manifold in the model's learned feature space: species relationships correspond to geodesic distances along this manifold, with the overall structure organized around a roughly 10-dimensional flat representation overlaid with higher-curvature deviations that capture additional biological properties. This is, to my knowledge, one of the most complex natural manifolds yet characterized in a foundation model, and crucially, it is a biological foundation model where the extracted knowledge was validated against known ground truth (established phylogenies). This is exactly the kind of success story that the single-cell interpretability field needs but does not yet have.
The methodological lesson for single-cell models is pointed: if gene regulatory knowledge is encoded geometrically in the residual stream (as manifolds, subspaces, or curved representations) rather than as discrete pairwise relationships in attention matrices, then no amount of sophisticated attention extraction will find it, because you are looking in the wrong representational format entirely.
This connects to a broader trend in the interpretability community. The linear representation hypothesis (that features correspond to directions in activation space) is being supplemented by the recognition that many important features live on nonlinear manifolds: circles for days of the week, hierarchical trees for taxonomic relationships, tori for periodic quantities, and more complex structures. Goodfire's own researchers note that "manifolds seem to be important types of representations, and ones that are not well-captured by current methods like sparse autoencoders," which suggests that even SAEs, the current dominant tool, may need manifold-aware extensions to fully characterize what these models have learned.
A concrete next experiment would be to train SAEs on residual stream activations of scGPT or Geneformer, look for geometric structures that correlate with known regulatory relationships, and test whether regulatory information that is invisible in attention patterns becomes visible in the learned feature space. If it does, the implication would be that the models have learned more about gene regulation than the attention-based methods could reveal. If it does not, that would strengthen the case for intervention-aware pretraining as the necessary next step.
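A minimal version of that experiment is not much code. The sketch below is a vanilla ReLU plus L1 sparse autoencoder in PyTorch, assuming you have already cached residual-stream activations (e.g. at layer 15) as a float tensor; real SAE training involves more care (dead-feature resampling, normalization, much larger dictionaries), so treat this as a starting point, not a recipe.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE for residual-stream activations (d_model -> d_dict)."""
    def __init__(self, d_model, d_dict, l1_coeff=1e-3):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, x):
        f = torch.relu(self.enc(x))          # sparse feature activations
        x_hat = self.dec(f)
        recon = ((x_hat - x) ** 2).mean()
        sparsity = f.abs().mean()
        return x_hat, f, recon + self.l1_coeff * sparsity

def train_sae(activations, d_dict=8192, epochs=5, lr=1e-3, batch=4096):
    """activations: (n_tokens, d_model) float tensor of residual-stream
    vectors collected while running cells through the model."""
    sae = SparseAutoencoder(activations.shape[1], d_dict)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(activations.shape[0])
        for i in range(0, len(perm), batch):
            x = activations[perm[i:i + batch]]
            opt.zero_grad()
            _, _, loss = sae(x)
            loss.backward()
            opt.step()
    return sae
```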
7.3. Probing residual streams: from aggregate statistics to feature-level analysis
My paper's methodology is primarily macro-level: aggregate statistics across many TF-target pairs, summary measures of head importance, average AUROC across perturbation conditions. This was deliberate (I wanted statistically robust claims with controlled multiple testing), but it means the analyses are inherently insensitive to fine-grained structure that might exist at the level of individual features or small groups of components.
The natural next step is to apply the standard NLP probing toolkit to single-cell foundation models. Train linear probes on residual stream representations at each layer to predict specific regulatory relationships (e.g., "is gene A a direct target of transcription factor B?"). If the probe succeeds where attention extraction fails, it would localize regulatory knowledge to specific layers' representations without requiring that it be readable from attention patterns. If the probe also fails, that is much stronger evidence for knowledge absence rather than mere extraction failure.
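Concretely, the simplest version of this probe looks like the following (my own sketch; the gene representations would be, for example, mean-pooled residual-stream vectors for each gene at the chosen layer, and in a real analysis the cross-validation splits should be grouped by gene to prevent leakage between related pairs):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_layer_for_regulation(gene_reprs, tf_idx, target_idx, is_regulated):
    """Linear probe: can layer-l gene representations predict known regulatory edges?

    gene_reprs: (n_genes, d) representation of each gene at a chosen layer,
        averaged over cells.
    tf_idx, target_idx: (n_pairs,) indices of candidate TF-target pairs.
    is_regulated: (n_pairs,) 1 if the pair is a known regulatory edge.
    """
    X = np.hstack([gene_reprs[tf_idx], gene_reprs[target_idx]])
    probe = LogisticRegression(max_iter=2000)
    return cross_val_score(probe, X, is_regulated, cv=5, scoring="roc_auc")
```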
Beyond linear probes, the SAE-based feature discovery approach discussed in 7.2 could yield individual interpretable features that correspond to specific regulatory programs or pathway activations. If a sparse autoencoder trained on layer 15 residual streams (where my paper found peak TRRUST alignment in attention) produces features whose activation patterns correlate with known regulatory cascades, that would be a concrete positive result pointing toward the kind of mechanistic understanding the field is seeking.
One important caveat from my paper's own findings: the causal ablation results show that perturbation-predictive computation is distributed across many components in a redundant fashion rather than localized in identifiable circuit components. When ablating the heads most aligned with regulatory ground truth produces zero degradation while random ablation causes significant degradation, this suggests there may not be a clean "regulatory circuit" to find. Fine-grained circuit discovery tools work best when the computation is localized and modular; if it is genuinely distributed and redundant, as the evidence suggests, then even sophisticated circuit analysis may not produce the kind of clean mechanistic story we would like. The honest conclusion might be that these models perform regulatory-relevant computation through distributed, redundant representations that resist clean decomposition, which would be an important finding in its own right even if it is less satisfying than a circuit diagram.
7.4. Hybrid architectures, CSSI, and conformal uncertainty
Two shorter-term practical directions deserve mention, both of which build directly on infrastructure from my paper.
First, hybrid architectures that use foundation model embeddings as inputs to dedicated GRN inference modules rather than trying to extract edges from attention. The idea is to take the residual stream representations that the models learn (which clearly contain biological structure, as demonstrated by the layer-organized findings in Section 4) and feed them into purpose-built GRN inference algorithms as enriched gene features, rather than interpreting the attention matrix itself as a gene regulatory network. This sidesteps the attention extraction problem entirely while still leveraging whatever biological knowledge the foundation model has encoded during pretraining. Several GRN inference methods already accept gene embeddings as inputs (GEARS being a prominent example), and foundation model embeddings could serve as a drop-in upgrade over existing gene embedding approaches.
Second, the CSSI framework showed improvements of up to 1.85× in GRN recovery. CSSI could be extended with conformal prediction to provide confidence sets rather than point estimates: instead of extracting a single ranked list of regulatory edges, you would get a set of edges that is certified to contain the true regulatory relationships at a specified confidence level. Conformal prediction is well-suited to this because it provides finite-sample coverage guarantees without distributional assumptions, which is important in a domain where we do not know the distribution of regulatory edge scores. The combination of CSSI (to reduce cell-state heterogeneity) with conformal uncertainty quantification (to provide calibrated confidence) could produce "certified edge sets" that are smaller and more reliable than current approaches, even if the underlying signal is weaker than what the field originally hoped for.
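One simple way the conformal piece could look is a split-conformal threshold calibrated on held-out known regulatory edges: under exchangeability between calibration edges and new true edges, the returned set covers true edges at roughly the nominal level, although it says nothing about precision, and the exchangeability assumption is doing real work here. This is my own sketch of one possible construction, not the design from the paper:

```python
import numpy as np

def conformal_edge_set(edge_scores, calibration_scores, alpha=0.1):
    """Return edges whose score clears a split-conformal threshold.

    calibration_scores: scores of held-out known true regulatory edges.
    With exchangeability, a new true edge exceeds the threshold with
    probability at least 1 - alpha, so the returned set covers true edges
    at that level (it says nothing directly about precision).
    """
    n = len(calibration_scores)
    k = int(np.floor(alpha * (n + 1)))      # finite-sample-adjusted lower quantile
    if k == 0:
        threshold = -np.inf                 # too little calibration data for this alpha
    else:
        threshold = np.sort(calibration_scores)[k - 1]
    rows, cols = np.where(edge_scores >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```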
7.5. What this suggests for the broader interpretability-for-biology agenda
Stepping back from the specific technical directions, I think the most important lesson from this work is about the value of systematic stress-testing before building on interpretability claims.
The "attention as GRN" idea in single-cell biology was not unreasonable. There were good theoretical reasons to think it might work (attention patterns represent pairwise gene relationships, regulatory networks are pairwise gene relationships, the models clearly learn biological structure). But it failed at every level that matters for actual biological utility. The positive results (layer structure, context dependence, per-TF heterogeneity) survived the same battery, which gives me much more confidence that they point toward something real.
8. Conclusion
This paper started as an attempt to extract gene regulatory networks from single-cell foundation models and ended as a methodological argument about how to do mechanistic interpretability honestly. The specific biological results matter for the computational biology community, but I think the broader lessons are relevant to anyone working on mechanistic interpretability in any domain.
I want to close with a pitch: if you like mechanistic interpretability, consider working on biological foundation models instead.
Beyond the methodological advantages, biological interpretability is, in my view, both more tractable and less dangerous than frontier LLM interpretability. The models are smaller (hundreds of millions of parameters rather than hundreds of billions), the input domain is more constrained (gene expression profiles rather than arbitrary natural language), and the knowledge you are trying to extract is better defined (regulatory networks, pathway activations, cell state transitions). You are not probing a system that might be strategically deceiving you, and the knowledge you extract has direct applications in drug discovery and disease understanding rather than in capability amplification. And I still genuinely believe there is a non-negligible chance that we can push biology far enough in the remaining time to amplify human intelligence.