Epistemic status: I’m using a single clinical case study as a running example to illustrate three empirical aspects of cognition that are well-documented but rarely used together. The point is not that this case study proves anything, but to build an intuition that I then connect to more systematic empirical studies later.
Content warning: Anesthesia, quotes from the patient can be read as body horror.
LLM use: I have used LLMs for a) researching prior work and other sources, b) summarizing and reviewing, c) generating the comics and code for one of the graphics, and d) coming up with structures to make the dry topic more approachable, including finding the case study to illustrate the parameters. All LLM-generated sentences that made it into this document have been heavily rewritten.
Induction
A 33-year-old woman voluntarily undergoes a rhinoplasty (a surgical procedure to reshape the nose) under general anesthesia[1]. The intended and expected course for the patient is induction of anesthesia followed by "waking up" in the recovery room, with no reportable experience during the operation.
(comic generated with ChatGPT 5.2 to illustrate a normal anesthesia procedure)
In the case study, that hard cut fails.
The case report summarizes: “During the operation, she became aware that she was awake.” But this summary assumes an understanding of “awake” that glosses over a perceptual asymmetry: some parts of experience can return while most don't. Instead, there may be an inability to move (as in sleep paralysis), incoherent experienced content (as in fever dreams), impossibilities (like flying in lucid dreaming), and, especially, difficulty communicating (clear internal speech but unintelligible sleep talking).
Partial Wakeup
The case report states: "She heard the conversation among the surgical team members and felt pressure on bone in her nose, but she did not feel pain." Note these two deviations from normal experience:
Auditory content returns but without a richly constructed visual scene.
Somatic content returns selectively, feeling pressure but not pain.
Her internal experience is there, but narrow.
Urgency
The case report continues: "The patient also felt that the breathing tube was pushed up against the inside of her throat, impeding her ability to breathe."
Imagine being the patient: You are vaguely aware that you are in the operating room. Then you become aware that you (may[2]) lack air. Is this real? Can you do something about it? Can you get help?
The report: "She was unable to move." Which is expected:
[neuromuscular blocking agents (NMBAs)] greatly facilitate endotracheal intubation and provide adequate muscle relaxation without requiring very high sedative doses that can precipitate cardiovascular depression and cardiac arrest.6 [...] However, these drugs also significantly increase the incidence of awareness under anesthesia because paralyzed patients cannot move to indicate that they are not sufficiently sedated.7
Survival
Ability to breathe is existential. Air hunger is the ultimate survival drive. It turns a dream-like state into a single-minded fight for survival[3]. Thinking focuses on the immediate, survival-relevant details and nothing else. How and why don't matter. In this situation, the lack of air doesn't set in suddenly, but the threat is there nonetheless.
You notice you need to move to fix the breathing tube, you try to, but notice that you can't seem to move. You need to signal that something is wrong.
You focus all your intent and available concentration on calling for help and you scream[4]. The report: "She recalls making a 'monumental effort' to utter a small groaning noise, which alerted the surgeon to the fact that she was awake."
The Surgeon
Imagine the operating room: hands moving with trained economy, instruments passed, a routine performed hundreds of times. Imagine being the surgeon. From the report, we know “she heard the surgeons talking.” You are speaking to a colleague about the next step, about something ordinary, in medical language. You are not addressing the patient, because the patient is offline. Your professional bubble is tight. Then “a groaning sound.” You are surprised, and later described as embarrassed. Suddenly the patient is in the picture as a participant. You and your team respond professionally, perhaps adapting sedation. You tell her that the operation is “almost over,” hoping she will hear it and, absent further signs from her, return to routine. You do not offer explanations because you don't have them either.
(comic generated with ChatGPT 5.2 based on this post)
A Narrow Corridor
From the case report: "It was her impression that the surgeon rushed to finish the operation while full anesthesia was restored." But imagine being the patient on the table, hearing conversation without context. No faces, no sight, no ability to ask for clarification, just confident voices. Feeling pressure but no pain and not knowing why. Feeling the tube, feeling anxious, but not knowing why[2]. Being immobile and not knowing why. Being unable to speak and just barely to scream. Intention, but no clarity about ability. Awareness fading and finally awakening in the recovery room.
The corridor of awareness can be coherent without being wide, and this awareness is not reliably captured by obvious signs, precisely because those channels are suppressed:
In the past, anesthesiologists relied solely on clinical signs... to judge the depth of anesthesia. However, these clinical signs often fail to detect awareness. A closed claims analysis reported 61 cases of awareness during general anesthesia; only 9 of these cases were associated with hypertension, 4 with tachycardia, and 1 with movement. [emphasis mine]
The Parameters
While most of us have not had experiences like the patient's, we have all experienced sleep, and most of us have experienced meditation, exhaustion, drugs, or fever. But do you have a gears-level model of what was going on in those cases? Could you model the effects with numbers, or develop a program to measure them? In the following, I connect the regularities pointed out above to existing empirical measures of metacognition that are well-studied but rarely used together. I propose to use the parameters working memory bandwidth, nested observer depth, and metacognitive intransparency to quantify mental states like the one in the case report.
Working Memory Bandwidth (B)
In the case report, we see multiple indications that the experiential field is reduced to a narrow corridor in both the amount of detail (Tunnel vision) as well as in the available sensory channels. The patient could hear but not see, and feel pressure but not pain.
We can summarize this as a parameter B[5] that describes the width of the stable experiential field. We can ask: How much differentiation can inner experience sustain during a given interval? The experiential field is what is reported by the patient (or anybody else), so in practice it is limited by working memory, i.e., by how much of the experience can be remembered. There may be other ways to measure the bandwidth of the experiential field (discussed in the appendix) that do not depend on memory or on potentially biased self-reports.
The easiest way to approach this is to test how well people can report differences in the perceptions they become aware of. Naturally, B would be measured in bits/s. Cognitive psychology tells us the number of items perceived simultaneously, but usually doesn't ask for bits[6] - we need to multiply by the number of bits in which each item can vary. Recent studies of working memory[7] find a consistent bandwidth of 10 to 50 bits/s. And working memory is known to be reduced under anesthesia[8]. Thus, working memory bandwidth seems like a promising parameter for measuring this aspect of experience.
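As a sanity check on the units, the multiplication can be sketched in a few lines of Python. The numbers here are illustrative assumptions, not data from the cited studies:

```python
import math

def wm_bandwidth_bits_per_s(n_items, states_per_item, interval_s):
    """Working-memory bandwidth in bits/s: n_items items, each
    distinguishing states_per_item alternatives, held for interval_s
    seconds. Illustrative back-of-envelope, not a fitted model."""
    bits_per_item = math.log2(states_per_item)
    return n_items * bits_per_item / interval_s

# Example: 4 items, each one of 8 distinguishable states (3 bits each),
# maintained over a 1-second interval.
print(wm_bandwidth_bits_per_s(4, 8, 1.0))  # -> 12.0 bits/s
```

With these assumed numbers the estimate lands inside the 10 to 50 bits/s range reported in the literature; chunking and encoding strategies would shift it.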
Nested Observer Depth (d)
The paper says that awareness under anesthesia during surgical procedures is an uncommon event. But even in everyday life, we are not always equally aware of ourselves[9]. In a flow state, we may get so immersed in an activity that we are not aware that we are aware. We just are. At other times, we are dozing and just barely having any thoughts. Or fully asleep. And during a dream, we are also usually not aware that we are dreaming. At the other end, deep introspection or meditation can lead to higher levels of awareness and noticing that we are thinking about our thoughts.
In the case report, we can see that too (though we have to guess): Because this started as a normal procedure, the patient had little reason to worry about how the operation might affect her mind and was probably at a baseline level of self-awareness; the ability to introspect then faded with induction. On becoming aware during the procedure, she tried to make sense of her condition and act coherently under constraints. After the procedure, she probably wondered a lot about what had happened to her.
All of this points to differences in the depth of reflection and nested self-observation, like those studied by Nested Observer Windows Theory[10], after which I'm naming this parameter d. We can use existing measures of metacognition[11], such as ones based on self-reports, to approximate how many self-modeling steps are maintained under reflection: how far you can go in “I notice that I notice…” before it collapses. Additional ways to measure this depth are discussed in the appendix.
Metacognitive Intransparency (τ)
More heavily adapted from Scott Garrabrant
“I notice that I notice…” before it collapses. Why does it collapse? Or, more generally, why is it so difficult to get accurate information about our own reasoning? The more we try to observe ourselves, or even to observe how we observe, the more difficult it seems to get. We have many biases without being aware of them. Immediate experience, such as the impeded breathing in the case study, can be vivid (the patient "felt that the breathing tube was pushed up against the inside of her throat") while its internal causes and effects remain unclear. The case report states "impeding her ability to breathe," but the patient likely couldn't make that causal connection; she was likely interpreting the sensation of the tube as obstruction while the airflow to her lungs was likely adequate. This lack of transparency about the actual causes and interrelations of our sensations is known to contribute to stress and anxiety[12].
I'm calling the degree to which we lack clarity of the underlying causes and effects of our experience Metacognitive Intransparency τ. τ = 1 implies complete intransparency of the underlying mechanisms. When we feel something, it is not clear why. When we think something, it is not clear what led to the thought. τ = 0 is the ideal limit at which introspection tracks all the contributing factors and causes.
The Paralyzed State
With the parameters B, d, and τ, we can describe the case numerically.
When the patient became partially aware, she had limited bandwidth B, moderate depth d, and high τ: she could partially experience her situation, represent her predicament, and form intentions, but lacked clarity about her state, both physically and mentally.
But d is functionally misaligned: it can model the trap without delivering effective control. In her case, she could sustain the effort to signal her distress, but that may not always be the case (which is why the article urges: "Verbal communication provides reassurance.").
In this patient's case, the low-to-moderate B leads to a lack of information about the operating room, but we could imagine that seeing too many details would also be distressing. The locked-in state thus seems primarily characterized by the combination of high d × high τ, which may amplify distress.
A Phase Diagram of Mental States
Now we can replay the patient's case through the lens of the parameters. A patient in a normal waking state (high B), with no reason to worry about the operation (normal d) and a normal ability to introspect her mental states (moderate τ), is anesthetized. A transition into a state intended to contain no stabilized experiential field (B and d ~zero) reverses into a thin corridor of content: voices, pressure, breath (low B). It is a state of high confusion about the state and its inner and outer causes (high to extreme τ). Intention, reflection, and conscious effort persist without feedback about motion (d without control). After the operation, the experiential field is restored (high B), but a lack of explanation and felt integration prolongs confusion about what was happening (high τ) despite reflection or rumination (high d).
For a table with illustrative data for the points in this chart, see this footnote[13].
Generalization
This case is not unique. It doesn't cover the full range of the parameter space, but it illustrates that degrees of awareness form a phase space. Fragmentary awareness under anesthesia occupies a specific region of this space; meditation and the different stages of sleep occupy others. Instead of asking "were they awake/aware?", you can point to the space and ask which region they were in.
I believe the combination of these parameters is quite general and useful to describe a wide range of mental phenomena. This doesn't rule out other parameters that could be used to quantify aspects of experience, such as the felt valence or urgency. I am just convinced that these three together span an interesting section of cognition[14] worth further investigation.
I thank the reviewers Jonas Hallgren, Christian Kleineidam, Cameron Berg, Justis Mills, and Chris Pang for their helpful comments.
Technical Appendix
Above, I introduced the parameters in the context of the case study, but mostly for intuition-building purposes. As shown below, these parameters are well-studied, and there are multiple lines of research for each, even if they are rarely connected as tightly as here. I offer parameter definitions that capture the essence of independent lines of research and orthogonal theories of cognition and consciousness. I will explain and motivate each parameter in detail, provide an information-theoretic formalization of the underlying logic, and give several existing, often quite well-studied ways to estimate a proxy for each parameter. At the end, I offer some synthesis based on these parameters beyond the case study.
The bandwidth parameter B measures how much information is stably present in the overall recurrent processing system.
Why should we expect a low-dimensional core?
Many high-dimensional systems effectively reduce to a low-dimensional sub-space, which captures their meaningful long-term dynamics. This is well-known in fluid dynamics[15] and in robotics[16].
Predictive Coding (PC) implies a compression of sensory data down to the latents governing the highest level of operation[17]. But PC doesn't say anything about the personal (subjective) level.
So we know that there are low-dimensional latent representations of all the agent's senses, and the question is how those relate to subjective perception. It is clear that the highest level of the predictive hierarchy does not coincide with the subjective experiential field, because that top level consists mostly of slow-changing hyperparameters (and also seems to have more bits and dimensions than the subjective experiential field).
If the personal "level" is not the top of the hierarchy, where is it? PC doesn't try to answer this[17], but we can look at other indications.
Only certain circuit topologies[18] allow stable conditional influence across the multiple systems required for reporting.
A reportable mental state requires coordinated access across perceptual systems, memory, multiple motor systems (including e.g. for language), and multiple others. No single circuit can drive all of these unless it participates in a persistent, self-sustaining loop with other distant circuits that control action and report (at least in biologically plausible models).
Thus, a state is reportable when its encoding variables exert a stable causal influence on the set of systems required for action (which includes communication).
Selection is necessary because the system cannot simultaneously propagate all latent representations through long-range loops. PC processes in the brain update with time constants of 10-50ms[19], but global loops stabilize within Δt≈200 to 500ms[20]. Which subset of representations has the characteristics required to be candidates for selection?
GNW says[21] that the selected representations participate in a globally coherent explanation (in PC terms: a multi-level configuration of latents that jointly minimizes prediction error). This configuration has to be stable, with low error and high prediction value (studies show[22] that error signals scale with prediction error × precision), over the duration of a global loop.
The ability to stabilize a representation across multiple regions (perception, memory, motor control, etc) increases coherence, coordination, and communication (a compressed, discrete, reportable state is efficient for mutual prediction).
In such a configuration, a stable state can influence a sequence of states, which enables planning and, e.g., conditional reasoning. More speculatively, sequencing enables long-range credit assignment[23].
Thus, if you want sample-efficient learning (which Predictive Coding predicts), global control, and communicable outputs under cost constraints (as in a biologically evolved brain), then you need a bottleneck that selects a few coherent predictive states. What is the bandwidth B of this sequential bottleneck? It should be constrained neuroanatomically by cortical surface area, thalamocortical connectivity, and energetic limits.
Information-theoretic formalization
Formally, we can model B as the capacity, in bits per unit time, of a global recurrent state space (aka workspace) to maintain mutually consistent, jointly addressable state vectors.
The (implied) vectors need to be mutually consistent because the overall state needs to be stable (for a reportable time). Any inconsistency between state vectors in a recurrent process would quickly destabilize at least one of those vectors, while consistent vectors reinforce each other (in the overall context, including perceptions that may change and thus lead to new consistent stable states).
Jointly addressable means each representation can be independently queried, recombined, or acted upon, while the others remain stably represented. Without this, you can't report, reason, or plan. Querying/reporting etc. means implied transition operators that depend on one state and are conditionally invariant to the other states.
Let G(t) be the global state space state (a high-dimensional vector). Let S(t)={s1,…,sn} be the set of jointly addressable concurrently stabilized sub-states (e.g., object tokens, intentions, feelings). Then we can express B with standard mutual information I as
B := (1/Δt) · max_S min_{0 ≤ u ≤ Δt} I(S(t); G(t+u))
where the maximization is over candidate sets S, subject to a threshold θ of mutual information that ensures stable integration.
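To make the definition concrete, here is a toy numerical sketch of my own (an illustration, not a brain model): a one-bit sub-state s is written into a noisy global state g whose fidelity decays with delay u, and B is estimated as the worst-case retained mutual information per unit time over the loop duration. All constants are assumptions chosen for illustration.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (in bits) of two binary samples."""
    joint = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

def bandwidth(flip_per_step=0.02, steps=5, dt=0.5, n=20000, seed=0):
    """min over delays u of I(s; g(t+u)), divided by Δt (bits/s).
    The sub-state s is held fixed; noise accumulates in g per step."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=n)
    g = s.copy()
    mis = []
    for _ in range(steps):
        flips = rng.random(n) < flip_per_step
        g = np.where(flips, 1 - g, g)
        mis.append(mutual_information(s, g))
    return min(mis) / dt

print(round(bandwidth(), 2))  # roughly 1.1 bits/s for this one-bit toy channel
```

A real workspace would carry many sub-states in parallel; the point of the toy is only that B is defined by the worst-case (end-of-loop) retained information, not by the instantaneous input rate.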
Proxies for Working Memory Bandwidth B
All of the following are empirical proxies for this conceptual property. Each is noisy and confounded in its own way, but if the assumption holds that the experiential field corresponds to the sequential processing bottleneck, then all should arrive at approximately the same value for comparable Δt.
Behavioural bandwidth: Take the number of items in working memory (reported or measured[6]) times the bits per item: each item contributes the size in bits of its corresponding vector to the representational complexity. This can be estimated from change detection, multiple-object tracking, and attentional blink tests to be about 10[7] to 50[24] bits/s. This measure is confounded by chunking, encoding, and executive-control limitations. The behavioral bandwidth serves as a lower bound for B.
Neural integration measures: At the neural level, we can measure the system’s ability to sustain highly integrated yet differentiated activity patterns over the workspace timescale (~200–500 ms). Integrated Information Theory’s Φ would, in principle, measure this, but has not been calculated for a full human brain so far. More practically, the Perturbational Complexity Index (PCI)[25] quantifies this, but not in terms of bits/s. PCI does show across conditions that conscious states are reliably associated with high integration and high differentiation, whereas unconscious states show a collapse. It is not clear how to convert this to bits/s.
Phenomenal richness: Structured introspection shows how much structured content seems jointly present in experience. This can be estimated with visual[26], auditory[27], tactile[28], and other questionnaires. While an estimation of corresponding bits/s is not documented in the literature, it seems practicable to perform, and a bandwidth in the range of 10 to 50 bits/s seems plausible. Additionally, subjective reports show that vividness of experience during anesthesia, fatigue, or low-dose sedation versus alert, task-engaged states match the physiological effects[29].
The depth parameter d is the number of recursive self-modeling steps for which the system can maintain stable fixed points. This means:
The system models itself.
It models itself modeling itself.
It models itself modeling itself modeling itself.
…and so on…
…for d steps, before the representations stop converging (e.g., in a stable cortical loop) and begin to distort or collapse, e.g., from noise.
Here, "the system models itself" means that some of its internal states encode predictions about other internal states.
Recursive self-modeling is the domain of theories of consciousness such as Recurrent Processing Theory[30], Higher Order Theory[31], Attention Schema Theory[32], and the Nested Observer Window model[10].
In practice, deeper recursion might be rarer or less stable, which could be modelled as fractional depth.
Recursion depth is limited by architectural constraints, as outlined by the theories above, but the degree to which it is realized depends on development and training[33], i.e., education and other opportunities such as meditation[34][35].
Information-theoretic formalization
Let M(0) be the first-order representations, and let M(k) = f(M(k−1)) be the k-th self-model, for a self-modeling map f. Then
d = max { k : ‖M(k) − f(M(k))‖ < ε }
i.e., the deepest level at which the self-model remains an approximate fixed point within the stability threshold ε.
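As an illustration of this definition, here is a deliberately crude toy (all constants and the distortion model are assumptions for illustration, not cognitive claims): each nesting level adds a systematic distortion that doubles in size, since a fixed modeling budget has to be shared across levels, and d is the deepest level at which re-applying the modeling map moves the representation by less than ε.

```python
import numpy as np

def model(m, level, delta=0.02):
    """Self-model at nesting `level`: a copy of m plus a systematic
    distortion whose size doubles per level (toy assumption)."""
    u = np.ones_like(m) / np.sqrt(m.size)  # fixed unit distortion direction
    return m + delta * 2 ** level * u

def recursion_depth(m0, eps=0.2, max_levels=20):
    """Largest k for which M(k) is still an approximate fixed point."""
    m, d = m0, 0
    for k in range(1, max_levels + 1):
        m = model(m, k)                             # M(k) = f(M(k-1))
        residual = np.linalg.norm(model(m, k) - m)  # ‖f(M(k)) - M(k)‖
        if residual >= eps:                         # fixed point no longer
            break                                   # stable: stop here
        d = k
    return d

print(recursion_depth(np.zeros(8)))  # -> 3 with these toy constants
```

With delta = 0.02 and ε = 0.2 the residual is 0.02·2^k, so the fourth level (residual 0.32) is the first to break, giving d = 3; shrinking delta (a larger modeling budget) deepens the stable recursion, which is the qualitative behavior the training results above would predict.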
Proxies for recursion depth
Recursive ToM performance: Behavioural tests have measured[11] how many explicit levels the system can maintain and how confidently[36].
Neural higher-order thought availability: Overlapping vmPFC/dmPFC activation in neuroimaging studies can be interpreted as supporting “second-order representations”[37]. This can plausibly be extended to detect higher levels.
Phenomenological meta-awareness: Take the subject’s explicit awareness of their own ongoing mental states as a proxy for at least one level of recursive self-modelling[38], which can be tested, e.g., with the SART. Deeper nestings of awareness are commonly reported in meditation studies[39], but depth of recursion is not systematically reported. Such reports are confounded by reporting biases, task demands, and retrospective reconstruction.
Metacognitive Intransparency (τ)
More heavily adapted from Scott Garrabrant
Above, we established that the parameter τ measures how opaque the process that generates your cognition is to you. More precisely, τ measures the degree of information loss in the mapping from generative states and processes to introspectively accessible meta-representations. Metacognitive intransparency is partly a result of neuroanatomy. We have already established that the bandwidth for integrated processing is limited. But total sensory processing has orders of magnitude more bandwidth, and that includes the self-feedback channels. Any introspection channel must therefore compress both external and internal signals massively, so there is a floor on τ set by anatomy, i.e., by the bandwidth B. Additionally, external signals often carry high valence and thus compete for self-modeling resources. Intransparency is therefore expected to be high unless the external environment is unusually quiet and low-entropy. On the other hand, intransparency can clearly be reduced by training[34], which often involves quietness and repeated practice.
Information-theoretic formalization
Let M be the generative model’s states, and let M̂ be the introspective model’s estimate of M. Using mutual information I and entropy H, intransparency can be expressed as the normalized information loss:
τ = 1 − I(M; M̂) / H(M)
τ=0 means transparency: introspection tracks generative causes closely. τ=1 means the system’s introspection is blind to its own machinery.
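The normalized information loss can be computed directly for a discrete toy system. In this hypothetical example, the generative state M takes four values but introspection M̂ only reports which half M falls in, i.e., it compresses 2 bits down to at most 1:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def tau(joint):
    """Normalized information loss τ from a joint distribution P(M, M̂)."""
    p_m = joint.sum(axis=1)      # marginal of the generative state M
    p_hat = joint.sum(axis=0)    # marginal of the introspective estimate M̂
    mi = entropy(p_m) + entropy(p_hat) - entropy(joint.ravel())
    return 1 - mi / entropy(p_m)

# M uniform over {0,1,2,3}; M̂ = M // 2 (perfectly reliable but coarse).
joint = np.zeros((4, 2))
for m in range(4):
    joint[m, m // 2] = 0.25

print(tau(joint))  # -> 0.5: half of H(M) is introspectively invisible
```

Note that τ = 0.5 here even though introspection never makes an error: intransparency measures what introspection cannot resolve, not only what it gets wrong.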
Proxies for Metacognitive Intransparency τ
Metacognitive inefficiency: Using well-known measures of metacognitive sensitivity[40], such as perceptual discrimination tasks, we can use the derived metacognitive efficiency[41] to approximate τ as 1 − meta-d′/d′. This serves as an upper bound for τ due to the limited domain of the discrimination task.
Neural decoding gap: A measure of the fraction of stimuli that can be decoded in early layers but not in later layers. Calculate τ as (Acc_early − Acc_higher)/Acc_early, for decoding accuracies Acc at different layers. No study measuring this exact representational MI gap was found.
Emotional clarity: Psychologists can measure the emotional clarity[42] of patients with instruments like the Emotional Clarity Questionnaire (ECQ)[43] or the Trait Meta-Mood Scale. Like many such measures, these are confounded by reporting biases.
Subjective ineffability: Ineffability is having an experience consciously present but resistant to stable conceptualization or verbal report, or even without a sense of having it[44] (at least until probed). As a subjective measure, it comes in degrees. Ineffability is one subscale of the Mystical Experience Questionnaire[45]. Reports of ineffability are common in consciousness research[46] and can also be found in the meditation survey[39]. It is confounded by reporting biases.
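The metacognitive-inefficiency proxy above reduces to a one-line computation; the input values below are made up for illustration, not taken from any dataset:

```python
def tau_from_meta_d(d_prime, meta_d_prime):
    """Upper-bound proxy for τ: 1 - meta-d'/d'. Values near 0 mean
    confidence tracks accuracy almost as well as the stimulus allows;
    values near 1 mean confidence reports are uninformative."""
    return 1 - meta_d_prime / d_prime

# E.g. first-order sensitivity d' = 2.0 but metacognitive sensitivity
# meta-d' = 1.2: 40% of the sensitivity is lost on the way to
# confidence reports.
print(tau_from_meta_d(2.0, 1.2))  # -> 0.4
```

The analogous decoding-gap proxy has the same shape, with decoding accuracies in place of sensitivities.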
There may also be theoretical limitations for τ[47].
A 33-year-old woman in good physical health presented to the hospital for elective rhinoplasty. During the operation, she became aware that she was awake. She heard the conversation among the surgical team members and felt pressure on bone in her nose, but she did not feel pain. The patient also felt that the breathing tube was pushed up against the inside of her throat, impeding her ability to breathe. She was unable to move but recalls making a “monumental effort” to utter a small groaning noise, which alerted the surgeon to the fact that she was awake. She heard the surgeon verbally acknowledge her condition and offer reassurance that the operation was almost over. It was her impression that the surgeon rushed to finish the operation while full anesthesia was restored, and she later awoke in the recovery room without complications. During the first follow-up visit, the surgeon did not address the situation, so the patient brought it up at the end of the visit. The surgeon seemed surprised and embarrassed that the patient remembered waking up during the operation but could not explain what happened.
General anesthesia suppresses central nervous system activity and results in unconsciousness and total lack of sensation. Her case additionally involved routine neuromuscular blockade with NMBAs, so behavioral signs were suppressed.
The case study does not make clear whether the patient actually lacked air due to the positioning of the tube or merely felt obstructed. In any case, it is plausible that the patient had no clarity about this due to her partial awareness, in the same way it is possible to feel anxious without knowing about what.
In this experiment, the sedation levels were changed step-by-step using anaesthesia, and the performance accuracy during the execution of working memory was assessed using a dual-task paradigm. [...] The results of the short-delay recognition task showed that the performance was lowest at the deep stage. The performance of the moderate stage was lower than the baseline.
The question of whether we are aware just doesn't come up very often. We just latently know that we are aware.
Ask yourself this question ‘Am I conscious now?’ and you will reply ‘Yes’. Then, I suggest, you are lured into delusion – the delusion that you are conscious all the time, even when you are not asking about it.
Now ask another question, ‘What was I conscious of a moment ago?’ This may seem like a very odd question indeed but lots of my students have grappled with it and I have spent years playing with it, both in daily life and in meditation. My conclusion? Most of the time I do not know what I was conscious of just before I asked.
The model likens the mind to a hierarchy of nested mosaic tiles-where an image is composed of mosaic tiles, and each of these tiles is itself an image composed of mosaic tiles. Unitary consciousness exists at the apex of this nested hierarchy where perceptual constructs become fully integrated and complex behaviors are initiated via abstract commands. We define an observer window as a spatially and temporally constrained system within which information is integrated, e.g. in functional brain regions and neurons.
Having reviewed the desirable properties of measures of metacognition, let us now turn our attention to the existing measures of metacognitive ability. One popular measure is the area under the Type 2 ROC function31, also known as AUC2. Other popular measures are the Goodman–Kruskall Gamma coefficient (or just Gamma), which is essentially a rank correlation between trial-by-trial confidence and accuracy32 and the Pearson correlation between trial-by-trial confidence and accuracy (known as Phi33). Another simple but less frequently used measure is the difference between average confidence on correct trials and the average confidence on error trials (which I call ΔConf).
In the emotional intelligence framework, emotions are regarded as an important source of information, and clearly identifying one’s emotions is required to adaptively utilize the information emotions provide. [...] This suggests that a lack of emotional clarity may interfere with achieving goals in a given situation, rendering individuals susceptible to psychological distress or maladjustment.
If you wanted to go beyond these parameters and be more systematic, you'd want to use something like PCA on a larger number of measures of cognition.
If many modes [...] decay exponentially, then all that is left after the transients decay are the relatively slowly evolving modes of long-term importance. The evolution of these few significant modes effectively forms a low-dimensional dynamical system on a low-dimensional set of states in state space.
We identify the dynamics on generic, low-dimensional attractors embedded in the full phase space [...] This allows us to obtain computationally-tractable models for control which preserve the system’s dominant dynamics
[Predictive Coding] aims to be complete: it offers not just part of the story about cognition, but one that stretches all the way from the details of neuromodulator release to abstract principles of rational action governing whole agents. [page 2]
The more accurately the brain’s internal assumptions reflect its incoming sensory stream, the less information would need to be stored or transmitted inwards from the sensory periphery. All that would need to be sent inwards would be an error signal – what is new or unexpected – with respect to those predictions. [page 4]
a generative model could help the brain to distinguish between changes to its sensory data that are self-generated and externally generated [...] regulating its motor control based, not on actual sensory feedback, but on expected sensory feedback, [... and] be, inverted to produce a discriminative model. [page 7]
For predictive coding to say something specific about the existence or character of top-down effects at the personal level, it would need to say which aspects of that subpersonal information give rise to which personal-level states (beliefs and perceptual contents). These assumptions – which connect the subpersonal level to the personal level – are currently not to be found anywhere within predictive coding’s computational model. [page 7-8]
A non-linear network ignition associated with recurrent processing amplifies and sustains a neural representation, allowing the corresponding information to be globally accessed by local processors.
This observer- or reader-defined synchrony is critical in brain operations. If the action potentials from many upstream neurons arrive within the membrane time constant of the target (reader) neuron (τ: 10–50 ms for a typical pyramidal neuron), their combined action is cooperative because each of them contributes to the discharge of the reader neuron.
recurrent processing between anterior and posterior ventral temporal cortex relates to higher-level visual properties prior to semantic object properties, in addition to semantic-related feedback from the frontal lobe to the ventral temporal lobe between 250 and 500ms after stimulus onset.
Once we are conscious of an item, we can readily perform a large variety of operations on it, including evaluation, memorization, action guidance, and verbal report.
Response strength should therefore always be a function of both the size of the error and its precision.
Attention is the weighting of sensory signals by their precision (inverse variance).
Empirical evidence:
trial-by-trial estimates of four key inferential variables: prediction error, surprise, prediction change and prediction precision (where surprise is the precision-weighted variant of prediction error). [...] gamma was predicted by surprise (more so than by prediction error). Moreover, beta-band modulations were significantly predicted by prediction change. [...] alpha-band modulations were significantly predicted by the precision of predictions
While "credit assignment" is ML terminology and not clearly known to be implied in sequential or "system 2" reasoning, a related terminology is "learning by thinking":
Canonical cases of learning involve novel observations external to the mind, but learning can also occur through mental processes such as explaining to oneself, mental simulation, analogical comparison, and reasoning. Recent advances in artificial intelligence (AI) reveal that such learning is not restricted to human minds: artificial minds can also self-correct and arrive at new conclusions by engaging in processes of 'learning by thinking' (LbT).
When researchers sought to measure information processing capabilities during ‘intelligent’ or ‘conscious’ activities, such as reading or piano playing, they came up with a maximum capability of less than 50 bits per second.
As soon as the FFS [feedforward sweep] has reached a particular area, horizontal connections start to connect distant cells within that area, and feedback connections start sending information from higher level areas back to lower levels, even all the way down to V1. Together, these connections provide what is called recurrent processing [(RP)].
Neurons in lower regions modify their spiking activity so as to reflect the higher level properties. For example, a V1 neuron receiving feedback signals will fire more strongly when it is responding to features that are part of an object.
RP allows for dynamic interactions between areas… RP may thus form the basis of dynamic processes such as perceptual organization, where different aspects of objects and scenes are integrated into a coherent percept.
The remaining difference between Stage 4 and Stages 1 and 2 is that in the latter there is only feedforward processing, while in Stage 4 (and Stage 3) there is recurrent processing. Could that be the essential ingredient that gives phenomenality…? That recurrent processing is necessary for visual awareness is now fairly well established.
Introspective consciousness occurs when a mental state is accompanied both by such a second-order thought, and also by a yet higher-order thought that one has that second-order thought. [page 48]
Third-order thoughts do occur when we introspect; can fourth-order thoughts also occur? There is reason to think so. Sometimes we are actually conscious of our introspecting, and that means having a fourth-order thought about the third-order thought... [page 344]
We propose that the top–down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema,’ in much the same way that it constructs a schematic model of the body, the ‘body schema.’ The content of this internal model leads a brain to conclude that it has a subjective experience. One advantage of this theory is that it explains how awareness and attention can sometimes become dissociated; the brain’s internal models are never perfect, and sometimes a model becomes dissociated from the object being modeled.
The practice of meditation [...] offers the ability, with practice, to enable the development of awareness of awareness itself. The aim is also to reduce suffering as a consequence of this greater openness, through reduced reactivity to experience
we show that human observers are able to produce nested, above-chance judgements on the quality of their decisions at least up to the fourth order (i.e. meta-meta-meta-cognition).
A domain-general network, including medial and lateral prefrontal cortex, precuneus, and insula was associated with the level of confidence in self-performance in both decision-making and memory tasks.
By comparing our results to meta-analyses of mentalising, we obtain evidence for common engagement of the ventromedial and anterior dorsomedial prefrontal cortex in both metacognition and mentalising, suggesting that these regions may support second-order representations for thinking about the thoughts of oneself and others.
‘Meta-awareness,’ a term often used interchangeably with metaconsciousness, is the state of deliberatively attending to the contents of conscious experience.
Initially in this example [reading a book], meta-awareness would be absent until you notice that you are mind wandering. This abrupt realization (almost like waking up) represents the dawning of metaconsciousness, in which you take stock of what you are thinking about and realize that it has nothing to do with what you are reading.
our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality.
Emotional clarity refers to the extent to which you know, understand and are clear about which emotions you are feeling and why you are feeling them. If you have poor emotional clarity, you may have a difficult time understanding the origins of your emotions. For example, you may say things like, “I feel bad and I don’t understand why”.
There is evidence for two types of dissociations between consciousness and meta-consciousness, the latter being defined as the intermittent explicit re-representation of the contents of consciousness. Temporal dissociations occur when an individual, who previously lacked meta-consciousness about the contents of consciousness, directs meta-consciousness towards those contents; for example, catching one's mind wandering during reading.
The “hard problem” of consciousness is to explain why and how physical processes are accompanied by experience at all, not just how they support discrimination, report, etc. It is the problem of explaining “why there is ‘something it is like’ for a subject in conscious experience,” and why this “cannot be captured by purely structural or functional description.”
Even with perfect introspection channels, we might run into limitations because Löb's theorem shows that it is not possible to have a complete and sound self-model (in the sense of fully trusting "provable by me" statements about myself). The ways around that are a) to leave the semantics out of the internal representation (i.e., not transmitting that information internally) or b) to add probabilistic uncertainty to the reflective self-representation (making it lossy). Both can be seen as a τ > 0 gap in the reflective self-trust/semantics channel.
Simply put, intrinsic and causal information does not involve anything outside of the system [...] This intrinsicness renders integrated information irrelevant to the functions of the system. [...]
It is at least theoretically possible that we do whatever we are doing without consciousness. Such a state makes the ‘use’ of consciousness mysterious. [...]
For the sake of future development, IIT should more seriously take metacognitive accessibility to experience into account.
“Illusionism” about consciousness, a label designed to help indicate why it seems to us that phenomenal consciousness is real (Frankish, 2016, 2017). Illusionism is motivated in part by broader theoretical considerations, such as the problematic nature of consciousness from the standpoint of physicalism and the observation that even reductive accounts of phenomenal experience typically suggest some sort of misapprehension of what is really going on. Illusionism claims that introspection involves something analogous to ordinary sensory illusions; just as our perceptual systems can yield states that radically misrepresent the nature of the outer world, so too, introspection yields representations that substantially misrepresent the actual nature of our inner experience. In particular, introspection represents experiential states as having phenomenal properties—the infamous and deeply problematic what-it-is-likeness of our qualitative mental states. Illusionists claim that these phenomenal properties do not exist, making them eliminativists about phenomenal consciousness. What is real are quasi-phenomenal properties—the non-phenomenal properties of inner states that are detected by introspection and misrepresented as phenomenal.
Epistemic status: I’m using a single clinical case study as a running example to illustrate three empirical aspects of cognition that are well-documented but rarely used together. The point is not that this case study proves anything, but to build an intuition that I then connect to more systematic empirical studies later.
Content warning: Anesthesia, quotes from the patient can be read as body horror.
LLM use: I have used LLMs for a) researching prior work and other sources, b) summarizing and reviewing, c) generating the comics and code for one of the graphics, and d) coming up with structures to make the dry topic more approachable, including finding the case study to illustrate the parameters. All LLM-generated sentences that made it into this document have been heavily rewritten.
Induction
A 33-year-old woman voluntarily undergoes a rhinoplasty (a surgical procedure to reshape the nose) under general anesthesia[1]. The intended and expected effect for the patient is induction of anesthesia and then "waking up" in the recovery room with no reportable experience during the operation.
In the case study, that hard cut fails.
The case report summarizes: “During the operation, she became aware that she was awake.” But this summary assumes an understanding of "being awake" that glosses over a perceptual asymmetry: some parts of experience can return while most don't. There may be an inability to move (as in sleep paralysis), incoherent experienced content (as in fever dreams), impossibilities (like flying in lucid dreaming), and, especially, difficulty communicating (clear internal speech but unintelligible sleep talking).
Partial Wakeup
The case report states: "She heard the conversation among the surgical team members and felt pressure on bone in her nose, but she did not feel pain." Note these two deviations from normal experience:
Auditory content returns but without a richly constructed visual scene.
Somatosensory content returns as pressure, but without pain.
Her internal experience is there, but narrow.
Urgency
The case report continues: "The patient also felt that the breathing tube was pushed up against the inside of her throat, impeding her ability to breathe."
Imagine being the patient: You are vaguely aware that you are in the operating room. Then you become aware that you (may[2]) lack air. Is this real? Can you do something about it? Can you get help?
The report: "She was unable to move." This is expected:
Survival
The ability to breathe is existential. Air hunger is the ultimate survival drive. It turns a dream-like state into a single-minded fight for survival[3]. Thinking narrows to the immediately survival-relevant details and nothing else. How and why don't matter. In this situation, the lack of air doesn't set in suddenly, but the threat is there nonetheless.
You notice you need to move to fix the breathing tube, you try to, but notice that you can't seem to move. You need to signal that something is wrong.
You focus all your intent and available concentration on calling for help and you scream[4]. The report: "She recalls making a 'monumental effort' to utter a small groaning noise, which alerted the surgeon to the fact that she was awake."
The Surgeon
Imagine the operating room: hands moving with trained economy, instruments passed, a routine performed hundreds of times. Imagine being the surgeon. From the report, we know “she heard the surgeons talking.” You are speaking to a colleague about the next step, about something ordinary, in medical language. You are not addressing the patient because the patient is offline. Your professional bubble is tight. Then “a groaning sound.” You are surprised; the report later describes you as embarrassed. Suddenly the patient is in the picture as a participant. You and your team respond professionally, maybe adapting sedation. You tell her that the operation is “almost over,” hoping she will hear it, and, with no further signs from her, you go back to routine. You do not offer explanations because you don't have them either.
A Narrow Corridor
From the case report: "It was her impression that the surgeon rushed to finish the operation while full anesthesia was restored." But imagine being the patient on the table, hearing conversation without context. No faces, no sight, no ability to ask for clarification, just confident voices. Feeling pressure but no pain and not knowing why. Feeling the tube, feeling anxious, but not knowing why[2]. Being immobile and not knowing why. Being unable to speak and just barely to scream. Intention, but no clarity about ability. Awareness fading and finally awakening in the recovery room.
The corridor of awareness can be coherent without being wide, and this awareness is not reliably captured by obvious signs exactly because these channels are suppressed:
The Parameters
While most of us have not had experiences like the patient's, we have all experienced sleep, and most of us have experienced meditation, exhaustion, drugs, or fever. But do you have a gears-level model of what was going on in these cases? Could you model the effects with numbers or develop a program to measure them? In the following, I will connect the regularities pointed out above to existing empirical measures of metacognition that are well-studied but rarely combined. I propose the parameters working memory bandwidth, nested observer depth, and metacognitive intransparency to quantify mental states like the one in the case report.
Working Memory Bandwidth (B)
In the case report, we see multiple indications that the experiential field is reduced to a narrow corridor in both the amount of detail (Tunnel vision) as well as in the available sensory channels. The patient could hear but not see, and feel pressure but not pain.
We can summarize this as a parameter B[5] that describes the width of the stable experiential field. We can ask: How much differentiation can the inner experience sustain during a given interval? The experiential field is what the patient (or anybody else) can report. Thus, in practice, it is limited by working memory, or how much of the experience can be remembered. There may be other ways to measure the bandwidth of the experiential field (discussed in the appendix) that do not depend on memory or potentially biased self-reports.
The easiest way to approach this is by testing how well people can report differences in the perceptions they become aware of. Naturally, B would be in bits/s. Cognitive psychology tells us the number of items perceived simultaneously, but usually doesn't ask for bits[6]; we need to multiply by the number of bits in which each item can vary. Recent studies of working memory[7] find a consistent bandwidth of 10 to 50 bits/s. And working memory is known to be reduced under anesthesia[8]. Thus, working memory bandwidth seems like a promising parameter to measure this aspect of experience.
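As a back-of-envelope illustration of this multiplication (the numbers below are hypothetical, not measurements), the bits/s calculation looks like this:

```python
import math

def wm_bandwidth_bits_per_s(n_items, alternatives_per_item, interval_s):
    """B = items held x bits per item / reporting interval.
    Bits per item come from the number of distinguishable alternatives."""
    return n_items * math.log2(alternatives_per_item) / interval_s

# Hypothetical numbers: 4 items, each one of 16 distinguishable
# alternatives (4 bits), reported over a 1-second interval.
print(wm_bandwidth_bits_per_s(4, 16, 1.0))  # 16.0 bits/s, inside the cited 10-50 range
```

Note that the estimate is sensitive to how finely each item is allowed to vary, which is exactly why item counts alone understate the measurement problem.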
Nested Observer Depth (d)
The paper says that awareness under anesthesia during surgical procedures is an uncommon event. But even in everyday life, we are not always equally aware of ourselves[9]. In a flow state, we may get so immersed in an activity that we are not aware that we are aware. We just are. At other times, we are dozing and just barely having any thoughts. Or fully asleep. And during a dream, we are also usually not aware that we are dreaming. At the other end, deep introspection or meditation can lead to higher levels of awareness and noticing that we are thinking about our thoughts.
In the case report, we can see that too (though we have to guess): Because this started as a normal procedure, the patient had little reason to worry how the operation might affect their mind, and was probably at a baseline level of self-awareness, and then the ability to introspect faded with induction. When becoming aware during the procedure, the patient tried to make sense of their condition and act coherently under constraints. After the procedure, the patient probably wondered a lot about what had happened to them.
All of this points to differences in the depth of reflection and nested self-observation like the one studied by Nested Observer Windows Theory[10], after which I name this parameter d. We can use existing measures of metacognition[11], such as those based on self-reports, to approximate how many self-modeling steps are maintained under reflection: how far you can go in “I notice that I notice…” before it collapses. Additional ways to measure this depth are discussed in the appendix.
Metacognitive Intransparency (τ)
“I notice that I notice…” before it collapses. Why does it collapse? Or, more generally, why is it so difficult to get accurate information about our own reasoning? The more we try to observe ourselves or even to observe how we observe, the more difficult it seems to get. We seem to have many biases without being aware of them. Immediate experience, such as the breath impediment in the case study, can be simultaneously vivid (the patient "felt that the breathing tube was pushed up against the inside of her throat"), while its internal causes and effects remain unclear. The case report states "impeding her ability to breathe," but the patient likely couldn't make that causal connection, and was likely interpreting the sensation of the tube as obstruction, while the airflow to the lungs was likely adequate. The lack of transparency about the actual causes and interrelations of our sensations is known to contribute to stress and anxiety[12].
I'm calling the degree to which we lack clarity of the underlying causes and effects of our experience Metacognitive Intransparency τ. τ = 1 implies complete intransparency of the underlying mechanisms. When we feel something, it is not clear why. When we think something, it is not clear what led to the thought. τ = 0 is the ideal limit at which introspection tracks all the contributing factors and causes.
The Paralyzed State
With the parameters B, d, and τ, we can describe the case numerically.
When the patient became partially aware, she had limited bandwidth B, moderate depth d, and high τ: she could partially experience her situation, represent her predicament, and form intentions, but lacked clarity about her state, both physically and mentally.
But d is functionally misaligned: it can model the trap without delivering effective control. In her case, she could sustain the effort to signal her distress, but that may not always be the case (which is why the article urges: "Verbal communication provides reassurance.").
In this patient's case, the low-to-moderate B means a lack of information about the operating room, though one could imagine that seeing too many details would be distressing as well. The locked-in state thus seems primarily characterized by the combination of high d × high τ, which may amplify distress.
A Phase Diagram of Mental States
Now we can replay the patient's case through the lens of the parameters. A patient in a normal waking state (high B), with no reason to worry about the operation (normal d) and a normal ability to introspect her mental states (moderate τ), is anesthetized. A transition into a state intended to contain no stabilized experiential field (B and d ~zero) reverses into a thin corridor of content: voices, pressure, breath (low B). It is a state of high confusion about the state and its inner and outer causes (high to extreme τ). Intention, reflection, and conscious effort persist without feedback about motion (d without control). After the operation, the experiential field is restored (high B), but a lack of explanation and felt integration prolongs confusion about what happened (high τ) despite reflection or rumination (high d).
For a table with illustrative data for the points in this chart, see this footnote[13].
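For illustration only, such a replay could be encoded as coordinates in the parameter space. The numbers below merely translate the qualitative labels above into hypothetical values; they are not measurements or the footnote's data:

```python
# Hypothetical coordinates (B in bits/s, d in nesting levels, tau in [0, 1]);
# the values only illustrate the qualitative labels in the text.
stages = [
    {"stage": "normal waking",     "B": 40.0, "d": 2.0, "tau": 0.5},
    {"stage": "deep anesthesia",   "B": 0.0,  "d": 0.0, "tau": 1.0},
    {"stage": "partial awareness", "B": 5.0,  "d": 2.0, "tau": 0.9},
    {"stage": "recovery room",     "B": 40.0, "d": 3.0, "tau": 0.8},
]

for s in stages:
    print(f"{s['stage']:>17}  B={s['B']:5.1f}  d={s['d']:.1f}  tau={s['tau']:.1f}")
```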
Generalization
This case is not unique. It doesn't cover the full range of the parameter space, but it illustrates that degrees of awareness form a phase space rather than a single axis. Fragmentary awareness under anesthesia occupies a specific region of this space; meditation and the different stages of sleep occupy others. Instead of asking "were they awake/aware?", you can ask which region they were in.
I believe the combination of these parameters is quite general and useful to describe a wide range of mental phenomena. This doesn't rule out other parameters that could be used to quantify aspects of experience, such as the felt valence or urgency. I am just convinced that these three together span an interesting section of cognition[14] worth further investigation.
I thank the reviewers Jonas Hallgren, Christian Kleineidam, Cameron Berg, Justis Mills, and Chris Pang for their helpful comments.
Technical Appendix
Above, I introduced the parameters in the context of the case study, mostly for intuition-building purposes. As shown below, these parameters are well-studied, and there are multiple lines of research for each of them, even if they are rarely connected in the tight way offered here. I offer parameter definitions that capture the essence of independent lines of research and orthogonal theories of cognition and consciousness. I will explain and motivate each parameter in detail, provide an information-theoretic formalization of the underlying logic, and give different existing and often quite well-studied ways to estimate a proxy for each parameter. At the end, I offer some synthesis based on these parameters beyond the case study.
Working Memory Bandwidth (B)
The bandwidth parameter B measures how much information is stably present in the overall recurrent processing system.
Why should we expect a low-dimensional core?
Many high-dimensional systems effectively reduce to a low-dimensional sub-space, which captures their meaningful long-term dynamics. This is well-known in fluid dynamics[15] and in robotics[16].
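As a toy illustration of this reduction (a made-up linear system, not a model of the brain), we can simulate a few slowly decaying modes hidden among many fast transients and check that PCA recovers the low-dimensional core:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: 2 slowly decaying modes embedded in 50 dimensions.
# After the fast transients die out, the trajectory effectively lives
# in the 2-dimensional subspace spanned by the slow modes.
n, steps = 50, 500
decay = np.full(n, 0.5)           # fast modes: half amplitude per step
decay[0], decay[1] = 0.999, 0.99  # two slow modes of long-term importance
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # random orthonormal mode directions

x = rng.normal(size=n)
snapshots = []
for t in range(steps):
    x = Q @ (decay * (Q.T @ x))   # advance one step in mode coordinates
    if t >= 50:                   # discard the transient
        snapshots.append(x)
X = np.array(snapshots) - np.mean(snapshots, axis=0)

# PCA via SVD: fraction of variance captured by the top 2 components.
svals = np.linalg.svd(X, compute_uv=False)
explained = svals**2 / np.sum(svals**2)
print(explained[:2].sum())        # close to 1: a low-dimensional core
```

The same logic motivates using PCA-style reduction on batteries of cognitive measures, as suggested in the footnotes.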
Predictive Coding (PC) implies a compression of sensory data down to the latents governing the highest level of operation[17]. But PC doesn't say anything about the personal (subjective) level.
So we know that there are low-dimensional latent representations of all the agent's senses, and the question is how those relate to subjective perception. It is clear that the highest level of the predictive hierarchy does not coincide with the subjective experiential field, because that top level consists mostly of slow-changing hyperparameters (and also seems to have more bits and dimensions than the subjective experiential field).
If the personal "level" is not the top of the hierarchy, where is it? PC doesn't try to answer it[17], but we can look at other indications.
Only certain circuit topologies[18] allow stable conditional influence across the multiple systems required for reporting.
A reportable mental state requires coordinated access across perceptual systems, memory, multiple motor systems (including e.g. for language), and multiple others. No single circuit can drive all of these unless it participates in a persistent, self-sustaining loop with other distant circuits that control action and report (at least in biologically plausible models).
Thus, a state is reportable when its encoding variables exert a stable causal influence on the set of systems required for action (which includes communication).
Selection is necessary because the system cannot simultaneously propagate all latent representations through long-range loops. PC processes in the brain update with time constants of 10-50ms[19], but global loops stabilize within Δt≈200 to 500ms[20]. Which subset of representations has the characteristics required to be candidates for selection?
GNW says[21] that the representations participate in a globally coherent explanation (in PC terms: a multi-level configuration of latents that jointly minimize prediction error). It has to be stable with low error and high prediction value (studies show[22] that error signals scale with prediction error × precision) over the duration of a global loop.
The ability to stabilize a representation across multiple regions (perception, memory, motor control, etc) increases coherence, coordination, and communication (a compressed, discrete, reportable state is efficient for mutual prediction).
In such a configuration, a stable state can influence a sequence of states, which enables planning and, e.g., conditional reasoning. More speculatively, sequencing enables long-range credit assignment[23].
Thus, if you want sample-efficient learning (which Predictive Coding predicts), global control, and communicable outputs, and have cost constraints (as in a biologically evolved brain), then you need a bottleneck that selects a few coherent predictive states. What is the bandwidth B of this sequential bottleneck? It should be constrained neuroanatomically by cortical surface area, thalamocortical connectivity, and energetic limits.
Information-theoretic formalization
Formally, we can model B as the capacity in bits per time of a global recurrent state space (aka workspace) to maintain mutually consistent, jointly addressable state vectors.
Let G(t) be the state of the global state space (a high-dimensional vector). Let S(t) = {s1, …, sn} be the set of jointly addressable, concurrently stabilized sub-states (e.g., object tokens, intentions, feelings). Then we can express B using the standard mutual information I as
B := (1/Δt) · max_S min_{0 ≤ u ≤ Δt} I(S(t); G(t + u))

where the maximization ranges over sets S that keep the mutual information above a threshold θ, ensuring stable integration.
Proxies for Working Memory Bandwidth B
All the following are empirical proxies for this conceptual property. Each is noisy and confounded in its own way, but if the assumption that the experiential field corresponds to the sequential processing bottleneck is true, then all should arrive at approximately the same value for comparable Δt.
Nested Observer Depth (d)
The depth parameter d is the number of recursive self-modeling steps for which the system can maintain stable fixed points. Here, "the system models itself" means that some of its internal states encode predictions about other internal states.
Recursive self-modeling is central to several theories of consciousness: Recurrent Processing Theory[30], Higher Order Theory[31], Attention Schema Theory[32], and the Nested Observer Windows model[10].
In practice, deeper recursion might be rarer or less stable, which could be modelled as fractional depth.
Recursion depth is limited by architectural constraints as outlined by the theories above, but the degree to which it is realized depends on development and training[33], i.e. education or other opportunities, such as meditation[34][35].
Information-theoretic formalization
Let M(0) be the first-order representations. Let M(k) = f(M(k−1)) be the k-th self-model. Then

d := max{ k : ‖M(k) − f(M(k))‖ < ε }

i.e., the deepest level at which the self-model remains an approximate fixed point of further self-modeling, within a stability threshold ε.
Proxies for recursion depth
Metacognitive Intransparency (τ)
Above, we established that the parameter τ measures how opaque the process that generates your cognition is to you. More precisely, τ measures the degree of information loss in the mapping from generative states and processes to introspectively accessible meta-representations. Metacognitive intransparency is partly a result of neuroanatomy. We have already established that bandwidth for integrated processing is limited. But total sensory processing has orders of magnitude higher bandwidth, and that includes the bandwidth of the self-feedback channels. Any introspection channel must compress both external and internal channels massively. Thus, there is a floor set by anatomy, or by bandwidth B. Additionally, external signals often carry high valence and thus compete for self-modeling resources. Intransparency is therefore expected to be high unless the external environment is unusually quiet and low-entropy. On the other hand, intransparency can clearly be reduced by training[34], which often involves quietness and repeated practice.
Information-theoretic formalization
Let M be the generative model states and M̂ the introspective model’s estimate of M. Using mutual information I and entropy H, intransparency can be expressed as the normalized information loss:

τ = 1 − I(M; M̂) / H(M)
τ=0 means transparency: introspection tracks generative causes closely. τ=1 means the system’s introspection is blind to its own machinery.
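A small sketch of this quantity on a toy discrete channel (the states and the three introspection channels are made up for illustration):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of the empirical distribution of xs."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def tau(generative, introspected):
    """tau = 1 - I(M; M_hat) / H(M): normalized information loss between
    generative states M and their introspective estimates M_hat."""
    return 1.0 - mutual_information(generative, introspected) / entropy(generative)

# Four hypothetical generative states, uniformly visited.
m = [0, 1, 2, 3] * 100
print(tau(m, m))                    # 0.0: fully transparent introspection
print(tau(m, [x // 2 for x in m]))  # 0.5: half the bits survive introspection
print(tau(m, [0] * len(m)))         # 1.0: introspection blind to the state
```

The middle case shows why τ is graded rather than binary: introspection can preserve coarse categories while losing the finer generative detail.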
Proxies for Metacognitive Intransparency τ
There may also be theoretical limitation for τ[47].
Full case description:
A 33-year-old woman in good physical health presented to the hospital for elective rhinoplasty. During the operation, she became aware that she was awake. She heard the conversation among the surgical team members and felt pressure on bone in her nose, but she did not feel pain. The patient also felt that the breathing tube was pushed up against the inside of her throat, impeding her ability to breathe. She was unable to move but recalls making a “monumental effort” to utter a small groaning noise, which alerted the surgeon to the fact that she was awake. She heard the surgeon verbally acknowledge her condition and offer reassurance that the operation was almost over. It was her impression that the surgeon rushed to finish the operation while full anesthesia was restored, and she later awoke in the recovery room without complications. During the first follow-up visit, the surgeon did not address the situation, so the patient brought it up at the end of the visit. The surgeon seemed surprised and embarrassed that the patient remembered waking up during the operation but could not explain what happened.
General anesthesia suppresses central nervous system activity and results in unconsciousness and total lack of sensation. Her case additionally involved routine neuromuscular blockade with NMBAs, so behavioral signs were suppressed.
Christian Bohringer et al., 2024, Intraoperative Awareness during Rhinoplasty, PSNet/WebM&M
The case study does not make clear whether the patient actually lacked air due to the positioning of the tube or merely felt obstructed. In any case, it is plausible that the patient had no clarity about this due to her partial awareness, in the same way it is possible to feel anxious without knowing about what.
Air hunger (resulting e.g. from high heart rate, "Condition Black") is associated with tunnel vision.
Dave Grossman, On Combat
Arnal et al 2015, Human Screams Occupy a Privileged Niche in the Communication Soundscape
The letters were chosen because B commonly denotes bandwidth, τ information loss by analogy to decay constants, and d for depth (of recursion).
Baars et al, 2023 Global workspace dynamics: cortical “binding and propagation” enables conscious contents
The Unbearable Slowness of Being: Why do we live at 10 bits/s? (press news)
The arousal level of consciousness required for working memory performance: An anaesthesia study
The question of whether we are aware is just not coming up very often. We just know latently that we are aware.
Susan Blackmore, A question of consciousness
Also known as (Ned Block's) Refrigerator Lights Illusion.
Riddle and Schooler, 2024 Hierarchical consciousness: the Nested Observer Windows model
A comprehensive assessment of current methods for measuring metacognition
Is More Emotional Clarity Always Better? An Examination of Curvilinear and Moderated Associations Between Emotional Clarity and Internalizing Symptoms
If you would go beyond these parameters and tried to be more systematic, you'd want to use something like PCA on a larger number of measures of cognition.
Low-dimensional modelling of dynamical systems (page 6)
Data-Driven Spectral Submanifold Reduction for Nonlinear Optimal Control of High-Dimensional Robots
Mark Sprevak, Predictive coding I: Introduction
Conscious Processing and the Global Neuronal Workspace Hypothesis
Buzsáki, 2023 Brain rhythms have come of age
von Seth, 2023 Recurrent connectivity supports higher-level visual and semantic object representations in the brain
Dehaene, A neuronal model of a global workspace in effortful cognitive tasks
PC predictions:
Empirical evidence:
Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?
While "credit assignment" is ML terminology and not clearly known to be implied in sequential or "system 2" reasoning, a related term is "learning by thinking":
Learning by thinking in natural and artificial minds
Britannica Physiology
Perturbational Complexity Index (PCI): "A theoretically based index of consciousness independent of sensory processing and behavior"
Vividness of Visual Imagery Questionnaire
Bucknell Auditory Imagery Scale (BAIS)
VMIQ-2 - Vividness of Movement Imagery Questionnaire 2
Lamme, How neuroscience will change our view on consciousness
Rosenthal, 2005 Consciousness and Mind
Graziano, 2015, The attention schema theory: a mechanistic account of subjective awareness
Developing Metacognition of 5- to 6-Year-Old Children: Evaluating the Effect of a Circling Curriculum Based on Anji Play
The Psychology of Meditation Research and Practice
Mindfulness training with adolescents enhances metacognition and the inhibition of irrelevant stimuli: Evidence from event-related brain potentials
Recht et al 2022 Confidence at the limits of human nested cognition
Vaccaro and Fleming, 2018, Thinking about thinking: A coordinate-based meta-analysis of neuroimaging studies of metacognitive judgements
Chin and Schooler, 2009 Meta-Awareness
The Minimal Phenomenal Experience Project: Towards a minimal-model explanation for consciousness
Maniscalco and Lau 2012 A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings
Where d′ is the standard signal-detection sensitivity; for meta-d′, see above.
Fleming and Lau 2014 How to measure metacognition
https://www.berkeleywellbeing.com/emotional-clarity.html
Flynn et al, 2010, Emotional Clarity Questionnaire
Salovey et al, 1995 Emotional attention, clarity, and repair: Exploring emotional intelligence using the Trait Meta-Mood Scale
Schooler 2002 Re-representing consciousness: dissociations between experience and meta-consciousness
Factor Analysis of the Mystical Experience Questionnaire: A Study of Experiences Occasioned by the Hallucinogen Psilocybin
The hard problem of consciousness is itself a statement of ineffability - it is precisely introspective access into it that we lack:
Even with perfect introspection channels, we might run into limitations, because Löb's theorem shows that it is not possible to have a complete and sound self-model (in the sense of trusting "provable by me" statements about myself). The ways around this are a) to leave the semantics out of the internal representation (i.e., not transmitting that information internally), or b) to add probabilistic uncertainty to the reflective self-representation (making it lossy). Both can be seen as a τ > 0 gap in the reflective self-trust/semantics channel.
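For reference, Löb's theorem in provability-logic notation, where \(\Box P\) reads "P is provable by the system":

```latex
\Box(\Box P \rightarrow P) \rightarrow \Box P
```

A self-model that fully trusted its own proofs would assert \(\Box P \rightarrow P\) for every P, and the theorem then forces it to assert every P, which collapses soundness; hence the two lossy workarounds above.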
Making Sense of Consciousness as Integrated Information: Evolution and Issues of IIT
Stanford Encyclopedia of Philosophy: Eliminative Materialism