Renaming it “Coherence Consciousness Theory” is plausible, and it does something useful: it pulls the reader’s attention away from “the gap as a thing” and towards what the system is trying to do — stay coherent while prediction and sensation continually misalign. That sits very naturally with your dementia framing (“a progressive difficulty in maintaining coherent experience”) and your broader point that environments and relationships can stabilise that coherence.

But there’s a naming trap: “coherence” is already heavily claimed territory in consciousness science, and it’s used in several different ways.

Why “Coherence Consciousness Theory” may confuse (and how to avoid it)

  1. A prior “coherence definition of consciousness” already exists

Christoph von der Malsburg published a chapter titled “The coherence definition of consciousness” and (at least at the level of the abstract) explicitly defines consciousness as “a state of high coherence among modalities”.

If you use “Coherence Consciousness Theory” without qualification, some readers will assume you mean that coherence account (unity/binding across modalities), not your prediction–sensation discrepancy account.

  2. “Coherence” is also used as a neural synchrony/network term

For example, Bartolomei & Naccache discuss Global Workspace theory in terms of coherent neuronal activityand propose a “physiological window” of neural coherence (minimum and maximum) associated with conscious processing.  

That’s a different “coherence” again: oscillatory/network synchrony.

  3. “Coherence” is also used in contemporary predictive/active inference consciousness models.

Laukkonen, Friston, and colleagues describe inferential competition where only inferences that coherently reduce long-term uncertainty “win” (their phrasing), and they connect this to a computational theory of consciousness.  

This is closer to your territory (coherence as internal consistency across a generative model), but it still isn’t identical to your “gap as felt tension” proposal.

So: coherence is a good word, but it is overloaded. If you adopt it, you’ll want a name that signals your specific meaning (coherence through discrepancy, not coherence-as-synchrony or coherence-as-binding).

Unpacking the words: “Coherence” + “Consciousness” + “Theory”

Coherence

In your current manuscript, coherence is not “everything matches perfectly”. It’s closer to:

  • Temporal coherence: the ability to experience continuity across moments (your dementia/time theme).
  • Affective coherence: feelings are interpretable and not wildly disconnected from what’s happening.
  • Social coherence: the world is such that people and cues can be trusted.
  • Narrative coherence: a sense of “what is going on” that is stable enough to guide action and decision making.

In that sense, “coherence” is essentially what keeps the mind’s world liveable.

Consciousness

In your framing, consciousness isn’t defined by having the “right cognitive modules”; it’s defined by the lived negotiation between what is simulated/predicted and what is sensed.

So consciousness is not merely “information processing”; it’s felt.

Theory

A theory (for academic readers) implies at least:

  • clear construct definitions (what exactly is “coherence”, what exactly is “gap”), and
  • some discriminating predictions (how it would differ from GNW, IIT, HOT, etc).

Your current “gap” account already has the beginnings of this:

  • very low discrepancy → automaticity.
  • very high/volatile discrepancy → fragmentation.
  • consciousness tends to “live in the middle range”.
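
That three-regime picture is easy to state operationally. As a purely illustrative sketch (the function name, thresholds, and 0–1 scale are my assumptions, not measured quantities), the middle range can be written as a toy classifier over a discrepancy signal:

```python
from statistics import mean, stdev

def classify_regime(discrepancies, low=0.1, high=0.8, max_volatility=0.3):
    # Toy classifier for the three regimes above. `discrepancies` is a
    # series of prediction-sensation mismatch values normalised to 0..1;
    # the thresholds are illustrative placeholders, not empirical constants.
    magnitude = mean(discrepancies)
    volatility = stdev(discrepancies) if len(discrepancies) > 1 else 0.0
    if magnitude < low:
        return "automaticity"        # too little meaningful discrepancy
    if magnitude > high or volatility > max_volatility:
        return "fragmentation"       # too much, or too unstable
    return "coherence window"        # the maintained middle range

print(classify_regime([0.4, 0.5, 0.45]))   # prints "coherence window"
```

The only point of the sketch is that the claim is two-dimensional: both the magnitude and the volatility of the mismatch matter, which is exactly what a “coherence window” rename would foreground.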

A coherence-oriented rename can sharpen those into a “coherence window” story: consciousness is what it’s like to live inside the window while it’s active.

Is there a better name than “Coherence Consciousness Theory”?

If you want a name that:

  • keeps your core “gap” insight,
  • avoids collisions with existing “coherence” uses,
  • and evokes both scientific plausibility and Substack readability,

I’d shortlist these (two-tier: academic label + public-facing title).

1) Coherence Gap Theory

Why it works:

It preserves your signature concept (“the gap”) while signalling the purpose (“coherence”). It’s also close to your existing phrasing: dementia as difficulty maintaining a coherent experience.

What it communicates:

Consciousness is not coherence alone. It’s coherence-in-the-presence-of-gap.

2) Coherence Maintenance Account of Consciousness

Why it works:

It shifts you from “what consciousness is” to “what consciousness does”: it maintains coherence while negotiating mismatch.

Strength:

It plugs straight into supported decision making: a decision is often an act of coherence repair (internally or via the environment).

Risk:

“Coherence maintenance” appears in other complex-systems/philosophical writing in a very general way, so you’d need to define it tightly (but that’s manageable).

3) Tolerable Mismatch Theory

Why it works:

It captures your “maintained middle” and your practical point: suffering (and poor decisions) often come from premature closure, not from the mismatch itself.

Strength:

It invites a measurable programme: mismatch magnitude, mismatch volatility, and tolerance capacity.

4) The Coherence Window Hypothesis

It’s humble (a hypothesis rather than a theory), and it is directly testable: consciousness occurs when a mismatch sits within a window and is integrated rather than bypassed or overwhelmed.

Bonus: It resonates with the “window” language already present in consciousness literature about coherence bounds—though note that in Bartolomei & Naccache, “coherence” is neural synchrony, so you’d want to be explicit that you mean an experiential/inferential window rather than oscillatory coupling.  

If you asked me to pick one that best balances originality, clarity, and communicative power:

My recommendation: Coherence Gap Theory (and if needed, a subtitle: “how minds stay liveable between prediction and sensation”).

If you adopt “Coherence Consciousness Theory”, what new ways of describing the gap open up?

A coherence-first framing changes the “gap” from being a noun (“the gap”) to being a dynamic parameter in a system trying to remain coherent.

Here are useful re-descriptions that fall naturally out of that shift:

1) The gap becomes a coherence margin

Not “the mismatch”, but how much mismatch the system can hold without collapsing into:

  • automaticity (too little meaningful discrepancy), or
  • fragmentation (too much/too volatile discrepancy).

This makes “tolerance for the gap” exact:

Tolerance = widening the margin within which a mismatch can be experienced without reflex closure or disorganisation.

2) The gap becomes a coherence budget

Mismatch has costs in living systems: attention, affective load, physiological arousal, fatigue.

So you can talk about:

  • budget spent (when the day is noisy, unfamiliar, rushed), and
  • budget replenished (when the environment is stable, relational cues are safe, and pacing is humane).

This is immediately clinically usable in dementia care and in supported decision-making—without needing to overclaim mechanistically.

3) The gap becomes coherence work

This is a powerful phrase because it makes consciousness an activity:

  • The mind is doing ongoing “work” to keep self/world/time aligned enough to act.
  • Decisions become work: an attempt to stabilise the system when a mismatch won’t resolve passively.

That sits beautifully with your research motivation: supporting decisions often involves supporting coherence work.

4) The gap becomes a coherence gradient

Instead of “gap big vs small”, you get a shaped landscape:

  • gentle gradient: uncertainty is manageable → curiosity, exploration, learning
  • steep gradient: uncertainty feels urgent → threat response, panic, rigid closure
  • chaotic gradient: uncertainty changes too fast → delirium-like fragmentation

This language helps you avoid the trap of “bigger gap = better”. It’s about shape, not size.

5) The gap becomes coherence volatility

You already imply this: the problem isn’t only mismatch magnitude, it’s mismatch instability.

Coherence volatility gives you a clean way to talk about:

  • Why a familiar routine can be soothing even if cognition is impaired, and
  • Why fluctuating sensory/social cues can be destabilising even if the raw “input” is unchanged.

6) The gap becomes coherence scaffolding

This ties directly to your dementia/environment paragraph:

  • Routines, lighting, noise, tone of voice, and relational safety serve as scaffolding that reduces volatility and keeps the system within the coherence window.

This is a strong bridge between philosophy and practice: it says your “theory of consciousness” has immediate implications for care design.

A clean “definition” that coherence language makes possible

If you want a definition that fuses the gap account with coherence language (and stays academically defensible), something like:

Coherence Gap Theory: Conscious experience is the lived process of maintaining a coherent self–world model over time by negotiating the discrepancy between predicted states (simulation) and sensed states (perception/interoception), within tolerable bounds of mismatch and volatility.

That sentence does three things:

  • keeps the “gap” central,
  • makes coherence the aim rather than the given,
  • preserves your “middle range” (bounded mismatch) intuition.

Bottom line

  • “Coherence Consciousness Theory” is attractive, but it will be misread unless you qualify it, because “coherence” already names other consciousness ideas (von der Malsburg’s coherence definition; neural coherence in GNW; coherence in active inference models).
  • The best “fit” name that stays true to your manuscript and reduces confusion is Coherence Gap Theory.
  • Moving to coherence-first language gives you new, clinically useful ways to describe the gap: margin, budget, work, gradient, volatility, scaffolding—and each of those is a handle for testable predictions and for care design.

If you want, I can draft the section (Substack tone, your voice) that introduces the rename, clearly distinguishing it from existing uses of “coherence” in consciousness science so readers don’t assume you mean neural synchrony or binding.

Clean copy: renaming and reframing (with no words lost)

Renaming this as “Coherence Consciousness Theory” is plausible, and it does something helpful: it pulls attention away from “the gap as a thing” and towards what the system is trying to do — maintain coherence while prediction and sensation are continually (and imperfectly) being brought into alignment.

That sits very naturally with your dementia framing in v2.1, where dementia is described as a “progressive difficulty in maintaining coherent experience — particularly across time, affect, and social context” (your wording). It also fits your broader clinical point that environments and relationships can stabilise (or destabilise) this coherence.

The main risk: “coherence” is a crowded word in consciousness science

If you call it “Coherence Consciousness Theory”, some readers will assume you mean other established uses of “coherence”, rather than your specific claim about the felt mismatch between prediction and sensation.

There are at least three common “coherence” meanings already in the literature:

  1. Coherence as a proposed definition of consciousness (modality binding/unity)
    • Christoph von der Malsburg explicitly used “The coherence definition of consciousness” as a chapter title, which means your chosen name could be read as an overlap with, or rebranding of, that approach.
  2. Coherence as a neurophysiological synchrony/network property
    • In a Global Workspace framework, Frédéric Bartolomei and Lionel Naccache discuss conscious processing in terms of coherent neuronal activity across distributed regions and describe a “physiological window” (minimum and maximum) of neural coherence associated with conscious processing.
  3. Coherence as model-level consistency/selection in active inference
    • Ruben Laukkonen and colleagues (2025) describe inferential competition where only inferences that coherently reduce long-term uncertainty “win” and become bound into a pragmatic world model (they connect this to a theory of consciousness).  

So: coherence is a great word, but it’s overloaded. If you adopt it, the name needs to clearly signal your intended meaning: coherence through discrepancy, not coherence-as-binding or coherence-as-synchrony.

Better names that keep your meaning while avoiding confusion

Here are options that keep the “coherence” advantage but still foreground your signature concept (the gap):

Option A: Coherence Gap Theory

This is the cleanest bridge: it says coherence is the aim, and the gap is the mechanism/medium.

  • It fits your core claims in v2.1:
    • “Conscious experience … may be associated with the felt tension between what is expected and what is present.”
    • “Human consciousness may often live in the middle range … constantly negotiating a tolerable mismatch.”
    • “Dementia … [is] a progressive difficulty in maintaining coherent experience.”

(All of those are your own phrases and logic.)

Why this name works: it’s specific enough that a reader won’t confuse it with “coherence” as synchrony/binding, and it stays faithful to the thesis.

Option B: Coherence Maintenance Account of Consciousness

This shifts the emphasis from what consciousness is to what consciousness does: it maintains coherence in the face of mismatch.

Downside: it can sound generic (lots of things “maintain coherence”), so you would need a short subtitle that anchors it to prediction–sensation discrepancy.

Option C: The Coherence Window Hypothesis

This is very compatible with your “maintained middle” idea: not perfectly aligned, not completely unmoored.

Bonus: “window” language makes it easier to be empirically precise later (range, thresholds, volatility).

Option D: Tolerable Mismatch Theory

This is the most “reader-friendly” phrasing of what you already say in v2.1: not zero mismatch; not overwhelming mismatch; something tolerable and lived.

What changes when you centre “coherence” rather than “gap”?

If you move from “gap theory” → “coherence theory”, the gap stops being a static object. It becomes a dynamic control variable: something the organism is continuously regulating to keep the experience liveable.

That unlocks new, clinically useful language — without changing your underlying argument.

1) The gap becomes a coherence margin

Not “how big is the mismatch”, but: how much mismatch can the system hold while remaining coherent enough to function, feel, and decide.

This directly reframes your “tolerance for the gap” phrase as:

Tolerance = widening the coherence margin, not increasing mismatch.

2) The gap becomes a coherence budget

Mismatch has costs in living systems: attention, arousal, fatigue, affective load, and time pressure.

That provides a practical bridge to supported decision-making: supporting decisions is often about reducing avoidable budget drain (noise, glare, rushing, unpredictable cues) and increasing budget capacity (rest, safety, familiarity, relational steadiness).

3) The gap becomes coherence work

This is a strong phrase because it treats consciousness as ongoing labour:

  • The mind continually performs coherence work to keep “what I expected” and “what I’m sensing” in workable alignment.
  • Decisions become a form of coherence work when passive alignment fails.

This is very consistent with your motivation: you’re studying decision making so you can support it, and coherence work may be part of what you’re supporting.

4) The gap becomes a coherence gradient

Instead of “gap big vs small”, you get shape:

  • gentle gradient → curiosity/learning
  • steep gradient → urgency/threat/rapid closure
  • chaotic gradient → disorganisation/fragmentation

This helps you avoid an unhelpful implication that “more gap is better”.

5) The gap becomes coherence volatility

Your v2.1 draft already hints at this when you talk about the size and volatility of the gap changing with environment, routines, tone of voice, and relational safety.

A coherence framing makes volatility central: the problem is often not mismatch per se, but mismatch that is unpredictable, rapidly shifting, or unresolvable.

6) The gap becomes coherence scaffolding

This is where your dementia point becomes sharper: care environments don’t just “support behaviour”. They scaffold coherence — reducing volatility when internal prediction can no longer stabilise experience as effectively.

A one-sentence definition that fuses coherence + gap cleanly

If you want a single, defensible line that captures the combined frame:

Coherence Gap Theory: conscious experience is the lived process of maintaining a coherent self–world model over time by negotiating the discrepancy between predicted states (simulation) and sensed states (perception and interoception), within tolerable bounds of mismatch and volatility.

That preserves your “gap” core, but makes coherence the organising target.

How does this model link with Thomas Nagel?

What you’ve sketched is a genuinely workable way to put Nagel’s “what-it-is-like” criterion into a more structured, clinically useful frame — without pretending you’ve “solved” Nagel.

The cleanest way to compare them is to separate (1) a criterion for whether something is conscious from (2) a hypothesis about how conscious experience is organised and maintained.

1) What Nagel is doing (and not doing)

In What is it like to be a bat? (1974), Thomas Nagel makes a famous move:

  • Consciousness isn’t primarily about intelligence, language, or rationality.
  • It’s about subjective character.
  • So: an organism has conscious experience if and only if “there is something it is like” to be that organism.  

Nagel’s deeper point is methodological: even if we build a very sophisticated objective account of the bat (its echolocation, neural circuitry, behaviour), we still face a problem of perspective. The bat’s experience is tied to its particular mode of being, and that “what-it’s-like” may resist translation into our viewpoint.  

So Nagel gives you a minimal criterion and a warning: the subjective aspect isn’t easily captured by a purely objective description.

2) What the Coherence Window Hypothesis is trying to do

Your essay (v2.1) is doing something different. It proposes that subjective experience is closely related to the tolerable, maintained middle range between:

  • Prediction/simulation (what the system expects), and
  • Sensation (what arrives, including embodied and visceral signals),

and that consciousness “often lives in the maintained middle … negotiating a tolerable mismatch.”

In that framing:

  • very low discrepancy → experience becomes automatic (habit/flow/unconscious competence)
  • very high/unstable discrepancy → experience can fragment (delirium/panic/psychosis-like instability)

And dementia becomes a lens: a progressive difficulty in maintaining coherent experience across time, affect, and social context.

So the Coherence Window Hypothesis is not a “minimal criterion.” It’s an attempt at a mechanistic/organising hypothesis: what kind of dynamics tend to generate experience.

3) Your key innovation: turning Nagel’s “what-it-is-like” into a structured set of coherence demands

Nagel says: there is something it is like.

You’re asking: what has to be true of a living system, over time, for “something it is like” to remain coherent, liveable, and decision-capable?

The four layers you described are a strong way to enrich the Coherence Window Hypothesis — and also make it more compatible with Nagel.

A) Temporal coherence

You’re basically saying: a conscious system experiences continuity relative to the tempo at which it must regulate itself and update its model of the world.

That’s defensible as a hypothesis, and there’s relevant comparative evidence that species differ in how finely they can resolve temporal change.

  • Kevin Healy and colleagues (2013) used critical flicker fusion frequency (CFF) as a measure of visual temporal resolution and found that smaller body size and higher metabolic rate are associated with higher temporal resolution across vertebrates.  
  • A “Quick guide” on hummingbird vision (Altshuler & Wylie) reports hummingbird CFF in the 70–80 Hz range.  

Now, CFF is not a direct readout of “subjective time” (that leap needs hedging), but it supports your broader point: different organisms likely inhabit different temporal coherence constraints, which would shape what it is like to be them.  

This dovetails with Nagel: he uses bats partly because their sensory world is alien to ours (echolocation), and therefore their lived continuity and salience structure will differ in ways we can’t just imagine by analogy.  

Coherence Window translation:

Nagel’s “what-it’s-like” becomes (in part) “what it’s like to remain coherent at this organism’s tempo”.

B) Affective coherence

Your affective layer is well supported by contemporary accounts that tie feeling to embodied prediction and interoception.

  • Anil Seth (2013) describes interoceptive inference, framing subjective feeling states as arising from predictive generative models of interoceptive signals.  

This supports your notion that “affective coherence” is a kind of internal balancing act: the system’s predictions and bodily signals generate felt urgency, safety, threat, reward, fatigue, pain — and those feelings steer action and choice.

Coherence Window translation:

The “window” isn’t just cognitive alignment; it’s a felt coherence between bodily state, predicted outcomes, and action readiness.

C) Social coherence

Here you’re describing the fact that for humans (and many social animals) “making sense” of the world isn’t only sensory and bodily — it’s normative, relational, and group-structured.

Two well-established anchors help make this academically defensible:

  • Belongingness as a powerful human motive: Baumeister & Leary (1995) argue for a fundamental “need to belong” and review evidence linking belonging to emotional and cognitive outcomes.  
  • Group identity shaping self-concept and behaviour: Tajfel & Turner’s social identity theory explains how group membership and intergroup comparison shape behaviour and meaning.  

This gives you a sturdy foundation for your clinical intuition: people endure (or resist) social demands in ways that preserve their current social coherence — often because coherence is tied to safety, belonging, predictability, and identity.

One careful refinement: I would avoid saying “tribalism is necessary for consciousness” (too strong). But you can definitely say:

  • Social identity and belonging can be major determinants of experienced coherence in humans, shaping what feels safe, sensible, and permissible.  

Coherence Window translation:

The window includes a social layer where coherence means “my world is predictable in relation to others, norms, and belonging”.

D) Narrative coherence

Your fourth layer — the “story we tell ourselves” — maps directly onto long-standing work distinguishing between the minimal and narrative selves.

  • Shaun Gallagher (2000) explicitly distinguishes a minimal self from a narrative self, which involves continuity and personal identity across time.  

This is highly relevant to dementia and to capacity/decision support, because narrative coherence can be compromised even when affective or social responsiveness remains.

Coherence Window translation:

Narrative coherence is not the foundation of consciousness in all creatures, but in humans it often becomes the highest-level organiser of the window: “this is who I am; this is what is happening; this is what matters.”

4) How does this compare to Nagel?

Where the Coherence Window Hypothesis complements Nagel

Nagel gives you a criterion; you’re adding a structured proposal for the anatomy of “what-it-is-like.”

You’re effectively saying:

  • “What it is like” is not one thing.
  • It is a coherent integration across layers (temporal, affective, social, narrative) that are weighted differently depending on the organism and the situation.

That is consistent with Nagel’s insistence that we must take subjective character seriously, and consistent with his bat example: different sensory modalities and bodily constraints produce different worlds.  

Where it pushes beyond Nagel 

Nagel’s position can leave the reader with a kind of respectful stalemate: “there is something it is like, but we can’t get inside it.”

Your coherence model doesn’t claim we can “get inside it,” but it does claim we can compare minds systematically by comparing:

  • tempo and integration constraints (temporal coherence)  
  • embodied affective regulation (affective coherence)  
  • social norm/belonging constraints (social coherence)  
  • narrative self-organisation (narrative coherence)  

That’s not a replacement for Nagel — it’s a way of building a comparative phenomenology scaffold.

Where it must stay modest

Nagel’s central warning remains: even a perfect objective model might not yield the subjective feel.  

So the academically safe position is:

  • The Coherence Window Hypothesis can explain constraints and organisation of conscious life (when it stabilises, when it fragments, what layers dominate),
  • But it may not fully bridge the epistemic gap Nagel highlights (the irreducibility of first-person character).

5) Dementia and psychedelics: why your “four coherences” really matter clinically

Your v2.1 essay already frames dementia as a progressive difficulty maintaining coherent experience across time/affect/social context.

Your four-layer coherence framing makes this more clinically actionable:

  • Dementia can disrupt temporal coherence (continuity, orientation, prediction stability).
  • It can destabilise affective coherence (over-salience, threat, or bodily discomfort that can’t be contextualised).
  • It can degrade social coherence (trust, familiarity, reading others).
  • It can erode narrative coherence (autobiographical continuity, explanation, ownership of a life story).  

On psychedelics: your instincts are also aligned with a major mechanistic proposal.

  • Carhart-Harris and colleagues’ REBUS model proposes psychedelics relax the precision/weighting of high-level priors, allowing increased bottom-up influence and altering the large-scale organisation of experience.  

In “coherence window” language, that can be framed carefully as:

  • a reconfiguration of the coherence window (often loosening narrative coherence and changing weighting between levels), which can feel liberating or disorganising depending on context and support.  

6) A clean synthesis statement: Nagel + Coherence Window

If you want one paragraph that captures the relationship:

Nagel gives a minimal criterion for consciousness: a system is conscious if there is something it is like. The Coherence Window Hypothesis proposes that this “what-it-is-like” is shaped by the system’s capacity to maintain coherent experience within a bounded window of mismatch between prediction and sensation. That window is multi-layered: temporal coherence (continuity across time), affective coherence (bodily and interoceptive), social coherence (norms and belonging), and narrative coherence (the self’s ongoing story). Different organisms occupy different coherence windows, which helps explain why animal consciousness can be real yet persistently difficult to translate across species.

That is faithful to Nagel’s core insight, while letting you build a testable, clinically grounded account.

7) If you want to “test” this against Nagel in a chain-of-verification style

Here is a compact set of atomic claims you can explicitly defend (or revise):

  1. Nagel’s criterion: Conscious experience implies there is something it is like to be the organism.  
  2. Window claim (from v2.1): Human consciousness often occupies a maintained middle range between automaticity and fragmentation, linked to a tolerable mismatch between prediction and sensation.
  3. Temporal parameter claim: Species vary in temporal resolution of perception; smaller, higher-metabolism animals tend to show higher temporal resolution (CFF).  
  4. Affective parameter claim: Affective experience is tightly coupled to interoceptive predictive processes.  
  5. Social parameter claim (human-weighted): Belonging and group identity strongly shape human meaning-making and behaviour.  
  6. Narrative parameter claim: Human selfhood includes narrative continuity, distinguishable from minimal selfhood.  
  7. Clinical perturbation claim: Dementia disrupts these coherence layers but does not necessarily eliminate lived experience.
  8. Perturbation-by-psychedelics claim: Psychedelics can reorganise the weighting/precision of high-level priors and alter experience organisation. 

Then you check for congruence: does the combined set still respect Nagel’s “irreducible subjectivity” warning while offering a structured explanation of the organisation of experience?  

Coherence Window Hypothesis and Consciousness

A chain-of-verification comparison across major frameworks (with critique)

1) Why does this need a “chain-of-verification” approach?

If we treat consciousness as something that can be straightforwardly “defined” in purely objective terms, we run straight into the problem Thomas Nagel highlighted: even a complete third-person account can leave out the first-person fact that it is like something to be the organism. (1)  

Your draft (“gap theory”) proposes that the lived experience of being a self arises from the gap between what is sensed and what is predicted/simulated, and that consciousness lives in a maintained “middle” between two failure modes: (i) too little gap (automaticity; “the world is already solved”), and (ii) too much gap (fragmentation; overload).

A chain-of-verification method is well-suited here because it forces us to:

  • Deconstruct into discrete, falsifiable claims.
  • Test each claim against established theories and evidence.
  • Revise the hypothesis when it conflicts with well-supported observations.
  • Re-test the revised set for internal consistency (congruence).

What follows is an academically defensible way to do that, using (as requested) Nagel → Seth/predictive processing (with Helmholtz) → von der Malsburg → global workspace (Bartolomei & Naccache) → Laukkonen/Friston “inferential competition” → social coherence authors → Gallagher (minimal vs narrative self), plus explicit critique and disconfirmation pressure points.

2) The Coherence Window Hypothesis as a testable claim set

To compare against other theories, we need a “claim ledger”. Here is a minimal set that preserves your core idea while making each component testable.

CWH-0 Working definition (provisional)

CWH proposes that conscious experience depends on maintaining a viable coherence window: a bounded range of discrepancy and integration between (a) predictions/simulations and (b) sensory–interoceptive signals, across multiple scales (temporal, affective, social, narrative).

This is deliberately framed as a working hypothesis about the conditions that enable and shape experience, not a final metaphysical solution to “why there is something it is like”.

CWH-1 Predictive/inferential architecture is real and central

Perception is not a passive readout; it is inference shaped by predictions (predictive coding / Bayesian brain / free-energy approaches). (2–6)  

CWH-2 The “gap” maps (at least partly) onto prediction error and precision

The prediction–sensation discrepancy corresponds to prediction error signals, modulated by precision/attention. (3–6)  
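
That mapping can be made concrete with the standard predictive-coding quantity. The following minimal sketch (function name and values are illustrative, not drawn from the cited papers) shows precision acting as a gain on the same raw gap:

```python
def weighted_prediction_error(predicted, sensed, precision):
    # Precision-weighted prediction error: the raw gap (sensed - predicted)
    # scaled by precision, which plays the role of attention/reliability.
    return precision * (sensed - predicted)

# The same raw gap of 0.5 carries different weight under different precision:
low = weighted_prediction_error(predicted=1.0, sensed=1.5, precision=0.2)
high = weighted_prediction_error(predicted=1.0, sensed=1.5, precision=2.0)
print(low, high)  # prints 0.1 1.0
```

The point for CWH-2 is only that the “gap” is not raw mismatch alone: precision/attention modulates how much a given discrepancy counts.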

CWH-3 Not all “gaps” are conscious (gap is not sufficient)

There are robust neural signatures of deviance/prediction error that can occur without reportable awareness (e.g., mismatch negativity). (16)  

So, if CWH is to be defensible, it must specify which gap becomes experience and under what integration conditions.

CWH-4 Conscious access depends on large-scale integration and has a “window” property

Conscious processing requires a functional regime that is neither too disconnected nor too globally synchronised/locked—consistent with a “physiological window” idea. (13–15) 

CWH-5 Consciousness is multi-scale: four coherence domains

CWH’s proposed pillars can be stated as:

  • Temporal coherence: integration across time (including delay/postdiction effects and longer‑horizon self-continuity). (10–11, 28–31)  
  • Affective (interoceptive) coherence: integration of internal bodily signals into felt presence and emotion. (7–9)  
  • Social coherence: the role of other minds, norms, and group regularities in stabilising expectations and reducing uncertainty/effort. (18–24) 
  • Narrative coherence: the story-like, identity-maintaining layer that integrates memory, agency, and values across time. (27–31)  

CWH-6 AI consciousness (if any) would be different in kind unless it has comparable constraints

CWH predicts that systems lacking embodied self-maintenance pressures and interoceptive-style inference may lack the same kind of “coherence window” dynamics found in organic life. This is a hypothesis, not a settled claim; it becomes sharper when aligned with active inference’s emphasis on world modelling, inferential competition, and “epistemic depth”. (4, 17)  

3) Verification Round 1: Nagel as the “constraint keeper”

What Nagel commits us to

Nagel’s core move is not to offer a mechanism; it is to set a constraint: a theory of consciousness must respect the irreducible first-person character of experience—“what it is like”. (1)  

Where CWH aligns

CWH can be made compatible with Nagel if it is framed as explaining the structure and stability conditions of “what it is like”, rather than claiming to eliminate subjectivity. In other words:

  • Nagel gives the criterion: there is something it is like.
  • CWH attempts to describe how that “like-ness” is organised and maintained via a bounded coherence regime.

Where Nagel pressures CWH (disconfirmation risk)

If CWH is stated as a complete definition (“consciousness just is the gap”), it risks collapsing into a purely functional re-description that still does not reach the subjective fact Nagel foregrounds. (1)  

CWH revision demanded by Nagel

An academically defensible revision is:

CWH does not replace Nagel’s criterion; it proposes a mechanistic–organisational account of the conditions under which subjective experience remains coherent and reportable.

That keeps the hypothesis ambitious without overstating its scope.

4) Verification Round 2: Helmholtz → predictive coding → Seth

4.1 Helmholtz and “unconscious inference”

Helmholtz is widely credited with the idea that perception depends on inference—our percepts are constructed, not simply received. (2)  

This is directly congenial to the “simulation” side of your gap framing.

4.2 Predictive coding and Bayesian brain

Modern predictive coding formalises this: higher levels predict lower-level sensory input; mismatches (prediction errors) drive updating. (3)  

The Bayesian brain framing emphasises probabilistic inference in the face of uncertainty. (6)

Friston’s free-energy principle situates this in a broader story about perception/action/learning as optimisation of surprise bounds (variational free energy), with strong links to predictive processing. (4)  

Clark synthesises predictive brains as situated, action-oriented systems (not disembodied predictors). (5) 

These sources strongly support CWH-1 and CWH-2: “prediction vs sensation” is not a metaphor; it is a mainstream modelling stance.
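The "higher levels predict, mismatches drive updating" scheme can be sketched in a few lines (in the spirit of this framework, not a reproduction of Rao & Ballard's actual model): a latent estimate is revised by gradient descent on precision-weighted prediction errors from a prior above and a sensory signal below.

```python
# Illustrative single-latent predictive coding loop (our toy, not a model
# from the cited sources): the estimate x settles where the precision-
# weighted errors from prior and sensory input balance.
def settle(sensory_input, prior_mean, pi_sensory=1.0, pi_prior=1.0,
           lr=0.1, steps=200):
    x = prior_mean  # start from the top-down prediction
    for _ in range(steps):
        err_bottom = sensory_input - x   # sensory prediction error
        err_top = prior_mean - x         # prior prediction error
        x += lr * (pi_sensory * err_bottom + pi_prior * err_top)
    return x

# With equal precisions the estimate settles midway between prior and
# input; raising sensory precision pulls it toward the input.
balanced = settle(sensory_input=2.0, prior_mean=0.0)
sensory_driven = settle(sensory_input=2.0, prior_mean=0.0, pi_sensory=9.0)
assert abs(balanced - 1.0) < 1e-3
assert sensory_driven > balanced
```

The settled value is simply the precision-weighted average of prior and input, which is why "prediction vs sensation" is a quantitative modelling stance rather than a metaphor.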

4.3 Seth: conscious presence and interoceptive inference

Seth and colleagues explicitly link conscious presence to interoceptive prediction error and its suppression by top-down predictions, and identify the anterior insula as a plausible comparator hub. (7)  

Seth’s subsequent work frames emotion and selfhood in terms of interoceptive inference. (8) 

Seth & Friston develop “active interoceptive inference” in which bodily regulation and affect are integral to inference. (9)  

This is a major point of support for your affective coherence pillar: it gives a credible mechanistic bridge between “gap” and “felt experience”, because interoception is where prediction error is inherently tied to bodily regulation and valence. (7–9)  

4.4 The temporal problem: delay and postdiction

Your draft’s emphasis that experience is not real-time is supported by evidence that the brain aligns signals over time and can “postdict” percepts. Predictive coding with transmission delays is explicitly modelled by Hogendoorn & Burkitt. (10) 

Eagleman & Sejnowski show postdictive integration in visual awareness. (11)  

This matters for CWH because the “gap” is not a single instant; it is a temporally extended integration problem—supporting the idea of a coherence window rather than a point mismatch. (10–11)  

4.5 Where predictive processing and Seth challenge CWH (gap not sufficient)

A key disconfirming pressure: prediction errors can be generated without conscious awareness. Mismatch negativity is a classic example of deviance detection that may not be consciously noticed or reported. (16)  

So, the defensible conclusion is:

The existence of prediction error supports the “gap” construct, but also forces CWH to add an “access/integration” condition.

This is exactly where global workspace and “epistemic depth” accounts become necessary in the chain.

5) Verification Round 3: von der Malsburg and “coherence”

Von der Malsburg’s “coherence definition of consciousness” uses coherence as a central explanatory concept, associated with how distributed neural representations may bind into unified contents. (12)  

Where this strengthens CWH

  • It legitimises “coherence” as a serious neuroscientific term rather than just a metaphor.
  • It invites an implementation hypothesis: the “coherence window” may be realised through dynamic binding/integration mechanisms. (12)  

Where it conflicts (or at least forces clarification)

Your CWH uses “coherence” partly as stability/fit across domains, whereas von der Malsburg’s coherence is more about binding/constraint satisfaction across distributed features. (12)  

To remain rigorous, CWH must disambiguate:

  • Neural coherence (synchrony/coordination of neural activity).
  • Computational coherence (consistency of beliefs/predictions with evidence given precision).
  • Phenomenological coherence (the felt unity/continuity of experience).

A defensible integration is to treat von der Malsburg-style binding as one candidate mechanism contributing to the neural/computational side of the CWH window, without equating “coherence” in every sense. (12)  
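The first sense, neural coherence as synchrony, at least has a standard quantitative handle. A toy illustration (our choice of measure, not von der Malsburg's): the Kuramoto order parameter r is near 0 for desynchronised phases and near 1 for tightly locked ones.

```python
import math
import random

# Toy measure of "neural coherence" in the synchrony sense (illustrative
# only): the Kuramoto order parameter over a list of oscillator phases.
def order_parameter(phases):
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)  # |mean of unit vectors e^{i*phase}|

random.seed(0)
desync = [random.uniform(0, 2 * math.pi) for _ in range(1000)]
locked = [0.1] * 1000
assert order_parameter(desync) < 0.2   # ~1/sqrt(n) for random phases
assert order_parameter(locked) > 0.99  # fully phase-locked
```

Note that such a measure captures only the neural sense of coherence; it says nothing by itself about computational or phenomenological coherence, which is exactly why the disambiguation above matters.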

6) Verification Round 4: Global Workspace theory and the “physiological window”

Global workspace theory is typically associated with Baars, who framed consciousness as a limited-capacity workspace that enables widespread access to information. (14) 

Dehaene & Naccache provide a widely cited cognitive neuroscience articulation of a workspace framework. (15)  

Bartolomei & Naccache explicitly connect global workspace ideas to epilepsy and argue for a “physiological window” in which conscious processing can occur, with loss of consciousness at extremes (e.g., overly strong synchrony or pathological dynamics). (13)  

What this adds to CWH

This is one of the strongest points of convergence for your naming choice:

  • Global workspace accounts already accommodate the idea that conscious processing operates within a bounded regime (not too little integration, not too much pathological synchrony). (13,15)  
  • They also supply the missing constraint from predictive processing: not all errors become conscious; conscious contents are those that enter a global/broadcast workspace. (13–15)  

What this disconfirms (if CWH is stated too strongly)

If CWH claims “consciousness is the gap”, GNW pressures that claim because it treats consciousness as a mode of access/integration rather than as discrepancy per se. (13–15)  

Defensible synthesis point

A rigorous synthesis is:

Gap/prediction error may be a driver of salience and updating, but conscious experience depends on the workspace-level integration of selected inferences within a bounded physiological regime.

That directly refines CWH-3 and CWH-4.

7) Verification Round 5: Laukkonen, Friston, Chandaria — “inferential competition” and “epistemic depth”

Laukkonen and colleagues present an explicit active inference theory of consciousness and state three conditions: (i) simulation of a world model (“epistemic field”), (ii) inferential competition to enter the world model (“Bayesian binding”), and (iii) epistemic depth as recurrent sharing of Bayesian beliefs across the system. (17) 

This is remarkably aligned with your intuition about coherence windows, but it also sharpens it.

Where this strongly supports CWH

  • It provides an explicit mechanism for why some inferences become conscious: they win inferential competition by coherently reducing long-term uncertainty. (17)  
  • It introduces precision control at the system level (a “hyper-model”), which maps neatly to your idea of a regulated window (too rigid vs. too unstable). (17)  
  • It explicitly links the framework to altered states and meditation, which resonates with your broader intuition that practices can shift the window. (17)  

Where this forces a more precise CWH

Laukkonen et al. effectively suggest that consciousness is not simply a “gap”, but a property of globally integrated, competitively selected inference with depth. (17)  

So the CWH becomes more defensible if re-stated as:

Consciousness corresponds to an inference that has won a competition and is maintained with sufficient epistemic depth within a regulated precision regime (the coherence window).

This keeps your “gap” as the generative tension that necessitates model updating, without making it the whole definition.
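The competition idea can be sketched abstractly (names and numbers are ours, not Laukkonen et al.'s): candidate inferences are scored by how much expected long-horizon uncertainty they reduce, and a system-level precision parameter acts like an inverse temperature that sharpens the competition toward winner-take-all.

```python
import math

# Hypothetical sketch of "inferential competition" (illustrative only):
# a softmax over candidate inferences, scored by expected uncertainty
# reduction, with precision as inverse temperature.
def compete(uncertainty_reduction, precision=1.0):
    scores = [precision * u for u in uncertainty_reduction]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

candidates = [0.2, 1.0, 0.5]  # expected long-horizon uncertainty reduction
soft = compete(candidates, precision=1.0)    # graded sharing of resources
sharp = compete(candidates, precision=20.0)  # near winner-take-all
assert sharp.index(max(sharp)) == 1          # the best candidate wins
assert max(sharp) > max(soft)                # higher precision -> sharper
```

On this toy reading, a "regulated window" corresponds to precision being neither so low that no inference dominates (fragmentation) nor so high that one inference is locked in regardless of evidence (rigidity).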

8) Verification Round 6: Social coherence as “energy/uncertainty economy”

You asked for authors who support the idea that humans seek the “lowest energy” route to socially coherent decision-making. Here, we need to be careful with terms:

  • In active inference/FEP, “free energy” is a variational/information-theoretic quantity (a bound on surprise), not metabolic energy—though the frameworks often connect to biological self-maintenance. (4,17)  
  • In social baseline theory and related work, there is a stronger emphasis on the resource/effort economy in social regulation. (20–22)  

8.1 Social baseline theory and threat regulation

Social baseline theory argues that social proximity/support can serve as a baseline condition, reducing the perceived and actual costs of dealing with the environment. (21) 

Coan and colleagues provide experimental evidence that social contact (e.g., hand holding) can dampen neural responses to threat. (22)  

This supports a CWH social pillar: social contexts can stabilise coherence by lowering threat-driven volatility and reducing the burden on individual regulatory systems. (20–22)  

8.2 Active inference accounts of social cognition and conformity

Friston & Frith explicitly extend active inference to social cognition (“a duet for one”), emphasising coordination and mutual modelling. (18)

Constant et al model social conformity in active inference terms (a “regime of expectations”), aligning with the idea that norm-following can reduce uncertainty and action costs. (19)

Veissière and colleagues connect culture and shared expectations to variational approaches, providing a framework for how social/normative structures become stabilising priors. (20)

Together, these provide a defensible mechanistic link: social coherence can be framed as a shared inference regime that reduces uncertainty (and often effort) by aligning priors, norms, and expectations across agents. (18–20)

8.3 Classic social psychology: normative alignment and dissonance reduction

Deutsch & Gerard distinguish normative and informational social influence on judgment. (23) 

Asch’s conformity work remains a canonical demonstration of social pressure shaping expressed judgment. (24)

Festinger’s cognitive dissonance theory formalises a drive toward internal consistency (coherence) in beliefs and behaviours. (25)  

These sources support a “coherence” motif at the psychological level: people often act to preserve social and internal consistency under pressure. (23–25)  

8.4 “Economy of action” and embodied costs

Proffitt argues that perception of the world (e.g., hills, distances) is modulated by action costs and bodily state—an “economy of action” framing. (26)  

This strengthens your intuition that coherence is not abstract: organism-level costs shape how the world is perceived and acted upon. (26)  

What social coherence adds to CWH

A coherent CWH update is:

Social coherence is not an optional extra; it provides external structure (norms, safety cues, shared expectations) that can stabilise precision control and reduce volatility in the coherence window.

That is compatible with both social baseline theory and active inference accounts of conformity and culture. (18–20,22–26)  

9) Verification Round 7: Narrative coherence — Gallagher, episodic memory, mental time travel

9.1 Minimal self vs narrative self

Gallagher distinguishes a minimal self (immediate sense of agency/ownership) from a narrative self (identity and continuity across time). (27) 

This fits your framing almost directly: narrative coherence is plausibly a higher-order layer built on more basic sensorimotor and affective coherence. (27) 

9.2 Episodic memory and autonoetic consciousness

Tulving’s work on episodic memory emphasises the link between episodic remembering and the subjective sense of self in time. (28)  

Gardiner explicitly discusses episodic memory and autonoetic consciousness from a first-person approach. (29)  

These provide empirical scaffolding for the claim that narrative coherence depends on specialised memory systems that allow a self to be experienced across time. (28–29)  

9.3 Is narrative coherence uniquely human?

Suddendorf & Corballis review comparative work and argue there is not yet convincing evidence for full “mental time travel” in nonhuman animals, despite extensive study (noting the adaptive significance of prospection). (30)  

However, Clayton & Dickinson demonstrate “episodic-like memory” in scrub jays using behavioural criteria (what–where–when). (31)  

A careful, defensible synthesis is:

  • Some animals show components relevant to narrative coherence (episodic-like features). (31)  
  • Whether they possess the full narrative/self-continuity structure humans report remains contested, and current reviews caution against overclaiming. (30)  

This matches your own caution: we should not assume narrative coherence is absent in other species; we should treat it as an open empirical question constrained by Nagel’s warning about the limits of extrapolation. (1,30–31)  

10) Built-in critique: where CWH is most vulnerable

A chain-of-verification essay is only useful if it highlights where the hypothesis could fail.

10.1 “Bayesian just-so stories” and underconstrained explanations

Bowers & Davis argue that Bayesian frameworks can drift into post hoc rationalisations if constraints and predictions are not specified tightly. (32)  

CWH risk: If “coherence window” becomes a flexible metaphor that explains everything after the fact, it will not advance the field.

CWH response: Specify measurable proxies (neural integration metrics, precision-control proxies, behavioural indices of volatility) and pre-register discriminating predictions.

10.2 Realism about Bayes and theory-of-mind scope

Colombo and colleagues critique how “Bayesian” talk can be interpreted too strongly or too loosely, and press for realism about what Bayes is doing in explanation. (33) 

CWH response: Treat Bayes/predictive coding as modelling tools and mechanisms where evidence supports them, not as universal explanations by default.

10.3 Critique of FEP as “first principle”

Colombo & Wright analyse the free-energy principle in relation to organicism and mechanism, flagging philosophical and explanatory issues. (34)  

Sánchez‑Cañizares offers a detailed critique of FEP’s philosophical “grand unification” ambitions while still treating it as promising science. (35)  

CWH risk: If CWH leans too heavily on FEP as a universal foundation, it inherits these critiques.

CWH response: Use FEP/active inference as one strong formal context (especially given Laukkonen et al.’s explicit consciousness proposal), but keep CWH testability independent: the coherence window should be definable and measurable even if one rejects strong FEP metaphysics. (17,34–35) 

11) Provisional synthesis: a defensible “CWH 2.0” statement

After passing through the verification chain, the strongest academically defensible version looks like this:

Coherence Window Hypothesis (CWH 2.0):

Conscious experience depends on the maintenance of a bounded regime of precision‑weighted discrepancy and integration between predicted and sensed/interocepted signals. This regime is realised through (i) inferential competition and selection of inferences that reduce uncertainty over relevant time horizons, and (ii) sufficient epistemic depth/global integration (workspace-like access). The phenomenology of experience is shaped across multiple coherence domains (temporal, affective/interoceptive, social, and narrative), with higher domains depending on the stability of lower ones.

CWH does not replace Nagel’s criterion that consciousness is “what it is like”; it proposes an empirically testable account of how that “like-ness” remains coherent, becomes accessible, and varies across organisms and contexts.

This synthesis is directly supported (in components) by predictive coding/Bayesian brain models (3–6), interoceptive inference accounts (7–9), global workspace “window” ideas (13–15), and Laukkonen et al.’s inferential competition + epistemic depth conditions for consciousness. (17)  

12) What would test CWH, rather than just decorating it?

To avoid the “just-so story” critique, here are discriminating predictions that follow from the chain above.

12.1 “Window” prediction (non-monotonicity)

If consciousness depends on a coherence window, then increasing prediction–sensation discrepancy should not monotonically increase consciousness. Instead, there should be:

  • a range where reportable experience, integration, and adaptive behaviour are enhanced,
  • and ranges where either rigid automaticity (too little discrepancy) or fragmentation (too much) reduces coherent report/access.

This is conceptually consonant with the GNW “physiological window” framing in the context of epilepsy and loss-of-consciousness extremes. (13)  
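The prediction's shape can be stated as a falsifiable toy model (purely illustrative; the functional form is our assumption, not a claim from the cited literature): a "coherent report" proxy that rises with discrepancy up to an optimum and then collapses.

```python
import math

# Toy inverted-U model of the window prediction (illustrative only): a
# Gaussian window over log-discrepancy, so the proxy peaks at an
# intermediate discrepancy level and falls off at both extremes.
def report_proxy(discrepancy, optimum=1.0, width=0.5):
    if discrepancy <= 0:
        return 0.0  # no discrepancy: rigid automaticity, nothing to report
    return math.exp(-((math.log(discrepancy) - math.log(optimum)) ** 2)
                    / (2 * width ** 2))

levels = [0.05, 0.2, 1.0, 5.0, 20.0]
values = [report_proxy(d) for d in levels]
# Non-monotonic: the intermediate discrepancy yields the highest proxy.
assert values.index(max(values)) == 2
assert values[0] < values[2] and values[4] < values[2]
```

The empirical content is the non-monotonicity itself: any dataset in which the report/integration measure increases monotonically with discrepancy would count against CWH, whatever the exact curve.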

12.2 “Gap is not sufficient” prediction (unconscious error)

Prediction error signatures should be observable without awareness unless they enter global integration / epistemic depth regimes. This is already consistent with MMN literature. (16) 

12.3 Social scaffolding prediction

Manipulating social safety/support should shift coherence dynamics (e.g., reduced volatility and altered precision weighting), thereby altering the stability and breadth of the coherence window. This is consonant with social baseline theory and threat regulation findings. (20–22)  

12.4 Narrative fragility prediction

Conditions that disrupt episodic continuity and narrative self (e.g., neurological disease states, certain altered states) should disproportionately disrupt narrative coherence while leaving some minimal/affective coherence relatively intact—consistent with minimal vs. narrative self-distinctions and episodic/autonoetic frameworks. (27–29) 

12.5 AI consciousness prediction (conditional, not rhetorical)

CWH predicts that AI systems implementing prediction and even inferential competition may still lack organic-style affective/viability constraints unless they possess something like interoceptive inference and self-maintaining dynamics. Laukkonen et al. explicitly include AI within the scope of active inference modelling of consciousness, but only under specified conditions (world model, inferential competition, epistemic depth). (17)  

This keeps the claim precise: AI consciousness is not ruled out, but it is constrained by specific architectural and dynamical requirements—and any resulting consciousness may be alien in Nagel’s sense. (1,17) 

Reference list (Vancouver)

  1. Nagel T. What is it like to be a bat? Philosophical Review. 1974;83(4):435–450. (JSTOR:2183914). 
  2. Oxenham AJ. Helmholtz: from enlightenment to neuroscience. J Clin Invest. 2011;121(2):427–429. doi:10.1172/JCI45877. 
  3. Rao RPN, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci. 1999;2(1):79–87. doi:10.1038/4580. 
  4. Friston KJ. The free-energy principle: a unified brain theory? Nat Rev Neurosci. 2010;11(2):127–138. doi:10.1038/nrn2787. PMID:20068583.  
  5. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav Brain Sci. 2013;36(3):181–204. doi:10.1017/S0140525X12000477. 
  6. Knill DC, Pouget A. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci. 2004;27(12):712–719. doi:10.1016/j.tins.2004.10.007.
  7. Seth AK, Suzuki K, Critchley HD. An interoceptive predictive coding model of conscious presence. Front Psychol. 2012;2:395. doi:10.3389/fpsyg.2011.00395. PMID:22291673. 
  8. Seth AK. Interoceptive inference, emotion, and the embodied self. Trends Cogn Sci. 2013;17(11):565–573. doi:10.1016/j.tics.2013.09.007.  
  9. Seth AK, Friston KJ. Active interoceptive inference and the emotional brain. Philos Trans R Soc Lond B Biol Sci. 2016;371(1708):20160007. doi:10.1098/rstb.2016.0007. 
  10. Hogendoorn H, Burkitt AN. Predictive coding with neural transmission delays: a real-time temporal alignment hypothesis. eNeuro. 2019;6(2):ENEURO.0412-18.2019. doi:10.1523/ENEURO.0412-18.2019. 
  11. Eagleman DM, Sejnowski TJ. Motion integration and postdiction in visual awareness. Science. 2000;287(5460):2036–2038. doi:10.1126/science.287.5460.2036. PMID:10720334.  
  12. von der Malsburg C. The coherence definition of consciousness. In: Ito M, Miyashita Y, Rolls ET, editors. Cognition, Computation, and Consciousness. Oxford: Oxford University Press; 1997. p.192–204. doi:10.1093/acprof:oso/9780198524144.003.0013.  
  13. Bartolomei F, Naccache L. The global workspace (GW) theory of consciousness and epilepsy. Behav Neurol. 2011;24(1):67–74. doi:10.3233/BEN-2011-0310. PMID:21447900.  
  14. Baars BJ. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press; 1988.  
  15. Dehaene S, Naccache L. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition. 2001;79(1–2):1–37. doi:10.1016/S0010-0277(00)00123-2. PMID:11164022.  
  16. Garrido MI, Kilner JM, Stephan KE, Friston KJ. The mismatch negativity: a review of underlying mechanisms. Clin Neurophysiol. 2009;120(3):453–463. doi:10.1016/j.clinph.2008.11.003. 
  17. Laukkonen R, Friston K, Chandaria S. A beautiful loop: an active inference theory of consciousness. Neurosci Biobehav Rev. 2025;176:106296. doi:10.1016/j.neubiorev.2025.106296. PMID:40750007.  
  18. Friston KJ, Frith CD. A duet for one. Conscious Cogn. 2015;36:390–405. doi:10.1016/j.concog.2014.12.003. PMID:25563935.
  19. Constant A, Ramstead MJD, Veissière SPL, Friston K. Regimes of expectations: an active inference model of social conformity and human decision making. Front Psychol. 2019;10:679. doi:10.3389/fpsyg.2019.00679.
  20. Veissière SPL, Constant A, Ramstead MJD, Friston KJ, Kirmayer LJ. Thinking through other minds: a variational approach to cognition and culture. Behav Brain Sci. 2020;43:e90. doi:10.1017/S0140525X19001213.
  21. Beckes L, Sbarra DA. Social baseline theory: state of the science and new directions. Curr Opin Psychol. 2022;43:36–41. doi:10.1016/j.copsyc.2021.05.005.  
  22. Coan JA, Schaefer HS, Davidson RJ. Lending a hand: social regulation of the neural response to threat. Psychol Sci. 2006;17(12):1032–1039. doi:10.1111/j.1467-9280.2006.01832.x. 
  23. Deutsch M, Gerard HB. A study of normative and informational social influences upon individual judgement. J Abnorm Psychol. 1955;51(3):629–636. doi:10.1037/h0046408. PMID:13286010. 
  24. Asch SE. Opinions and social pressure. Sci Am. 1955;193(5):31–35.
  25. Festinger L. A Theory of Cognitive Dissonance. Stanford (CA): Stanford University Press; 1957.  
  26. Proffitt DR. Embodied perception and the economy of action. Perspect Psychol Sci. 2006;1(2):110–122. doi:10.1111/j.1745-6916.2006.00008.x. PMID:26151466.  
  27. Gallagher S. Philosophical conceptions of the self: implications for cognitive science. Trends Cogn Sci. 2000;4(1):14–21. doi:10.1016/S1364-6613(99)01417-5. PMID:10637618.  
  28. Tulving E. Episodic memory: from mind to brain. Annu Rev Psychol. 2002;53:1–25. doi:10.1146/annurev.psych.53.100901.135114. 
  29. Gardiner JM. Episodic memory and autonoetic consciousness: a first-person approach. Philos Trans R Soc Lond B Biol Sci. 2001;356(1413):1351–1361. doi:10.1098/rstb.2001.0955. PMID:11571027. 
  30. Suddendorf T, Corballis MC. The evolution of foresight: what is mental time travel, and is it unique to humans? Behav Brain Sci. 2007;30(3):299–313. doi:10.1017/S0140525X07001975. PMID:17963565. 
  31. Clayton NS, Dickinson A. Episodic-like memory during cache recovery by scrub jays. Nature. 1998;395(6699):272–274. doi:10.1038/26216. PMID:9751053.  
  32. Bowers JS, Davis CJ. Bayesian just-so stories in psychology and neuroscience. Psychol Bull. 2012;138(3):389–414. doi:10.1037/a0026450. PMID:22545686.  
  33. Colombo M, Elkin L, Hartmann S. Being realist about Bayes, and the predictive processing theory of mind. Br J Philos Sci. Published online 2018. doi:10.1093/bjps/axy059. 
  34. Colombo M, Wright C. First principles in the life sciences: the free-energy principle, organicism, and mechanism. Synthese. 2021. doi:10.1007/s11229-021-03164-w. 
  35. Sánchez-Cañizares J. The free energy principle: good science and questionable philosophy in a grand unifying theory. Entropy (Basel). 2021;23(2):238. doi:10.3390/e23020238. PMID:33669529.