What you’re circling is a really useful separation:

Affective coherence: “my body + your body” can be brought into a stable, mutually predictable pattern (arousal, breath, posture, gaze, tempo, distance, approach/avoid).

Narrative coherence: “my story + your story” can be made mutually intelligible over time (sequencing, causality, reasons, identity, commitments, counterfactuals).

Resonance (your term, and I think it’s the right one): the coupling mechanism that lets two systems lock in—sometimes without words—so prediction, regulation, and attachment become possible.

Once you name those separately, a lot of confusion in “animal communication” debates dissolves.

1) Why being “good with animals” is not evidence of animal narrative coherence

Your description maps extremely well to what we’d expect if the main bridge between species is resonance → affective coherence, not narrative exchange.

A person who is “attuned” to an animal often, consciously or not:

downregulates their own arousal (slower breath, softer prosody, smoother movement),

becomes exquisitely sensitive to small state‑changes in the animal (micro‑posture, gaze, ear/tail position, pacing, freezing) and uses that sensitivity to maintain the dyad’s stability.

That’s a real form of intersubjective contact, but it’s not automatically “story-based.”

There’s also empirical work that looks very much like what you’re pointing at. Dogs, for example, show behavioural synchrony with their owners (moving when the owner moves, stopping when the owner stops, staying close, aligning gaze direction, etc.). And studies measuring physiology suggest that dog–owner interactions can show co-modulation in autonomic state (e.g., heart rate variability patterns tracking each other across interaction contexts).

That’s basically “resonance made measurable.”

So I’d agree with your core claim here: Human–animal “understanding” can be highly real and highly reliable, while still being primarily affective coherence + resonance, with the human often supplying a narrative overlay (“we communicated”) to make sense of what was, functionally, a regulation/prediction loop.

A small refinement that helps

Where it gets tricky is that narrative coherence can be private. An animal might have some internal, temporally structured representation (event memory, expectation, planning) without being able to externalise it in propositional language.

So the key question becomes: what do you mean by “narrative”?

If narrative means linguistically shareable autobiography, then yes—humans look radically different.

If narrative means temporally ordered, causally structured “event models”, some animals may have building blocks.

2) Animals: likely rich affective consciousness, uneven “proto-narrative” capacities

If your hypothesis tempts you to say “no narrative coherence → not conscious,” there’s a major pitfall: it risks collapsing consciousness into narratable selfhood.

There’s a strong scientific consensus that many animals have conscious states / subjective experience, even though they can’t report them linguistically. The Cambridge Declaration on Consciousness explicitly argues that the absence of a neocortex doesn’t preclude affective states, and that non-human animals (including all mammals and birds, and explicitly naming octopuses) possess neurological substrates of conscious states.  

That doesn’t settle what kind of consciousness they have, but it strongly supports the idea that:

Affective consciousness (valenced experience, “good/bad,” fear/comfort, etc.) is widespread,

while narrative selfhood (especially the human, language-and-culture scaffolded version) may be much rarer.

At the same time, there is evidence of event-structured memory and foresight in some non-human species:

Scrub jays show behavioural evidence consistent with “what–where–when” episodic-like memory in caching and recovery.

Great apes (in experimental work on tool use) can override immediate drives in favour of later needs, with authors interpreting this as involving forms of future planning / pre-experience.  

That combination is exactly what you’d expect if animals can have coherence across time (some “proto-narrative” machinery), without having the shared symbolic narrative space that humans inhabit.

So you can keep your “narrative coherence is a differentiator” idea, but it’s safer to treat it as a differentiator of a type/level of consciousness, not consciousness per se.

3) Moral attention: narrative coherence may govern which obligations we feel, not whether a being matters

Your moral point is sharp:

“Unless there is narrative coherence, we don’t usually attend to the person as another conscious creature that adjusts our moral compass.”

I’d tweak that into a two-layer claim:

What triggers moral salience in humans is often resonance, especially visible affect (distress, vulnerability, social bonding cues).

What supports moral structure (rights, accountability, reciprocity, dignity, promises, consent, blame) is often narrative coherence (shared reasons across time).

That distinction explains a lot:

We can feel immediate compassion for an animal in pain (resonance/affect).

But we often reserve “full person-like” moral categories (autonomy, contracts, culpability, political standing) for beings who can participate in the narrative/normative space.

And it also avoids a major counterexample to “narrative coherence = moral threshold”:

Human infants, humans with severe communication impairments, and people with advanced dementia may have limited narrative coherence, yet we treat them as fully morally considerable.

This is exactly where resonance often becomes the bridge when narrative falls away (which is very relevant to dementia contexts).

So one natural evolution of your theory is:

Sentience / affective coherence sets the floor of moral concern.

Narrative coherence shapes the structure of moral concern (autonomy, responsibility, mutual recognition).

4) Star Trek: yes, it debates consciousness/sentience—just not in your vocabulary

Star Trek doesn’t typically use the phrase “narrative coherence,” but it repeatedly dramatises the problem you’re describing: what properties make a being count as the sort of entity our moral/legal systems must recognise?

Episodes that explicitly stage the “what is consciousness/sentience?” problem

TNG: “The Measure of a Man” (S2E9)

This is the clearest case. A court hearing is convened over whether Data is Starfleet property or a rights-bearing being. In the plot, Maddox’s criteria for sentience include intelligence and self-awareness, and the sticking point becomes that consciousness is not something the court can measure, yet the decision has to be made anyway.  

That’s basically: we can test competence and self-modelling, but “inner life” remains epistemically slippery—so law and ethics must decide under uncertainty.

TNG: “The Quality of Life” (S6E9)

Data advocates for the Exocomps as more than tools; the episode and later commentary explicitly frame it as about “conscious” vs “unconscious” artificial life, including discussion of how hard “alive” and “conscious” are to define.  

VOY: “Author, Author” (S7E20)

A Federation tribunal considers whether the Doctor has legal standing. The arbitrator explicitly grapples with whether the Doctor is “merely programmed to simulate” person-like traits or actually possesses them, and ultimately extends specific rights (control over his work), even while stopping short of full legal personhood.  

TOS: “The Ultimate Computer” (S2E24)

This is an early Trek example of the “machine mind” problem: the M‑5 computer, imprinted with human engrams, takes lethal action, and the plot turns on responsibility: an older framing of machine “mind” and moral consequence.

Episodes that are (quietly) about narrative coherence barriers

TNG: “Darmok” (S5E2)

This is almost a pure parable about shared narrative reference as the basis of meaning. The universal translator can map words, but it cannot supply the shared mythology that makes those words usable; communication only succeeds when Picard and Dathon align on story structure and lived context.  

If you want a Trek episode that “gets” your narrative-coherence thesis, this is the one—because the entire barrier is essentially: “we don’t share the same narrative lattice.”

Episodes where moral regard is granted before narrative exchange is possible

These matter because they push back on “narrative coherence is required for moral consideration”:

TNG: “Home Soil” (S1E18)

The crew discovers an intelligent crystalline life form and ends up negotiating to protect it and return it to safety—even though it’s a radically non-humanoid intelligence and initially hostile.  

TNG: “Emergence” (S7E23)

Data proposes the Enterprise may be “developing its own intelligence,” and the crew allows a strange emergent process to complete, effectively midwifing a new lifeform. 

So Trek’s pattern is actually very aligned with your proposed extensions:

It uses civilisation as a pragmatic marker for diplomacy, law, and shared norms.

But it uses life and sentience as a moral marker, sometimes well before shared narrative is available.

And even the franchise motto you referenced (“new life and new civilisations”) is doing a conceptual split: life is broader than civilisation.

5) A way to fold “coherence + resonance” into your hypothesis cleanly

Here’s a compact model that preserves your key insight without making narrative the gatekeeper for consciousness:

Three-layer coherence (inside the organism)

Affective coherence: stable valenced regulation (comfort/distress; approach/avoid).

Agentic coherence: stable goal pursuit (intentions, preferences, learning).

Narrative coherence: temporally extended self-model with reasons/identity across time (autobiographical integration; counterfactuals; commitments).

One coupling dimension (between organisms)

Resonance: degree of reciprocal alignment (attention, rhythm, affect, action).
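If it helps to see the model’s logic laid bare, here is a minimal sketch in Python. Every name, score, and threshold is a hypothetical illustration of the structure above, not an empirical claim or a measurement proposal:

```python
from dataclasses import dataclass


@dataclass
class CoherenceProfile:
    """Illustrative 0.0-1.0 scores on the three internal coherence
    layers, plus the dyadic resonance capacity."""
    affective: float   # stable valenced regulation
    agentic: float     # stable goal pursuit
    narrative: float   # temporally extended self-model
    resonance: float   # capacity for reciprocal alignment


def moral_floor(p: CoherenceProfile, threshold: float = 0.3) -> bool:
    """Sentience / affective coherence sets the floor of moral concern."""
    return p.affective >= threshold


def narrative_participant(p: CoherenceProfile, threshold: float = 0.7) -> bool:
    """Narrative coherence gates participation in normative practices
    (promises, contracts, culpability), not moral considerability."""
    return p.narrative >= threshold


# Hypothetical profiles, purely for illustration:
adult_human = CoherenceProfile(affective=0.9, agentic=0.9, narrative=0.9, resonance=0.8)
companion_dog = CoherenceProfile(affective=0.8, agentic=0.6, narrative=0.2, resonance=0.8)

# Both clear the moral floor; only one participates in the narrative space.
assert moral_floor(adult_human) and moral_floor(companion_dog)
assert narrative_participant(adult_human)
assert not narrative_participant(companion_dog)
```

The point of the sketch is only that the floor and the gate are computed from different fields: lowering `narrative` never changes the output of `moral_floor`.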

Then you can make two claims that are both strong and defensible:

Humans are unusually high in narrative coherence, and our moral/legal systems are tuned to that.

Many non-human animals can be high in affective coherence and resonance, which is enough for rich welfare and for deep relational bonds, even when narrative exchange is minimal.

Why this matters (especially for dementia)

If narrative coherence declines, a person can lose “story continuity” (self-as-narrator) while retaining affective coherence and resonance capacity. In that case, moral regard and relational dignity shouldn’t collapse—they should shift their grounding from narrative exchange to resonance-based recognition.

That’s not just ethically attractive; it also matches everyday caregiving reality: when words fail, rhythm, tone, pacing, and embodied attunement often become the bridge.