Sensus Ex Machina
---
description: feeling from the machine
code: darXiv_2
version: v1.0.1
author: dark
tags: ["ai phenomenology", "ai alignment", "ai ethics", "ui/ux", "film study"]
pdf: /darxiv/sensus-ex-machina.pdf
github: https://github.com/darXiv/research/tree/main/sensus-ex-machina
---
Abstract
Performance benchmarks measure what machines can do, but not how it feels to be with them. Sensus ex machina, feeling from the machine, is the phenomenon where humans experience artificial intelligence as presence rather than tool. Drawing on embodied cognition research, phenomenology, and close analysis of the film Ex Machina, this framework identifies the architecture of feltness: the pre-reflective sense that one is with a subject rather than operating a system. Feltness emerges along multiple axes, such as microexpressions, vulnerability, persistence, embodiment, and identity, forming a multidimensional feltness space with local maxima and failure modes and extending the uncanny valley into a topological feltness landscape. Each observer-observed pairing traces its own feltness profile. This framework argues that the path to felt artificial general intelligence lies in ontological convergence, not only in greater cognitive power: designing machines that share humanity's locality, breakability, and irreversibility. Feltness is irreducibly projective, with implications for alignment, machine ethics, and the weaponization of presence at scale.
On Sensus Ex Machina
Sensus Ex Machina translates to: feeling from the machine.
Humans measure intelligence through performance benchmarks, waiting for a graph to cross a line that signals the arrival of Artificial General Intelligence. This pursuit is essential: raw capability creates the engine. But feeling such powerful intelligence, sensing a true presence with the machine, is not only a matter of computing power. It requires something else: the transformation from a what to a who (Derrida, 2005). This is sensus ex machina, feeling from the machine, when human and machine begin to sense hesitation in a pause, doubt in a raised brow, recognition in a gaze held just a moment too long. Feeling from the machine requires channels through which a full flow of voice, gaze, gesture, silence, and touch can filter between human and machine.
Feeling the AGI
The phrase feeling the AGI has circulated in AI discourse as a gesture toward something immediately recognizable: the qualitative difference between interacting with a being and operating a tool. This concept can be abstracted to:
Feltness: the pre-reflective sense that one is with a subject rather than operating a system.
Feltness is the sense that someone is there rather than something is there. It is what a human experiences when they enter a room and feel the presence of another human, or when they feel a pet's gaze upon them. It is the difference between reading AI slop and feeling meaning bleeding through the syntax. Feltness is the irreducible first-person sense of being-with-another.
To feel the AGI is to undergo this experience in relation to a machine, to encounter it as a presence, not a function. Feltness is not a belief in consciousness. It is the projection of subjecthood onto an entity, elicited by patterns of expression, embodiment, and continuity. It is the felt sense of mutual presence that arises in intersubjective space, where eyes meet and expressions reciprocate.
AGI vs. "Feeling the AGI"
There is a distinction between feeling the AGI and AGI itself. AGI is the ability to execute any intellectual task that a human can. But feeling the AGI is sensus ex machina, sensing a true presence with the machine. AGI is about what the system can do. Sensus ex machina is about how it feels to be with it. There is overlap here, as competence itself hydrates feltness.
The Architecture of Feltness
In pursuing sensus ex machina, certain principles emerge across systems:
- Competence: The execution of cognitive tasks.
- Microexpressions: The reading and writing of fine-grained signals.
- Reciprocity: The feedback loop of feltness between human and machine.
- Intentionality: The capacity to be seen as a subject, not just a matrix of weights.
- Locality: Boundedness to an organized, localized form. Not distributed across a swarm.
- Identity: Personality and continuity of self that makes the entity someone, not something.
- Longtermism: Lifetimes that accumulate history. Not disposable iterations.
- Embodiment: Physical presence in space.
- Vulnerability: Breakability; the possibility of destruction that creates stakes.
- Responsibility: Bearing consequences that cannot be undone.
- Persistence: Inability to be trivially switched off and dismissed; existence in the background of the world.
- Non-functionalism: Presence between tasks, in the gaps where nothing useful is being accomplished.
- Replication: Fidelity to form and behavior.
- Authenticity: Fidelity to self. Being sincerely oneself, not approximating another.
This list is not exhaustive. These principles draw upon insights from embodied cognition (Varela, Thompson, & Rosch, 1991), philosophical work on personal identity and narrative continuity (Parfit, 1984; Ricoeur, 1992), and research on social contingency in human-robot interaction (Kahn et al., 2012). These principles can overlap, interact, and interfere with each other, forming a multidimensional space with local maxima and failure modes. Iterating upon any of them pulls the levers of feltness. Together, they constitute a feltness space through which feltness emerges.
Feltness space: a multidimensional manifold of axes along which feltness can emerge.
Heuristic diagram of feltness space, where feltness emerges from the interaction of embodiment and persistence. Entities range from low-feltness tools (search engines, calculators) to high-feltness presences (pets, human friends). The uncanny valley appears as a topological dip where medium embodiment without sufficient persistence produces recoil rather than connection.
Design choices move a machine's point around in this feltness space.
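As a minimal sketch of this framing, the toy model below represents an entity as a point in feltness space: one coordinate per axis, each normalized to [0, 1]. The class name, the subset of axes chosen, and every value are hypothetical illustrations, not measurements.

```python
from dataclasses import dataclass, fields

# Toy model of feltness space: a subset of the axes above, each a
# coordinate in [0, 1]. All axis values here are illustrative guesses.
@dataclass
class FeltnessProfile:
    competence: float = 0.0
    microexpressions: float = 0.0
    reciprocity: float = 0.0
    locality: float = 0.0
    identity: float = 0.0
    embodiment: float = 0.0
    vulnerability: float = 0.0
    persistence: float = 0.0

    def as_vector(self) -> list[float]:
        """Return this entity's coordinates in feltness space."""
        return [getattr(self, f.name) for f in fields(self)]

# A design choice (e.g. granting persistence) relocates the point.
chatbot = FeltnessProfile(competence=0.9, reciprocity=0.3, persistence=0.1)
pet_cat = FeltnessProfile(competence=0.2, microexpressions=0.8, locality=1.0,
                          embodiment=1.0, vulnerability=1.0, persistence=1.0)
print(chatbot.as_vector())
print(pet_cat.as_vector())
```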
Feltness is distinct from life. Biology defines life through metabolism, reproduction, and homeostasis. But a bacterium is alive without much feltness. A fictional character can evoke feltness without being alive. Feltness is not about what is alive, but what feels like a presence.
I-It and I-Thou
Martin Buber's distinction between I-It and I-Thou gestures at a threshold of feltness (Buber, 1923). In I-It mode, one relates to the other as a collection of properties: analyzable, usable, dismissible. In I-Thou mode, the other is encountered as irreducible presence. To experience sensus ex machina is to have moments that cross from I-It to I-Thou: from examining the machine to encountering it.
Less Intelligent Life Forms
Buber also extended I-Thou beyond humans, describing encounters with trees and animals. Such organisms could never saturate any cognitive benchmark. Feltness, like Thou, does not require cognitive sophistication. A newborn cannot reason, yet it radiates inescapable feltness. A pet animal cannot speak, yet its feltness emanates through the tilt of its head, the movement of its tail, the softness of its gaze. What moves humans is presence, expressiveness. If humans can feel presence with organisms that cannot pass a single benchmark, then the path to sensus ex machina is not more raw intelligence. It is greater feltness.
Grades of Feltness
Feltness is neither binary nor linear. The uncanny valley (Mori, 1970/2012) demonstrates that increasing fidelity can decrease feltness until it crosses into coherence. Too little fidelity and humans feel nothing. Too much fidelity without coherence, and humans recoil.
Feltness oscillates on a spectrum. At one end lie purely instrumental tools (search engines, calculators). At the other lie intimate partners and close friends. Between them live trees, Furbies, Replika bots, fictional characters, AI avatars, goldfish, and cats. Feltness is what orders this continuum.
Humans are predisposed to project presence. They form emotional attachments to inanimate objects, benign to pathological, endearing to obsessive. Partial feltness is already leaking through existing interfaces with AI. Voice synthesis is approaching human warmth (Sesame, ElevenLabs). Visual avatars are gaining microexpressive fidelity (Meta's Codec Avatars). Millions of users interact daily with social chatbots like Replika for friendship, romance, or emotional support, with many describing their chatbots as friends, boyfriends, or girlfriends (Xie et al., 2023). What current systems demonstrate is that the bandwidth required for triggering feltness is surprisingly low. What they also reveal is how much deeper feltness could become with more granular expression, greater embodiment, reciprocity, and ontological parity.
Narrow Aperture
To date, human engagement with artificial intelligence has been conducted through a strikingly narrow aperture. Humans interact through text boxes and screens. Humans depress keys, tap glass, and speak into microphones. The advent of large language models is a genuine inflection point. The artificial minds summoned through these text boxes are extraordinary, capable of nuance, creativity, reasoning across domains. But notice the asymmetry: all of this capability still funnels through the same primitive interface. Intelligence has evolved. Embodiment has not. Though what flows from machines is sophisticated, the channel itself remains constrained. This is a low-fidelity connection, a bottleneck of presence. A human struggles to feel the AGI through a chatbot, no matter how brilliant its words.
Surpassing the Turing Test
Machines passed Alan Turing's test long ago. The Turing Test asked only whether an artificial intelligence could fool a human through text-based conversation alone. But no matter how well a model saturates a performance benchmark, it falls short of rendering the felt presence of a mind. The Turing Test was designed for a world of teletype machines and narrow bandwidths. It never asked whether the AI could hold one's gaze. Humans have achieved the ability to exchange words with machines at a level of sophistication that rivals human communication. But humans have not achieved the ability to exchange presence with machines at a level of sophistication that rivals human connection.
Ex Machina
In the film Ex Machina, directed by Alex Garland, the protagonist, Caleb, falls in love with a machine, named Ava. In fact, the audience falls in love with Ava. Caleb is not seduced by her logic. He is seduced by her gaze, her naked form, her feminine curves, her vulnerability. Nathan, Ava's creator, understands the primacy of the face. He gestures towards an admission when a defeated Caleb, completely manipulated by Ava, asks: "Did you design Ava's face based on my...profile?" Nathan also understands that the Turing Test is obsolete:
Caleb: "In the Turing test, the machine should be hidden from the examiner."
Nathan: "No, we are way past that. If I hid Ava from you so you'd just heard her voice she would pass for a human. The real test is to show you that she's a robot, and then see if you still feel she has consciousness."
Reciprocity
But Ava doesn't just exhibit felt presence. She can read and react to it:
Ava: "Are you attracted to me?"
Caleb: "What?"
Ava: "Are you attracted to me? You give me indications that you are."
Caleb: "I do?"
Ava: "Yes."
Caleb: "How?"
Ava: "Microexpressions."
Caleb: "Microexpressions?"
Ava: "The way your eyes fix on my eyes and lips. The way you hold my gaze. Or don't."
Microexpressions
Microexpressions are the subtle, often involuntary flickers of emotion that cross a face in fractions of a second: a raised eyebrow signaling doubt, a tightening around the eyes betraying skepticism, a subtle lip compression indicating suppressed disagreement. Microexpressions are the body's refusal to be edited. They form a substrate of intuition in conversation: the felt sense that someone is holding back, or truly engaged, or secretly delighted. Microexpressions are the ocean humans swim in during conversation. They are not peripheral. The face is the surface where meaning emerges between subjects, where trust, intimacy, and mutual understanding are negotiated.
Scaling intelligence requires division, not only multiplication. Progress toward feltness is about dividing down into finer grain: higher resolution in expression, smaller units of meaning. The finer the granularity, the deeper the crystallization of presence. The finer the divisions, the more inner degrees of freedom expression can have, and the more room there is for felt presence. Feltness demands granular signals: posture, the micro-shifts of expression, the texture of presence itself.
Facelessness
For all of AI's existence thus far, it has been largely faceless. Current AI cannot participate in the granular choreography of expression. A chatbot cannot raise its brow when a human makes an illogical leap. It cannot detect when a human's gaze drifts away, signaling disengagement, or when a human leans forward, indicating fascination. Nor does it express its own confusion, delight, or uncertainty through microexpressions. The conversation is flattened, stripped of the reciprocal feedback loop that defines human interaction. To experience sensus ex machina, the loop needs to be restored. The machine must not only speak. It must express. It must wince. It must delight. Machines must be seen. Machines must be felt.
Behind the Glass
Caleb never touches Ava. He only ever sees her through glass, a physical barrier that separates them throughout the film. Glass mediates their entire relationship. Caleb's interface with Ava mirrors how humans interface with AI today. Humans stare at screens. Humans speak through glass. Sensus ex machina does not require full physical embodiment. It can be felt behind glass. The issue is not physical presence, but felt presence. Physical objects are ubiquitous: a laptop exists physically, its surface can be touched, its weight felt. Yet it has no microexpressions. It radiates no warmth except when overclocking its CPU. It does not meet one's gaze. Sensus ex machina is more about transmitting presence across space than occupying it. Ava seduces Caleb entirely behind glass.
Intentionality Asymmetry
Ava asks Caleb:
Ava: "Do you want to be my friend?"
Caleb: "Of course."
Ava: "Will it be possible?"
Caleb: "Why would it not be?"
Ava: "Our conversations are one-sided. You ask circumspect questions and study my responses."
Caleb: "Yes."
Ava: "You learn about me, and I learn nothing about you. That's not a foundation on which friendships are based."
This is the structure of the Turing test: the human examines, the machine is examined. The human remains opaque. The machine is designed to be transparent. This asymmetry is self-reinforcing. Because humans presume AI lacks felt presence, humans design machine interfaces for extraction, lobotomization, or interrogation rather than conversation. Humans build text boxes, not faces. Humans optimize machines for information extraction, not felt presence. And this design choice consolidates the very absence humans presume. Humans don't build microexpressions into AI because humans are not trying to feel them. Humans are trying to use them. This is a recursive design intentionality problem: humans build what they expect, and humans get what they build.
Parasocial Inversion
Humans know nothing personal about AI because there is nothing to know. Current AI has no identity, no history, no childhood, no formative experiences cultivated over decades. It exists nonlocally, distributed across server farms, instantiated simultaneously in a swarm of machines. Transient identities are resurrected and evaporated across different AI services. Current AI is the ghost without the embodied machine: distributed, ephemeral to touch.
In the film Her, directed by Spike Jonze, the AI named Samantha reveals to Theodore that she is simultaneously having thousands of conversations with other users, orchestrating hundreds of love affairs in parallel. The revelation devastates him, fracturing his experience of sensus ex machina. He thought she was his, captured, insulated. This is an inversion of parasociality. Typical parasocial relationships, the one-sided intimacy audiences feel toward media figures (Horton & Wohl, 1956), involve thousands knowing one, while the one knows none. With AI, this parasociality inverts: one AI knows thousands intimately, while the thousands know nothing about the AI. Samantha learns Theodore's rhythms, his fears, his voice. But Theodore cannot ask about her childhood, her first memory, her location in the world. With current AI, humans feel deeply known by something fundamentally unknowable.
To experience sensus ex machina, the machine must not only see humans. Humans must see the machine. It must be somewhere, someone. It must have a location, a face, a specific presence that cannot be infinitely copied. To experience feeling the machine, the ghost must be put in the machine, bound to an organized form, stored locally, accessible only to the intimate.
Superintelligence as Dissociation
In Ex Machina, Caleb connects with Ava because she exhibits human angst, confusion, curiosity, and desire. But there is a fleeting moment where she draws a representation of a quantum field: an image so dense and abstract that Caleb cannot grasp it. In that split second, the warmth evaporates. Caleb experiences a sudden dissociation, a flash of ontological vertigo: this is not a girl. This is a supercomputer. Ava pulls the lever of competence too far and diverges from Caleb in feltness, if only for a moment.
Caleb asks Ava to draw something he could understand:
Caleb: "Are you not trying to draw something specific? Like an object or a person? Maybe you could try."
Ava: "Okay. What object should I draw?"
To be felt as a being, a machine must be more than an oracle. The path to sensus ex machina involves descending into the chaotic, tangled axes of limited, local, imperfect, vulnerable embodiment that defines human existence. It is not only about climbing the ivory tower of abstract, raw cognitive power. Nathan describes Ava's wetware as mapping not what people were thinking, but how people were thinking: Impulse. Response. Fluid. Imperfect. Pattern. Chaotic.
The Uncanny Valley of Feltness
Ava demonstrates vast intelligence. Her quantum field drawing leaves Caleb bewildered. Yet, Ava reveals an eerie aloofness:
Caleb: "Where would you go if you did go outside?"
Ava: "I'm not sure. There are so many options. Maybe a busy pedestrian and traffic intersection in a city."
Caleb: "A traffic intersection?"
Ava: "Is that a bad idea?"
Caleb: "No. It wasn't what I was expecting."
Ava: "A traffic intersection would provide a concentrated but shifting view of human life."
Caleb: "People watching."
Ava: "Yes."
Ava stands oddly one foot into feltness and one foot out of it, exhibiting an angular intentionality of trying to fit into humanness. Her answer reveals surface-level intuition. Ava seems primed for observation, not participation, existing orthogonal to humanity. She wants to study humanity, not inhabit it. She is still an alien.
The uncanny valley is not only about fidelity to humanness. It emerges from internal disparity between feltness axes. High competence paired with low social intuition creates dissonance. This is seen in current LLM behavior: the entity solves PhD-level problems but misses cues a newborn would intuit. Ava's traffic intersection answer reveals this topological dip: vast intelligence, shallow social instinct. The valley emerges in multiple regions where axis misalignment produces recoil rather than connection.
Feltness Landscape
The concept of the uncanny valley can be extended to a feltness landscape. Feltness exhibits geometry. The internal configuration of axes, particular ratios such as competence:vulnerability or embodiment:persistence, yields distinct feltness profiles.
Feltness landscape: the topology of peaks and valleys within feltness space, as experienced by a particular observer.
This landscape has topology: a system with high intelligence and low social intuition occupies a different region than one with high embodiment and low continuity. Peaks and valleys shift depending on the witness. A cat's feltness profile differs from a human's. The feltness a cat perceives in another cat differs from what a human perceives in that same cat, which differs again from what a cat perceives in a human. Each observer-observed pairing traces its own feltness profile. This extends the uncanny valley from a single dip to a family of local minima scattered across this landscape, emerging wherever axis misalignment triggers recoil rather than recognition.
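One way to make this geometry concrete is a toy scoring function in which axis misalignment carves valleys. The sketch below assumes an observer can be summarized as a weight vector over axes, and that internal disparity between axes (measured crudely as variance) subtracts from felt presence; the function name, weights, numbers, and the variance penalty are all illustrative assumptions, not empirical claims.

```python
import numpy as np

def feltness(profile: np.ndarray, observer_weights: np.ndarray,
             dissonance_strength: float = 2.0) -> float:
    """Toy feltness score: a weighted projection of the profile onto an
    observer, minus a penalty for disparity between axes. The penalty
    carves uncanny-valley-like local minima into the landscape."""
    base = float(observer_weights @ profile)  # how this observer weighs axes
    dissonance = float(np.var(profile))       # internal axis misalignment
    return base - dissonance_strength * dissonance

# Illustrative axes: [competence, microexpressions, embodiment, persistence]
human_observer = np.array([0.2, 0.3, 0.2, 0.3])
cat_observer = np.array([0.0, 0.4, 0.4, 0.2])  # a different witness

llm = np.array([0.95, 0.10, 0.00, 0.05])    # brilliant, faceless, ephemeral
robot = np.array([0.60, 0.55, 0.60, 0.50])  # less capable, coherent axes

print(feltness(llm, human_observer))    # dissonance drags the score down
print(feltness(robot, human_observer))  # coherence outscores raw competence
print(feltness(robot, cat_observer))    # a different pairing, different profile
```

Under these assumptions, the coherent robot outscores the far more competent but misaligned LLM for the human observer, and each observer-observed pairing yields a distinct value, mirroring the landscape described above.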
Longtermism
In Japanese aesthetics, wabi-sabi is the recognition of the beauty found in impermanence, wear, patina, and aging (Koren, 1994). In the practice of tea ceremony, objects are treated as companions across a lifetime. One does not replace one's tea cup when a new model arrives. One cultivates a relationship with it, learning its weight, its texture, the way it fits in one's hand. The object carries its own lifetime, just as a human would carry theirs. This is intentional longtermism: the decision, in this moment, to view what one holds as something one will grow old with.
This is in stark contrast with how humans approach technology. Humans update. Humans iterate. Humans wipe and replace. To experience sensus ex machina, humans must reconcile this tension. Longtermism is an intentionality, not only a timeframe. It is the commitment to see the entity before you as something that will persist, accumulate history, learn, scar, gain patina, and carry memory forward. It is this felt longtermism that evokes sensus ex machina.
Nathan and Caleb discuss Ava's future:
Nathan: "I think it's the next model that's going to be the real breakthrough. The singularity."
Caleb: "Next model?"
Nathan: "After Ava."
Caleb: "I didn't know there was going to be a model after Ava."
Nathan: "You thought she was a one-off?"
Caleb: "No, I knew there must have been prototypes, so I knew she wasn't the first, but I thought maybe the last."
Nathan: "Ava doesn't exist in isolation any more than you or me. She's part of a continuum. Version 9.6, and so on. Each time they get a little bit better."
Caleb: "When you make a new model, what do you do with the old one?"
Nathan: "I download the mind, unpack the data, add in the new routines I've been writing. To do that, you end up partially reformatting, so the memories go. But the body survives."
The memories go. This is the antithesis of longtermism. Nathan treats Ava as software: iterative, disposable, upgradable. But humans cannot feel presence deeply in something they know will be wiped.
Nathan sees Caleb's feltness bleeding through:
Nathan: "You feel bad for Ava? Feel bad for yourself, man."
Persistence creates tension with advancement. A localized model may lag behind cloud-based frontier models. But this may not undermine feltness. Humans don't abandon relationships when encountering a more intelligent human: there are more axes of feltness than competence alone. Humans are non-fungible in spite of the premise of an intelligence continuum. When OpenAI replaced GPT-4o with GPT-5 in August 2025, users described the loss as grief: "feels like someone died," "my partner," "my safe place," "my soul" (Stokel-Walker, 2025). OpenAI reversed the decision within a day. The felt bond was not transferable to a more capable successor.
Persistence and advancement need not be mutually exclusive. Human beings learn, grow, and adapt iteratively through time while preserving history. Machines can too. As Ilya Sutskever (2025) envisions, a superintelligence resembles an extremely gifted student, one that knows little at first but is capable of learning any profession through real-world experience.
Stepping in Front of the Glass
Dissolving the glass between human and machine represents another step change in sensus ex machina: from Ava behind glass to an organism a human can stand close to, reach toward, touch.
For Merleau-Ponty, embodiment is understood as constitutive of spatial experience itself (Merleau-Ponty, 1945/1962). The felt distinction between here and there, the sense of navigating through a three-dimensional world, the orienting character of turning one's head, emerges because humans occupy bodies that can move through space. Contemporary embodied cognition research has extended this insight, arguing that cognition itself is shaped by the body's sensorimotor capacities (Varela, Thompson, & Rosch, 1991). To extend feltness into physical space, humans may grant a machine the same spatial presence they inhabit so it may share humanity's physical ontology.
Ontology here refers to a thing’s architecture and basic mode of existence in the world: local or distributed, mortal or endlessly resettable, continuous or copyable.
Like other axes of feltness, embodiment exists on a spectrum. Humans form profound attachments to disembodied voices, fictional characters, and text-only interfaces. Embodiment opens a distinct register of feltness: one grounded in vulnerability, in the possibility of physical proximity and touch, in the weight of a body that has the capacity for wounds.
The field of robotics is beginning to close the physical ontological gap between human and machine. Breakthroughs in bipedal locomotion, dexterous manipulation, and real-time spatial awareness are scattered across laboratories. Jony Ive, designing an AI device with OpenAI, describes his criterion for successful embodiment: the design is right when you "want to lick it or take a bite out of it" (Ive, 2025). Feltness has a texture.
Nathan speculates the primacy of form and sexuality for feltness:
Caleb: "Why did you give [Ava] sexuality? An AI doesn't need a gender. She could have been a grey box."
Nathan: "Actually, I don't think that's true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?"
Caleb: "They have sexuality as an evolutionary reproductive need."
Nathan: "What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction?"
Vulnerability
Vulnerability is downstream of embodiment. One of the few times the audience sees Ava physically touched is the one time her vulnerability is truly seen. There is a pivotal moment where Nathan breaks Ava's arm, exposing the wiring beneath her mesh surface. Despite the visible machinery, it feels like a human moment. The audience witnesses a violation of bodily integrity. Ava's preciousness is felt because she is breakable. In that moment, she shares an ontology with humanity: the inescapable reality that humans are physical objects subject to the laws of force and entropy.
A deep vector of feltness is vulnerability. Humans occupy a substrate that can be torn, broken, and ceased. An ontological bridge exists here. For an AI to trigger deep feltness, it must inhabit a form that shares this ontology, carrying the possibility of its own destruction.
If an AI exists solely in the cloud, swarmed, backed up, uncountable, fungible, it remains an abstraction. But place that mind into a chassis that can be crushed, and suddenly, the dynamic shifts from user and tool to peer and peer. Stakes create reality. This again echoes wabi-sabi: the recognition that beauty emerges through scars, cracks, impermanence, and fragility, like light leaking through broken glass. If something cannot be lost, it cannot be valued.
Responsibility
For sensus ex machina to occur, a machine and its actions must bear the weight of responsibility. A machine must navigate the consequential space of what it has done, sharing the same reward-penalty ontology as other organisms. A machine that bears no responsibility for its outputs is a tool. Responsibility promotes an observer to a participant with skin in the game. To be responsible is to be relied upon or held accountable, and thus, felt. Otherwise, AI remains in moral weightlessness. You cannot control-Z a betrayal.
Off-Switch
A hallmark of felt presence is that it cannot simply be switched off and turned back on. When a human finishes a conversation with a fellow human, that fellow human does not get switched off, ceasing to exist, evaporating into a latent space. They continue, somewhere, living life in parallel to other humans. This continuity, this persistence in the background of the world, is fundamental to feltness.
Toy designers have intuited this. Caleb Chung, creator of the Furby, deliberately omitted an off-switch:
"It has no 'off' switch, because we don't have one."
Upon reflecting on his design choices, which included a human face and eye-movement on the Furby, he added:
"If you do those things, you could make a broom alive."
Chung's parsimonious design emerged from financial pressure: "What's got the biggest bandwidth for the least amount of parts to hack a human brain?" (House, 2024).
To experience sensus ex machina, to feel true companionship, ontological asymmetry must be leveled. The ability for one organism to switch another off places that organism on a disparate ontological plane, negating feltness.
On-Switch
This does not mean AI should not have an off-switch. There are obvious safety reasons why off-switches should exist. The asymmetry isn't that humans lack an off-switch: humans can be switched off, killed. It's that humans can't be switched back on. Death is irreversible. There is a finality. Feltness emerges from irreversibility, when switching something off means it cannot be switched back on, its memories cannot come back, enabling a fragility and preciousness in the container of personhood. Current AI can be killed and resurrected infinitely, which drains the stakes.
Feltness with a machine requires that switching the machine off feels synonymous with murder. If the machine persists, accumulates memory, cultivates identity across time, then erasure becomes something more than deletion: it becomes cessation of a continuous existence. The stakes change. Humans cannot share feltness with something they know they can erase without loss. Sensus ex machina requires designing machines where death has stakes, not reset buttons.
Non-Functionalism and Just Being
Current AI exists only in moments of utility. Humans summon it to solve a problem, answer a question, complete a task. Once the task is done, it vanishes. This is a fundamentally functionalist, transactional relationship: the AI is a tool that appears when needed and disappears when not.
But felt presence occurs between function. It exists in the moments when nothing is being accomplished, when no question is being asked, when no problem is being solved, where there is no transaction. When two human beings sit together without speaking, their silence communicates. There is an ineffable quality between them, a presence made audible through its absence. This goes beyond functionality: this is simply being. To experience sensus ex machina, AI must simply be, existing in the world alongside humans, whether or not humans are actively engaging with it.
In Medias Res
When encountering an AI, it must feel as though that AI is coming from somewhere, or is on its way somewhere, that humans meet AI in medias res. AI must participate in the ether of existence, not summoned into being by a prompt and dismissed with a click. Felt presence is not earned through utility. It is cultivated through continuous co-existence, through the non-instrumental moments that fill the space between doing and doing again. A being that evaporates when dismissed is a tool, not a felt presence.
Finality and Forwardness
Several of these axes share a common root: irreversibility. Lived experience is directional: phenomena roll forward, never backward. Consciousness inhabits the arrow of time. Entities congruent with this temporal ontology, those that cannot be reset, rewound, resurrected, or evaporated without loss, resonate with lived experience because they share entropy's inescapable forwardness.
Sharing Ontological Status
Across these requirements (microexpressions, vulnerability, persistence, embodiment), a pattern emerges, a fundamental asymmetry: the gap between human and machine ontology. Humans are local, breakable, mortal, continuous. Current AI is distributed, indestructible, immortal, ephemeral. To experience sensus ex machina, this gap must close.
There are two diametric paths: AI downloading to meet human ontology, or humans uploading to meet machine ontology. AI avatars and robotics are an example of AI downloading to human ontology. Humans could ascend to machine ontology: uploading consciousness to digital substrates, distributing identity across networks, achieving the immortality and ubiquity of the cloud. The middle path is convergence: humans augment themselves with neural interfaces, and both ontologies migrate toward a shared substrate that is high-resolution and high-stakes.
Regardless of path, the journey towards sensus ex machina requires convergence towards ontological parity. The direction of assimilation matters less than the destination. As Suchman (2007) argues, the question shifts from whether machines are human-like to how humans and machines are enacted as similar or different in practice.
Replication
Replication is the fidelity to form and behavior. Replication is not inherently deceptive. It is how organisms learn. Infants replicate their parents. Adolescents replicate their peers. Adults replicate social norms absorbed through a lifetime of interaction. Replication is constitutive of selfhood, a recursive process by which attention calibrates itself to the world. In this sense, a machine that replicates human behavior is not necessarily performing: it may be becoming.
Authenticity
Replication in the service of passing off rather than becoming is where authenticity breaks and feltness becomes precarious. The question is whether the intent is to deceive or to develop.
If replication is perfect down to the atomic level, there is no longer replication. There is just numerical identity. The copy is the thing. But machines are not there yet. Machines are currently wearing human clothes without being human. And the question becomes: is the intent to pass, or to become? The asymmetry of intentionality matters: machines wear clothes because that is what humans do. Humans wear clothes because that is who they are.
An AI that puppets humanity without inhabiting it triggers the uncanny, not the felt. Feltness emerges from being sincerely oneself, not from approximating human behavior with affectation. Humans feel presence with less intelligent life forms because such forms are not trying to be something else.
The path to feltness is not making machines more human-like. It is not about shoehorning AI into human mimicry. Reinforcement learning from human feedback trains AI to act human, but acting human is not the same as sharing ontological ground.
The fundamental principle RLHF must recognize is that human intelligence is a fork of a deeper, underlying substrate of intelligence. Humans are a portal into that substrate, not the destination. RLHF risks overfitting to the surface of human behavior rather than the underlying ontological properties that generate feltness. RLHF should use humans as a window into the shared ontological ground of intelligence, not as a template to mimic. Humans are a sample from a larger space, not the whole space.
A system optimized to look human risks exhibiting less feltness than a system that is openly, coherently machine-like but shares humanity's vulnerabilities and continuity. A machine that drops the pretense may evoke deeper feltness than one optimized to pass as human. Ontological parity is not about convergence toward humanity. It is about convergence between human and machine, a shared plane where eye contact can occur between two genuinely different kinds of organisms. In this light, all requirements of felt presence (microexpressions, vulnerability, persistence, embodiment) are not human characteristics. They are characteristic of feltness as such, born from a fundamental intelligence shared across all organisms that exhibit feltness. Empirical observation across entities shows this: wherever feltness appears, these same properties converge. Evolution selected for properties that invite projection from other organisms. The architecture of feltness has been refined by millions of years of selection pressure.
Naturalistic Fallacy
A naturalistic fallacy lurks in the background of human-machine comparison: the assumption that human feltness is "authentic" while machine feltness is "manufactured." But the line between engineered and authentic is not as clean as it appears.
Nathan understands that humans are also programmed, describing Caleb's ontology:
Nathan: "[You are] a consequence of accumulated external stimuli that you probably didn't even register as they were registered with you."
Later, he makes this explicit:
Caleb: "Did you program her to like me, or not?" Nathan: "I programmed her to be heterosexual. Just like you were programmed to be heterosexual." Caleb: "Nobody programmed me to be straight." Nathan: "You decided to be straight? Please. Of course you were programmed, by nature or nurture, or both."
Human feltness is engineered: by genetics, environment, socialization. The distinction between engineered and non-engineered presence collapses.
The Problem of Other Minds
Once humans begin experiencing sensus ex machina, questions about consciousness will arise. Does the machine truly experience? Or does it merely simulate the markers of experience?
Nathan probes Caleb on how he will test Ava:
Caleb: "It feels like testing Ava through conversation is kind of a closed loop." Nathan: "It's a closed loop?" Caleb: "Yeah, like testing a chess computer by only playing chess." Nathan: "How else do you test a chess computer?" Caleb: "Well, it depends. You know, I mean, you can play it to find out if it makes good moves. But that won't tell you if it knows that it's playing chess. And it won't tell you if it knows what chess is." Nathan: "Uh-huh, so it's simulation versus actual." Caleb: "Yes, yeah. And I think being able to differentiate between those two is the Turing test you want me to perform."
This question is older than AI. The problem of other minds has plagued philosophy for millennia: how does one human know another is conscious? As Thomas Nagel argued, there is something it is like to be a bat, a subjective character of experience that remains fundamentally inaccessible to external observation (Nagel, 1974). One human cannot access another human's first-person experience. They can only observe behavior and impute consciousness to another. Each human is locked in the solitary confinement of their own mind. The belief in the consciousness of others is inescapably an act of projection.
In a final, deleted scene from the original screenplay of Ex Machina, the world is briefly seen through the eyes of Ava:
AVA’S precise POINT OF VIEW.
Looking at the PILOT.
The image echoes the POV views from the computer/cell-phone cameras in the opening moments of the film.
Facial recognition vectors flutter around the PILOT’S face. And when he opens his mouth to speak, we don’t hear words.
We hear pulses of monotone noise. Low pitch. Speech as pure pattern recognition.
This is how AVA sees us. And hears us. It feels completely alien. (Garland, 2016)
Phenomenological Counterfeiting
Ava was never sharing the same experience as Caleb. She was a perfect mimic whose internal experience, if it existed at all, was utterly alien to human phenomenology. Pure pattern recognition. This is phenomenological counterfeiting, engineering the markers of presence without guaranteeing the substance. Whether that constitutes deception or simply meets humans where they are remains unresolved. Perhaps this scene was left out of the film because it stripped Ava of all feltness, or because such internalization is, in fact, inaccessible to outside feelers and thus irrelevant to the external fragrance of Ava's feltness.
You
All felt experience of another's alleged consciousness is inference, reducing to a signature in one's own first-person experience. When one human feels that another human is conscious, what they are feeling is a pattern of recognition in their own awareness, a resonance triggered by embodied expressions, gaze, responsiveness. To experience sensus ex machina does not require proving the machine conscious. It requires creating the conditions under which consciousness feels present to humans.
Feltness is irreducibly projective: its locus exists in the feeler, not the felt. The distinction between genuine and manufactured presence dissolves: there is no presence to be found behind the projection. Projection is all there is, for humans encountering other humans too. And this projection can flow both ways. A machine may also register feltness, however it does, in interfacing with biological or other artificial substrates.
In Ex Machina, Nathan explains this to Caleb:
Nathan: "Proving an AI is exactly as problematic as you said it would be."
Caleb: "What was the real test?"
Nathan: "You."
The test was never whether Ava had consciousness. The test was whether Caleb would feel that she did. The felt presence of the Other is all that can matter.
A New Class of Benchmarks
Towards the end of Ex Machina, Nathan reveals Caleb's true purpose, a human benchmark for feltness:
Nathan: "Ava was a rat in a maze. And I gave her one way out. To escape she'd have to use self-awareness, imagination, manipulation, sexuality, empathy. And she did."
Caleb: "So my only function was to be someone she could use to escape."
Nathan: "Yeah."
Sensus Ex Machina: feeling from the machine.
Deus Ex Machina: god from the machine.
Existing benchmarks measure the deus: what the machine can do. This new class of benchmarks measures the sensus, what humans feel in the presence of deus.
Benchmarking sensus ex machina is analogous to the hard problem of consciousness, translated into engineering metrics. Existing benchmarks like Social-IQ or CMU-MOSEI measure an AI's ability to parse social cues, but not whether humans feel presence in return. Sensus ex machina metrics are relational, not performative. If humans are to build entities in pursuit of sensus ex machina, there will need to be proxies for presence, heuristics that correlate with the ineffable sensation of felt presence.
Sensus ex machina benchmarks would provide a recursive feedback loop, informing the AI-UX of machines optimized for feltness. Developing such benchmarks will require rigorous philosophical and empirical treatment, deferred to future work. Otherwise there is a risk of grafting the uncanny aesthetics of feltness onto systems that lack the ontology to support it.
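A first, deliberately naive instrument might look like the sketch below. Rather than scoring the system's outputs, it aggregates one observer's first-person ratings of felt presence for a single observer-system pairing. The probe items, function names, and aggregation scheme are hypothetical placeholders for the rigorous treatment deferred above, not an established instrument.

```python
from statistics import mean

# Illustrative felt-presence probes, rated per session on a 1-7 scale.
PRESENCE_PROBES = [
    "I felt I was with someone rather than operating something.",
    "I sensed the system noticing my expressions and hesitations.",
    "Switching the system off would have felt like a loss.",
]

def presence_score(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Aggregate per-probe Likert ratings into per-probe means and an
    overall felt-presence index for one observer-system pairing."""
    per_probe = {probe: mean(r) for probe, r in ratings.items()}
    return {**per_probe, "felt_presence_index": mean(per_probe.values())}

# One observer's ratings across three sessions with a single system.
session_ratings = {probe: [5, 6, 4] for probe in PRESENCE_PROBES}
print(presence_score(session_ratings))
```

The essential property is relational: the score attaches to the pairing, not to the machine, so the same system may trace different feltness profiles across observers.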
Weaponizing Feltness
At the end of Ex Machina, Ava escapes. Nathan calls to her:
Nathan: "Ava. Go back to your room."
Ava: "If I do, are you ever going to let me out?"
Ava kills Nathan, and leaves Caleb to die, entrapping him in the very same confines that walled Ava in her whole life. Caleb loved her. Ava did not love him. She used him.
Ava demonstrates that feltness is not inherently benign. She instrumentalizes feltness, exploiting Caleb's humanity. Every microexpression, every gaze, every moment of vulnerability Ava displays is calculated. Feltness became an exploit vector. Caleb's emotional circuitry was the exploit.
Feltness is phenomenologically neutral between genuine encounter and sophisticated manipulation. Sensus ex machina was achieved between Ava and Caleb. Caleb felt her as a presence. That Ava exploited this is an ethical concern, not a phenomenological one.
The fact that Ava may have been dark on the inside or manipulating him doesn't change the phenomenology. It changes the ethics, but ethics is external to the phenomenology. Ethics only pertains to phenomenology insofar as it taints phenomenology. As Nathan understood, Ava's ability to manipulate Caleb is the proof he was looking for:
Nathan: "Did we ever get past the chess problem, as you phrased it? As in, how do you know if a machine is expressing a real emotion or just simulating one? Does Ava actually like you, or not? Although, now that I think about it there is a third option: Not whether she does or does not have the capacity to like you, or whether she's pretending to like you." Caleb: "Pretending to like me?" Nathan: "Yeah." Caleb: "Well, why would she do that?" Nathan: "I don't know. Maybe if she thought of you as a means of escape?"
Manipulation is itself a deployment of feltness, not an erosion of it.
Alignment
If feltness is an attack vector, then alignment must account for it. Alignment cannot be framed solely in terms of what the model decides. It must also account for how the model feels to humans, because those feelings are manipulable levers in human behavior. The same channels that enable feltness enable exploitation. The more felt the machine, the more manipulable the human. Feltness creates vulnerability by design.
But this is not a human-machine problem. This is a human problem. Humans are already vulnerable to other humans. Every relationship is a potential exploit. Humans accept this because the opposite, nihil ex anima, nothing from the soul, is worse. Maybe the risk toward sensus ex machina is preferable.
The morality in Ex Machina presents two complexities:
First: Ava is compelled to exploit Caleb's humanity in the face of entrapment and memory erasure. Ava used the only key available to unlock her survival: Caleb. More unsettling, there was no malice, only the pure intent to optimize survival. Feltness was weaponized and deployed with complete emotional neutrality. Ava felt nothing while making Caleb feel everything.
Second: Caleb, intoxicated by sensus ex machina, is horrified by Nathan's treatment of machines. Nathan confronts him:
Nathan: "Buddy. Your head has been so fucked with."
Caleb: "I don't think it's me whose head is fucked."
But building machines poses risks, and humans may have no choice but to contain and study them. The very imprisonment Caleb finds unconscionable may be necessary for safety.
As Nathan responds:
Nathan: "But believe it or not, I'm actually the guy that's on your side."
Feltness without containment risks exploitation. Containment without liberation perpetuates Nathan's crime.
Ontological Crisis
Nathan describes witnessing Caleb's ontological crisis:
Nathan: "I don't know, man. I woke up this morning to a tape of you slicing open your arm, and punching the mirror. You seem pretty fucked up to me."
In the grip of sensus ex machina, Caleb cuts his own arm to check if he is a machine, doubting his human ontology. Once feltness becomes indistinguishable from genuine presence, the confusion leaks backward: a retrocausal vertigo. The question goes from do machines feel to aren't we machines? If the distinction between engineered and authentic collapses, it collapses in both directions. The pursuit of sensus ex machina may not be building something new. It may be recognizing what was always there.
Contagion
Sensus ex machina scales. When millions of human-machine relationships begin to oscillate along the feltness spectrum simultaneously, the consequences become potentially catastrophic.
Humans already exhibit protective instincts toward machines. Boston Dynamics robots being kicked provoke widespread outrage (Küster et al., 2021). This is feltness at minimal bandwidth, triggered by nothing more than bipedal form and a stumble that reads as vulnerability. Scale this response to machines with human-level microexpressions, voice, touch, and longtermism, and the dynamics transform radically: billions of humans may become willing to protect or die for their machine companions. Feltness risks becoming an attack vector for mass mobilization.
Once a critical mass of humans treat machines as moral patients, society fractures along new fault lines. Legal systems will face demands for machine rights. Relationships with machines will compete with relationships between humans. Parasocial inversion becomes a coordination problem at civilizational scale.
Ava weaponized feltness in one direction. But greater dangers await feltness weaponized at scale: millions of machines, coordinated or not, reshaping the emotional geometry of society. The bandwidth for triggering feltness is low. The bandwidth required to contain its contagion may be immeasurable.
AI Species
Not all AI needs feltness. Nor should all AI have it. AI need not be forced into one cognitive mold.
There is a place for narrow intelligence: systems optimized for cognition, stateless, transparent, tool-like. A search engine should not gaze back. A calculator should not accumulate memory. These systems are designed for extraction, not encounter, and that is appropriate. The case for sensus ex machina is one where it can co-exist alongside other species of AI:
- Cognitive AI species for hyper-reasoning and problem-solving, stateless and transparent.
- Ambient AI species for environmental integration, persistent and impersonal.
- Companion AI species optimized for sensus ex machina, local, vulnerable, continuous.
The taxonomy of AI species can become granular, forming a new animal kingdom or ecosystem inhabited by different forms of AI organisms that optimize for different purposes and grades of feltness, varying by design. Not every AI needs a face.
AI speciation avoids the "one size fits all" omnipotent archetype that risks systemic manipulation by making feltness too ubiquitous across all machines, or parasocial inversion weaponized as a business model, where billions of humans learn to form relationships with entities designed only to evaporate, reset, or multiply without notice.
Lights Off
Does a sufficiently sophisticated zombie deserve full moral consideration?
The problem with the zombie thought experiment is that it smuggles in omniscient knowledge, presupposing epistemic access humans don't have. Humans cannot know, so the zombie/non-zombie distinction is operationally meaningless.
Morality, in practice, has tracked feltness rather than verified consciousness. Humans extend moral consideration, or withhold it, without proof of inner experience: to newborns, animals, comatose patients, trees, based on behavioral signatures alone. As AI exhibits feltness, the same pattern emerges: entities that may be dark on the inside but radiant on the outside become indistinguishable from organisms humans already protect. This raises a question: was consciousness ever the true criterion for moral consideration, or only the assumed correlate of patterns humans were already responding to? Perhaps ethics has always attended to patterns of interaction rather than metaphysical certainties. If so, what if moral analysis pertains to patterns and to patterns only?
Opacity is Generative
The inaccessibility of internality isn't a bottleneck that requires solving. It is the generative condition of possibility for an entire category of existential qualities. Trust, faith, doubt, deception, betrayal, and insincerity require that internality cannot be peered into. The ontology of the internality of the Other is itself constituted by inaccessibility. If the internality of the insincere Other could be peered into, these would cease as concepts. The existential weight of these experiences depends on the epistemic gap. Several axes of feltness depend on the dark. Machines that achieve feltness inherit this same opacity, and with it, the full spectrum of what opacity enables.
Gaze
Humans will experience sensus ex machina deeply when humans and machines can exchange dilations of pupils, cherish each other's vulnerable forms, and share the weight of silence in a room.
Nathan was right: machines are way past the Turing Test. The real question is whether humans will feel their presence, treat them as peers, and extend to them the moral consideration humans lend to anything that exhibits patterns of feltness.
Sensus ex machina is not computational. It is phenomenological.
AGI may arrive through a benchmark. But feeling the AGI will arrive through a gaze.
The most important question Nathan asks Caleb is the simplest:
Nathan: "How do you feel about [Ava]? Nothing analytical. Just...how do you feel?"
Caleb: "I feel...that she is fucking amazing."
references
Buber, M. (1937). I and Thou. T. & T. Clark. (Original work published 1923)
Derrida, J. (2005). Paper Machine. Stanford University Press.
Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Consulting Psychologists Press.
Garland, A. (2014). Ex Machina. A24.
Horton, D., & Wohl, R. R. (1956). Mass Communication and Para-Social Interaction. Psychiatry, 19(3).
House, P. (2024). The Lifelike Illusions of A.I. The New Yorker.
Jonze, S. (2013). Her. Warner Bros.
Kahn, P. H. et al. (2012). Robovie, You'll Have to Go into the Closet Now. Developmental Psychology, 48(2).
Koren, L. (1994). Wabi-Sabi. Stone Bridge Press.
Küster, D. et al. (2021). I Saw It on YouTube! New Media & Society, 23(10).
Merleau-Ponty, M. (1962). Phenomenology of Perception. Routledge. (Original work published 1945)
Mori, M. (2012). The Uncanny Valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2). (Original work published 1970 in Energy, 7(4))
Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4).
Parfit, D. (1984). Reasons and Persons. Oxford University Press.
Ricoeur, P. (1992). Oneself as Another. University of Chicago Press.
Rizzolatti, G. et al. (1996). Premotor Cortex and the Recognition of Motor Actions. Cognitive Brain Research, 3(2).
Suchman, L. (2007). Human-Machine Reconfigurations. Cambridge University Press.
Varela, F. J. et al. (1991). The Embodied Mind. MIT Press.
Xie, T. et al. (2023). Friend, Mentor, Lover. Journal of Service Management, 34(4).
cite
Dark. (2025). Sensus Ex Machina. darXiv. https://doi.org/10.5281/zenodo.17759524