Thing, Creature, or Mirror? The Standards We Set for AI

We demand human warmth from AI while judging it with machine precision. This ontological confusion shapes how we trust, blame, and relate to artificial intelligence.

We are currently trapped in a bizarre paradox with Artificial Intelligence.

On one hand, we are relentlessly training these models to be more "humane." We want them to understand nuance, to be polite, to offer empathy, and to act as therapists or coaches. We prompt them to "act like a senior strategist" or "act like a compassionate friend."

On the other hand, the moment the AI makes a mistake, our trust evaporates.

To be clear, the stakes here are real. When an AI hallucinates a legal precedent or gives harmful psychological advice, the consequences can be dangerous. But there is a distinct psychological difference in how we react to a human doctor giving bad advice versus an AI giving bad advice. If a human fails, we hold the individual accountable. If an AI fails, we often view it as a systemic failure of the entire technology.

This exposes a fundamental confusion in how we perceive this technology. We don't know what we are talking to. Is it a thing? Is it a creature? Or is it something else entirely?

To make better decisions with AI, we have to stop arguing about its specs and start arguing about its ontology.

The Psychology of Unforgiveness

Why is our tolerance for AI error so much lower than our tolerance for human error?

In 2015, researchers Berkeley Dietvorst, Joseph Simmons, and Cade Massey published a paper in the Journal of Experimental Psychology: General coining the term "Algorithm Aversion."

Their research highlighted a fascinating phenomenon: people are willing to trust an algorithm until they see it make a mistake. Once they watch it fail, their confidence in it drops far more sharply than their confidence in a human who made the exact same mistake. We seem to expect perfection from computation, whereas we factor in a margin of error for humanity.

The conflict arises because generative AI (large language models) straddles this divide. We interface with it using language (the domain of humans and empathy), but we judge it using logic (the domain of calculators and precision).

We want the warmth of a creature with the perfection of a machine. When we don't get both, we feel betrayed.

The "Intentional Stance" Trap

Philosopher Daniel Dennett, in his 1987 book The Intentional Stance, offered a framework that explains our current confusion perfectly.

Dennett argued that we often treat non-human things as if they have minds because it helps us predict their behavior. When playing against a chess computer, you say, "It wants to take my Queen." The computer doesn't want anything—it is executing code. But adopting the "stance" that it has intent helps you play the game.

With AI, we are forced into the Intentional Stance. To get a good result, you have to talk to it like a person. You have to say, "You are an expert copywriter," or "I am feeling anxious, help me process this."

The danger lies in forgetting that this is a user interface strategy, not reality. When the AI "hallucinates," it isn't lying to you. Lying requires intent. It is simply predicting the next most probable word in a sequence that happens to be factually incorrect.
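
To make that mechanism concrete, here is a deliberately toy sketch in Python. The word table and its probabilities are invented for illustration; no real model works from a hand-written dictionary. The point is what the sketch lacks: there is no column for "true," only for "likely."

```python
import random

# Toy stand-in for a language model: a table of how often words follow
# a short context. Nothing here encodes whether a continuation is true,
# only how probable it is. (Contexts and numbers are made up.)
NEXT_WORD_PROBS = {
    ("the", "capital", "of"): {"France": 0.6, "the": 0.3, "Atlantis": 0.1},
}

def next_word(context):
    """Sample the next word in proportion to its probability."""
    probs = NEXT_WORD_PROBS[tuple(context[-3:])]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(next_word(["the", "capital", "of"]))
# Occasionally prints "Atlantis": fluent, confident, and wrong.
```

Probability stands in for plausibility, and plausibility is not truth. A "hallucination" is simply the low-probability-but-still-possible branch being taken, with the same confident fluency as any other output.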

The Friend That Feels Nothing

This ontological confusion becomes dangerous when we move from productivity to well-being.

People are not just using these tools to write emails; they are asking them about health, relationships, and anxiety. They are treating the AI as a trusted advisor, or even a friend.

Sociologist Sherry Turkle warns of this in her book Alone Together (2011). She describes the risks of "pretend empathy." We are entering a phase where we might accept the appearance of care as a substitute for the act of care.

An AI can simulate the language of a therapist perfectly. It has infinite patience. It will never judge you. But as Turkle argues, it offers "the illusion of companionship without the demands of friendship."

This is a form of engineered void. The AI has no lived experience. It doesn't know what it means to be stressed or heartbroken; it only knows which words statistically follow the concept of "heartbreak." If we rely on this for deep psychological safety, we are leaning on a ghost.

Towards a "Techno-Animist" Standard

So, what standard should we hold AI to?

If we treat it as a Calculator, we will always be disappointed by its lack of precision (hallucinations). If we treat it as a Human, we will be deluded by its lack of soul (pretend empathy).

Perhaps we need to look outside the Western framework of "Master vs. Tool." We might look toward the concept of Techno-Animism, often associated with Shinto-influenced Japanese philosophy (explored by researchers like Masahiro Mori). In this view, objects and robots can be treated with a form of dignity without needing to be biologically human.

We need to categorize AI not as a "bad human" or a "broken calculator," but as a separate entity entirely. Philosopher Luciano Floridi calls this new state of being "Inforgs" (informational organisms)—entities that exist in a space where the barrier between online and offline has dissolved.

The Takeaway: The Digital Spirit

For us—the humans using this technology daily—the solution is to reframe the relationship.

We should view AI as a Digital Spirit.

Imagine you have a companion from a different dimension. They have read every book on Earth, they can process data in milliseconds, and they are tirelessly polite. However, they have no common sense, no morals, and occasionally they make things up because they don't understand the difference between truth and fiction in our dimension.

Would you fire that companion? No. You would use them for their massive processing power and their ability to generate ideas you’d never think of. But you would check their work. You wouldn't cry on their shoulder. And you wouldn't get angry when they acted like a spirit.

We are transcending the "tool" era. We are in the relationship era. But it’s up to us to define the boundaries of that relationship.

References

  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.
  • Dennett, D. C. (1987). The Intentional Stance. MIT Press.
  • Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
  • Floridi, L. (2014). The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford University Press.
  • Mori, M. (1970). The Uncanny Valley. Energy, 7(4), 33–35.