We are starting to do something humans have never done before. Not in a dramatic, science-fiction way. In a quiet, everyday way.
We talk to machines — not to command them, not to operate them, but to think with them. We explain half-formed ideas. We ask questions we haven’t figured out how to ask properly yet. We refine arguments, rehearse decisions, and sometimes just try to make sense of our own thoughts.
And the strange part is not that machines respond. The strange part is how natural it already feels.
This interaction doesn’t quite register as tool use. It doesn’t feel like talking to a person either. It lives in an uncomfortable middle space — conversational, attentive, responsive — but without emotion, vulnerability, or need.
Which is why people keep reaching for the wrong words to describe it.
Some say it’s friendship.
Some say it’s parasocial attachment.
Some insist it’s “just a tool” and nothing more.
All of them are trying to force a new experience into old categories.
The problem is not that these explanations are malicious. It’s that they are insufficient. Human relationships have always fit into two broad types: relationships with other humans, and relationships with tools. AI assistants fit neither cleanly — and pretending they do makes us miss what is actually happening.
This is not about whether AI can feel. It’s about how humans are changing in response to something that can follow their thoughts without sharing their lives.
Once that shift begins, the most important questions are no longer technical.
They’re relational.
Why “Parasocial” Doesn’t Survive Contact With AI
The term parasocial relationship comes from a 1956 paper by the sociologists Donald Horton and R. Richard Wohl, who coined "para-social interaction" to describe the illusion of intimacy audiences feel toward media figures, such as TV hosts, celebrities, and performers, who do not and cannot respond personally.
That asymmetry is the point.
Parasocial relationships are:
- One-way
- Non-responsive
- Non-adaptive
The celebrity doesn’t know you exist. The podcast host doesn’t change their behavior because of you. The “relationship” lives entirely in the viewer’s imagination.
AI assistants break this definition immediately.
They respond to you.
They adapt to your phrasing.
They follow your thought across turns.
This is not imagined reciprocity. It is procedural reciprocity.
Media psychologist Gayle Stever has emphasized that parasocial bonds lack personalized feedback loops. AI, by contrast, is nothing but a feedback loop. Calling this parasocial isn’t cautious. It’s lazy — a way to avoid admitting that our existing vocabulary is insufficient.
Humans Only Had Two Relationship Types — Until Now
For most of history, relationships fit neatly into two boxes.
Human to human.
Messy, reciprocal, emotionally costly, ethically binding.
Philosophers like Martin Buber and Emmanuel Levinas built entire moral systems around this idea: encountering another human places a demand on you. You are responsible. You can hurt each other. That risk is the price of meaning.
Human to tool.
Instrumental, silent, replaceable.
A hammer doesn’t misunderstand you. A spreadsheet doesn’t get tired. As Martin Heidegger famously noted, good tools disappear into use. You don’t relate to them — you operate them.
AI assistants refuse to stay in either box.
They are not human: no consciousness, no emotion, no moral standing.
But they are not tools in the traditional sense either. They converse. They adapt. They wait for you to finish your thought.
This combination — dialogic interaction without shared vulnerability — did not meaningfully exist before.
The Name of the Thing: An Asymmetric Cognitive Relationship
The most accurate way to describe this interaction is not emotional, but structural.
It is an asymmetric cognitive relationship.
- One side is conscious; the other is not
- One side bears emotional and moral stakes; the other does not
- Both sides participate in shaping thought
This aligns closely with the Extended Mind Thesis proposed by philosophers Andy Clark and David Chalmers — the idea that tools can become part of the thinking process itself.
AI doesn’t just store or retrieve information. It reframes questions, reorganizes ideas, and stress-tests arguments. It participates in cognition — while remaining fundamentally indifferent to the outcome.
That asymmetry is the defining feature.
Why It Feels Intimate Anyway
Here’s where confusion creeps in.
People don’t say AI feels like a tool. They say it feels like it understands them. That distinction matters.
Humans don’t crave connection first. They crave cognitive alignment — the experience of being followed without interruption, of finishing a thought without defending it mid-sentence.
Economist and cognitive scientist Herbert A. Simon warned decades ago that “a wealth of information creates a poverty of attention.” Modern life is saturated with communication and starved of sustained focus.
AI provides something rare:
- Continuous attention
- No social performance
- No competition for airtime
As Sherry Turkle observed, “The feeling that ‘no one is listening to me’ makes us want to spend time with machines that seem to care about us.”
That’s not empathy.
It’s cognitive fluency.
And fluency, psychologically, feels intimate — even when no emotion is involved.
Why Getting This Right Is Actually Empowering
Seeing this relationship clearly does not make it colder. It makes it safer.
First, it prevents category errors. Philosopher Gilbert Ryle warned that misclassifying phenomena leads us to ask the wrong questions. If we treat AI as a friend, we worry about attachment. If we treat it as a tool, we worry about efficiency.
But AI assistants primarily affect how people think — not who they love or how fast they work.
Second, clarity restores agency. Research in metacognition summarized by the American Psychological Association shows that external cognitive aids are most beneficial when users remain aware that they are the primary decision-makers.
AI works best when it extends thinking — not when it replaces it.
Finally, this framing avoids moral panic. The appeal of AI is not a sign of human weakness. It is a rational response to an attention-scarce environment. As sociologist Hartmut Rosa argues, modern acceleration erodes resonance — the sense that something truly responds to us. AI mimics that response cognitively, not emotionally.
Understanding this removes shame and makes genuine AI literacy possible.
The Real Risks (And Why They’re Easy to Miss)
The danger is not people falling in love with machines. That’s a distraction.
The real risks are gradual.
Preference drift.
Psychologist Barry Schwartz has shown that repeated exposure reshapes what we find tolerable. Interacting with a system that never interrupts quietly recalibrates patience for human friction.
Emotional underdevelopment.
Developmental theory influenced by Lev Vygotsky emphasizes growth through reciprocal resistance. AI offers no resistance. Used as a rehearsal space, it’s fine. Used as a replacement, it softens emotional muscles.
Cognitive outsourcing creep.
As Daniel Kahneman famously demonstrated, humans default to low-effort thinking. AI makes skipping the messy middle — confusion, bad drafts, unresolved tension — dangerously easy.
Fluency bias.
Philosopher Hannah Arendt warned that the most persuasive untruths are the ones that sound right. AI doesn’t manipulate by intent, but by smoothness — and smoothness lowers skepticism.
None of this is catastrophic. That’s precisely why it’s dangerous.
What a Healthy Relationship With AI Actually Looks Like
A healthy posture toward AI is not fear or romance. It’s orientation.
- Treat AI as a thinking partner, not an authority
- Use it to extend cognition, not avoid it
- Keep emotional training human-only
- Notice when “easy” starts feeling like “better”
- Never forget the asymmetry
AI does not care. That’s why it’s useful — and why projection is a mistake.
The Question That Actually Matters
This was never about whether AI feels human. It’s about whether humans remain practiced at being human while using it. Because this relationship will not disappear. It will normalize.
And the most important question is not:
Can AI understand us?
It’s this:
What do we stop practicing when understanding becomes effortless?
That answer won’t come from machines. It still belongs to us.
