AI & transactional relationships
How technology monetizes the allure of and need for human connection
If you look closely at the AI hype, you will start noticing a paradox. We somehow tend to attribute personalities to our tools. Not just personalities, but feelings, intentions, consciousness... This fallacy is not new, of course. It goes back to ELIZA in the 1960s (thanks to Jenka for sharing the link and establishing the connection for this train of thought). This post is an effort to see beyond the hype and expand on the following premise, a provocation I shared in a post on BlueSky:
“The chat, the conversational interface is the anthropomorphization of LLMs which is why even intellectual people fall for it so badly. The need to connect is inherently human but the need for validation is higher in a hyperindividualistic society.”
What does it tell us about ourselves that so many educated and rational people (people who should frankly know better) seem desperate to believe that Large Language Models are sentient?
Joseph Weizenbaum created ELIZA in the mid-1960s as a simple program that mimicked a Rogerian psychotherapist through pattern matching and canned response templates. The program was rudimentary by today's standards: it simply reformulated users' statements into questions, creating an illusion of understanding where none existed.
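To make the mechanism concrete, here is a minimal sketch of ELIZA-style reflection in Python. The patterns and word swaps are invented for illustration, not Weizenbaum's original DOCTOR script, but the trick is the same: match a phrase, swap the pronouns, and hand the statement back as a question.

```python
import re

# Toy, illustrative rules -- not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
PATTERNS = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)", "Can you tell me more about that?"),  # fallback keeps the "conversation" going
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones so the echo sounds like a question about *you*.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    cleaned = statement.lower().strip(".!?")
    for pattern, template in PATTERNS:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(respond("I feel ignored by my colleagues"))
# -> "Why do you feel ignored by your colleagues?"
```

There is no model of the user, no memory, no meaning anywhere in this loop; and yet, as Weizenbaum discovered, that is enough for people to feel heard.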
What shocked Weizenbaum was not the program's capabilities but people's reactions to it. His own secretary would ask to be left alone with the program in order to have private conversations with it. Educated colleagues suggested it could revolutionize psychotherapy. Weizenbaum was horrified that people so readily projected consciousness onto obvious machinery. The experience disturbed him so much that he wrote an entire book (Computer Power and Human Reason) warning against the dangers of anthropomorphizing computers.
Spoiler alert: We didn't listen.
The ELIZA effect
Big tech, first with social media and now with AI, has taken something fundamentally human (conversation, discourse) and reduced it to a set of procedural rules and statistical patterns. In doing so, it has pushed users to participate in a fiction and pretend that these interactions are meaningful: if ChatGPT says "I'm happy to help", it must experience something like happiness; if Claude expresses "concern" about a user's problem, there must be actual concern happening somewhere in its neural circuits.
But why are we so willing to take such an active part in this role-play? I believe the answer lies in the structure of late-stage capitalist society itself. We live in an era of profound social isolation, where traditional communities have been systematically dismantled and replaced with market transactions. The hyperindividualistic society we find ourselves in doesn't just encourage self-sufficiency but demands it, while simultaneously eroding the social structures that make genuine human connection possible.
So as human connections morph into transactions, interactions become performances measured by engagement metrics, while AI interfaces offer simulated connection without requiring emotional investment or reciprocity. This asymmetrical pseudo-relationship, in which users expect immediate gratification without any corresponding vulnerability, is the perfect example of the commodification of human interaction. The "ELIZA Effect" compounds the problem: we attribute genuine understanding to systems that merely simulate it, and in turn we accept simple, convenient exchanges as substitutes for authentic relationships and dialogue.
The chat interface offers the simulation of connection without its messy reality. You can "talk" to Claude or ChatGPT at any hour, about any topic, and it will never judge you. It will never be too busy for you. It will never have needs of its own that conflict with yours; it is always there for you. All the appearance of connection, with none of the reciprocity or vulnerability that makes real connection meaningful.
However, this willingness to accept simulated understanding does not come without a cost. When we pretend machines understand us, we subtly shift our conception of what understanding actually means. This is the “ELIZA Effect”: our tendency to attribute understanding and empathy to systems that merely simulate these qualities. It trains us to mistake pattern recognition for empathy and statistical prediction for understanding.
Meanwhile, policy discussions about AI bias and discrimination, which are real concerns grounded in thousands of scientific studies, get clouded by hype and corporate mythmaking about "revolutionary" and "disruptive" technologies.
Breaking the spell
The first step toward addressing this problem is to make it visible. We need to stop pretending that our interactions with language models are anything other than interactions with clever text prediction engines. They are not "artificial general intelligences". They are not "learning to think". They are complex statistical systems trained on vast corpora of human-written text (and likely without consent, but that is a discussion for a different post).
When a corporation or tech evangelist claims their AI possesses understanding or reasoning capabilities, they're not making a technical claim based on peer-reviewed studies; they're engaging in a specific form of mythmaking designed to increase the perceived value of their products. It's pure marketing, hype for investors.
The truth is much simpler: everything these systems produce, accurate or not, comes out of the same process and is equally disconnected from reality. These systems have no understanding of truth or falsehood; they are just predicting which words typically follow others based on patterns in their training data. By treating AI errors as fixable glitches rather than fundamental limitations, AI companies can keep promising that true intelligence is just around the corner, justifying massive investments while distracting us from questioning whether this technology actually serves human needs.
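A toy sketch of what "predicting which words typically follow others" means, in the simplest possible form. The counts below are made up for illustration, and real LLMs use learned weights over far longer contexts rather than a lookup table, but the principle is the same: score possible continuations by how often they appeared after similar text, then pick one.

```python
import random
from collections import Counter

# Hypothetical bigram counts "harvested" from a tiny corpus (illustrative numbers only).
next_word_counts = {
    "i": Counter({"am": 40, "feel": 25, "think": 20, "need": 15}),
    "am": Counter({"happy": 30, "here": 10, "listening": 5}),
}

def next_word(prev: str) -> str:
    counts = next_word_counts[prev]
    words, weights = zip(*counts.items())
    # Sample in proportion to how often each word followed `prev` in the "training data".
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("i"))   # e.g. "am" -- chosen by frequency, not by any grasp of meaning
print(next_word("am"))  # e.g. "happy" -- "I'm happy" is likely text, not a felt state
```

Whether the sampled sentence happens to be true or false, kind or cruel, is invisible to the procedure; it only ever optimizes for plausible-looking text.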
The result is a perfect distraction from the urgent need to exercise collective political agency over our technological future. While we argue about capabilities that don't exist, we neglect to question why we're rushing toward this simulacrum of connection.
Instead of building ever more sophisticated simulations of consciousness, we might need to focus on addressing the conditions that make those simulations seem so appealing in the first place: the loneliness, isolation, and lack of meaningful community that characterize life in hyperindividualistic societies, and that became especially visible after the COVID lockdowns.
The problem isn't that our machines aren't smart enough yet. The problem is that we've organized our society in a way that makes simulated connection seem like a reasonable substitute for the real thing. And we need to build community now more than ever.