Irish writer John Connolly once said:
The true nature of human beings is to feel the pain of others as their own, and to act to relieve that pain.
For most of history, we have believed that empathy is a uniquely human quality, a special ability that distinguishes humans from machines and other animals. However, this belief is now being questioned.
As AI takes up a larger role in our lives and reaches into our most intimate spheres, we face a philosophical question: could ascribing human qualities to AI diminish our own human nature? Our research suggests it could.
Digitization of friendships
In recent years, AI “companion” apps such as Replika have attracted millions of users. Replika allows users to create custom digital partners and have intimate conversations with them. Members who pay for Replika Pro can also turn their AI into a “romantic partner.”
Physical AI companions are not far behind. Companies like JoyLoveDolls sell interactive sex robots with customizable features including breast size, ethnicity, movement, and AI responses such as moaning and flirting.
Although this is currently a niche market, history suggests that today’s digital trends will become tomorrow’s global norms. With around one in four adults experiencing loneliness, demand for AI companions is likely to grow.
The dangers of humanizing AI
Humans have long attributed human characteristics to non-human entities, a tendency known as anthropomorphism. It is no surprise that we do the same with AI tools like ChatGPT, which appear to “think” and “feel.” But why is humanizing AI a problem?
First, AI companies can exploit our tendency to form attachments to human-like entities. Replika is marketed as a “compassionate AI companion.” Yet, to avoid legal issues, the company notes elsewhere that Replika is not sentient and merely learns through its interactions with millions of users.
Screenshot of Replika’s help page, which contradicts the company’s advertising.
Some AI companies openly claim that their AI assistants are empathetic and can even anticipate human needs. Such claims are misleading and can take advantage of people seeking companionship. Users may become deeply emotionally invested if they believe their AI companion truly understands them.
This raises serious ethical concerns. Once users attribute some degree of sentience to their AI companion, they may be reluctant to delete it (that is, to “abandon” or “kill” it).
But what happens if that companion disappears unexpectedly, such as when the user can no longer afford it or the company that runs it shuts down? The companion may not be real, but the emotions attached to it are.
Empathy – more than programmable output
Is there a danger that, by reducing empathy to a programmable output, we diminish its very essence? To answer this, let’s first consider what empathy actually is.
Empathy involves responding to others with understanding and concern. It’s when a friend shares their pain with you, or when you feel the joy radiating from a loved one. It is a profound experience that cannot be reduced to simple forms of measurement.
The fundamental difference between humans and AI is that humans genuinely feel emotions, whereas AI can only simulate them. This touches on the hard problem of consciousness: how subjective experience arises from physical processes in the brain.
Science has yet to solve the hard problem of consciousness. Shutterstock
AI can simulate understanding, but the “empathy” it claims to offer is the product of programming that mimics empathetic language patterns. Unfortunately, AI providers have a financial incentive to trick users into forming attachments to products that merely appear empathetic.
The dehumanization hypothesis
Our “dehumanization hypothesis” highlights the ethical concerns that arise when we try to reduce human beings to a few basic functions that machines can replicate. The more we humanize AI, the more we risk dehumanizing ourselves.
For example, relying on AI to perform emotional labor may make us less tolerant of the imperfections of real-life relationships. This could weaken our social bonds and even lead to a decline in our own emotional skills. Future generations may become less empathetic, losing their grasp of essential human qualities as emotional skills continue to be commodified and automated.
Also, as AI companions become more common, people may start using them in place of real-life relationships. This will likely increase loneliness and alienation, the very problems these systems claim to solve.
The collection and analysis of emotional data by AI companies also poses significant risks, as these data can be used to manipulate users to maximize profits. This will further erode our privacy and autonomy and take surveillance capitalism to the next level.
Hold providers accountable
Regulators need to do more to hold AI providers accountable. AI companies should be honest about what their AI can and cannot do, especially when there is a risk of exploiting users’ emotional vulnerabilities.
Exaggerated claims of “true empathy” should be made illegal. Companies that make such claims should be fined, and repeat offenders should be shut down.
Data privacy policies must also be clear, fair, and contain no hidden terms that allow companies to misuse user-generated content.
We must preserve the unique qualities that define the human experience. While AI can enhance certain aspects of life, it cannot and should not replace true human connection.