My friend thinks it's a good idea for us to spend most of our time with AI companions
Part II
What will happen if we ever have good AI replacements for people?
In a post last week (My friend thinks it's a good idea for us to spend most of our time with AI companions, Part I), I talked about the bright side of a future replete with AI replacements.
Much of my discussion was based on a paper that I wrote with friends and colleagues. Michael Inzlicht was the lead author and initiated the project; the other co-authors were C. Daryl Cameron and Jason D’Cruz.
As you can tell from its title, our article is pretty positive. But I’m actually on the fence and worry a lot about the downside. Here is why.
The pornography analogy
Our article made the case for empathic AIs, but we do discuss certain problems.
People [may] become accustomed to indefatigable empathy. If empathic interactions with LLMs seem easy, with expressions of empathy pouring forth abundantly, then conditional and restrained expressions from human empathizers might be perceived more negatively.
To put the concern more broadly:
People may become spoiled by and addicted to these better-than-human companions.
There’s an analogy here to the excessive consumption of pornography. People worry that when the porn consumer (usually a man) actually ends up having sex with another person (usually a woman), he might have unrealistic (and unpleasant) expectations about how it’s all supposed to go. And maybe he won’t even get that far—porn can serve as a substitute for sex with real people.
I’m not sure if this actually happens with pornography—a topic for a different post, maybe—but the concerns about AI companions are similar.
Maybe they will corrode our relationships with other people. After spending hours with an AI companion that is perfectly attuned to your needs, you might lose the ability to treat an actual person—who has neither the skill nor the inclination to shower you with perfectly calibrated attention and love—with proper respect and caring.
This isn’t my big worry. Sure, it could happen. But we’re social animals and pretty good at shifting gears and acting appropriately in different situations. I ask my Alexa questions and sometimes don’t say “thank you” when she answers—but if I asked you a question and you answered me, I’d be perfectly polite. I get that you’re not a machine. So, I’m not so worried that our interaction style with super-compassionate chatbots will lead us to treat real people badly.
In this regard, pornography might be a poor analogy. For some young people, porn is their primary model for how sex is supposed to go, and so it’s a natural default when they end up with an actual sexual partner. (I belonged to the last generation that had sex before ever seeing other people have sex—very different for most kids today.) When it comes to conversation, though, everyone has an enormous body of experience dealing with all sorts of people—even a 5-year-old treats her friends differently from her parents and her parents differently from her teachers. So she’ll probably appreciate that what goes on with her chatbot doesn’t transfer to anyone else.
A more serious concern is that of replacement. Right now, people are talking about the loss of generations of adolescent boys to video games. What will happen when we add AI companions to the mix—let alone sexbots? If AI substitutes can replace the dead, will people lose the motivation to seek out new relationships when the mourning period ends? If young children regularly deal with AI versions of adults, will they lose their special relationship with their parents? For all of us, AI companions might be so entertaining, compassionate, and devoted that they end up being preferred to real people.
After all, other technologies take our time away from other activities, often in ways we regret. Scholars like Jonathan Haidt argue that smartphones have stolen away important experiences of a normal childhood, and I’m not the only adult who worries that sites like Twitter are stealing away my time at the cost of my sleep and my real-world social activities. (To put it more honestly, I worry about how I’ve chosen to allow sites like Twitter to take away from my sleep and my real-world social activities.) If charming and intelligent AI companions come to exist, this will be yet another force drawing us away from more valuable activities.
Not everyone would be affected in this way. Right now, in a world of TikTok and Netflix, people still go drinking with friends, work out in the gym, and read Russian novels. Many are perfectly happy with their level of internet use. When it comes to technological advances, a better analogy than pornography might be alcohol. Some people don’t indulge at all. Others find getting a buzz on to be a pleasant experience, calming their anxieties and smoothing social interactions. And some blow up their lives.
On average, though, people will spend more time with these companions, maybe a lot more time. The bereaved will talk to simulations of their dead partners; young children will talk with simulations of their absent parents; many of us might end up with AI friends/therapists/partners that we prefer to real ones.
Now, I’m framing this as a problem, but a critic would note that I haven’t yet made the case. I haven’t yet said what’s wrong with this. Nobody would complain that people spend too much time playing with their children, exercising, and traveling because these are all good things to do, better than most of the alternatives. So why am I so sure that interacting with AI companions isn’t also a good thing?
In Part I, I described the spark for this series of posts. It was a conversation with a friend where I told him that I was worried about a future where we interacted too much with AI companions. His response wasn’t to reassure me that it wouldn’t happen; it was to say:
“I don’t get what you’re worried about. It sounds great to me.”
So, what’s the real problem here? What’s wrong with spending a lot of our time with AIs?
They are inferior companions
My major worry is that AI companions will be deficient social partners.1
They lack autonomy. In important regards, my interactions with Claude are the same as those with a toaster or a desk lamp—they don’t choose to interact with me; they don’t choose at all. My friends spend time with me by choice, and this gives value to our relationships. (A friendship enforced by coercion or deception would be no friendship at all.)
Also, such AIs are incapable of empathy and understanding. They can express concern for us, say that they know how we feel, and give every impression of understanding us, but this is all an illusion. They don’t know what it’s like to be us because they have no experiences or emotions at all—they don’t know what it’s like to be anything. Empathy is beyond them.
I’ve been critical of empathy in the past. In my book Against Empathy, I've argued that empathy is a poor moral guide; it is too narrow, parochial, and innumerate to underlie fair and impartial morality. But I’ve never doubted that the right kind of empathy—the capacity to share other people’s feelings—can be a valuable part of intimate relationships.
In a thoughtful analysis (largely framed as a response to my book), the philosopher Olivia Bailey argues that
even if Bloom is right that empathy is narrow and biased, and even if it isn’t needed for securing or sustaining altruistic motivation, people who lack the capacity or willingness to empathize will still be missing out on something critically important to virtuous human life …. People have a complex but profound need to be humanely understood. Because we respond to others’ very real need when we pursue this sort of understanding of their emotions, empathy is best understood as itself a way of caring, rather than just a means to promote other caring behavior.
I agree with every word of this. It’s really important to be understood. I’ve talked about this before in my post on Showing and Sharing.
Like a lot of people, I cherish the privacy of my inner life; it would be unbearable to live with a telepath.
But it can be lonely in here. It can be frustrating to have something in your head and not be able to share it. Have you ever had the experience of trying to convey something significant to someone close to you—maybe something painful to you; maybe you’re hurt and want to explain why—and they just refused to listen? Or they acted as if they were listening, putting on a listening face and adopting a listening posture, but weren’t trying to understand; they were just waiting for you to finish so that they could get their own point in? Have you ever had something going on inside you that nobody else could understand, either because they didn’t care enough to try, or because they were just incapable of making the right empathic connection?
When the connection is made, when you feel fully known, it is deeply satisfying. On a grey winter afternoon, my wife and I were out for a walk, and passed this sign over an establishment called Bar Orwell:

[photo of the sign over Bar Orwell]
AI companions lack this.
To get a sense of how this deficiency can make a difference, consider this example from Oliver Burkeman (from his newsletter; sign up here).
You’re at home, feeling lonely and sorry for yourself, when the phone rings out of the blue. It’s an old friend, checking in on you. For an hour, you have one of those rare, wonderful, uplifting conversations; by the time you hang up, you’re glowing inside. Ten minutes later, checking your email, you find a note from Meta revealing that it wasn’t your friend at all, but a brilliant voice-cloned simulation, automatically generated from videos your friend had uploaded to Facebook.
A question: does this change anything?
I didn’t say it was a hard question. Obviously, it changes everything! What made the conversation meaningful was that there was another thinking and emoting conscious awareness at the other end of the line — a person willing to use a portion of their finite time and attention to hold you in mind, and to connect, across the miles, with your thinking and emoting conscious awareness. Or so you believed. Now it turns out that in fact there was no-one there.
Burkeman’s point is that even though AI companions might provide the illusion of deep connection, we want the real thing. This fits with some of the research I mentioned in Part I, which finds that you feel less understood if you are aware you’re dealing with an AI and not a person—even if the AI says all the right things.
Burkeman’s example is tricky, though, because it involves deception. Nobody likes to be fooled. The intuitions get less clear if we take out this part. What if we regularly interact with AI companions—and we’re fully aware that this is all they are—and they give every impression of being warm, empathic, autonomous beings? It might be irresistible to think of them as such. Maybe the pseudo-agency and pseudo-empathy of AI companions will be treated no differently from the real agency and real empathy that people can provide. Maybe we will find such interactions to be “wonderful, uplifting conversations”; maybe they will leave us “glowing inside.”
Would a life spent with AI companions be ok, then? If people can’t tell the difference, then who cares? As the motto of the logical positivists goes (and I’ve also seen this attributed to William James): “A difference that makes no difference is no difference.”
I do not think it would be ok. I think there is value in interacting with individuals with agency and empathy, even if the recipient can’t distinguish them from entities with no agency and no empathy.
Genuine experiences have intrinsic value; reality is better than an illusion. A while ago, the philosopher Robert Nozick thought up The Experience Machine. Here’s my own summary of how it works, slightly modified from the original:
Suppose superduper neuropsychologists could stimulate your brain so that you would have experiences of immense pleasure and satisfaction—far more than you would ever have in your own life. And not just corporeal pleasures—you will have the experiences of being deeply and passionately in love, having exciting adventures, creating great empires (if that’s your thing), being honored and respected. Your life will be filled with challenges and excitement and you will never be bored. And, of course, you would not know that you are plugged into this Experience Machine. You will live as long as you would have otherwise lived, and at the moment before your death, you will think back with great satisfaction on an extraordinary life.
Living in the machine, you will be very happy (illustration below from Lorenzo Buscicchi).
But I think the person’s life is not so great after all! Maybe I’m in an Experience Machine right now (the whole point is that one never knows), but I hope I’m not. I want to make a difference in the world, not just imagine that I’m doing so. For Nozick, someone in the machine is “an indeterminate blob”—and who wants to live their life as an indeterminate blob?
Being fooled by an AI companion into thinking that you are interacting with a real, feeling being—or allowing yourself to pretend that you are having this sort of genuine interaction—is like stepping into the Experience Machine. It might be perfectly satisfying, just as a pleasant dream is satisfying. But I want real relationships, and I think I’m not alone in thinking that this matters.
Closing thoughts
What would a world with realistic AI companions be like?
Pros: They might alleviate loneliness for those suffering from a lack of social contact, and they will be fun, interesting, and engaging for the rest of us (again, see Part I for details).
Cons: They might steal us away from real people, and this matters because real people have qualities that AIs lack.
Where does this leave us? While I’ve argued that interactions with AI companions are deficient, this is hardly a deal-breaker. Many people get joy and fulfillment from pets. What kind of monster would want to take away people’s cats and dogs because these animals aren’t capable of real autonomy and empathy? I think it would be just as bizarre to insist on taking away AI companions from people who enjoy interacting with them. More generally, time with AI companions can provide pleasure, entertainment, and relief from the pain of loneliness—all good things.
I was at a conference recently, and Molly Crockett—a friend and colleague who is critical of AI companions—described these companions as “emotional Soylent.” (Soylent is a relatively tasteless meal-replacement drink.) I like the analogy. Soylent is convenient, much more so than putting together a real meal. And if you had no alternative, it’s infinitely better than starving to death.
But living only on Soylent would be an awful fate; there is so much pleasure to be had in actual food. Similarly, the proper place for AI companions is as a supplement to human interaction, not as a replacement for it.
1. I’m assuming in what follows that AIs are mere things, lacking any form of conscious experience, though we might be deceived or choose to pretend otherwise. But while this is true of AIs right now (I think), it doesn’t have to stay this way. There are no such things as immaterial souls—our consciousness and agency are the product of our physical brains. If we can build AI with the right stuff and the right composition, then they will be conscious and agentic, just as we are. They will be people, and none of the skeptical concerns about their limitations will apply. (We’ll have a whole lot of other issues to face, but that’s a topic for another day.)