Over the last year, I wrote a series of articles where I discussed AI companions, with the main ones being My friend thinks it's a good idea for us to spend most of our time with AI companions, Part I, and My friend thinks it's a good idea for us to spend most of our time with AI companions, Part II.1
Then my pals Michael (Mickey) Inzlicht and Daryl Cameron wrote a guest post where they disagreed with some of what I said: Do we need AI to feel and to have real experiences for its empathy to matter?
This is my response to their response.2 Of course, you should read all of the preceding posts, but you don’t have to. This is a standalone piece.
What we agree about
If you read their pieces and mine, you’ll see that there’s a lot that we agree about concerning AI companions.
Some human interactions are pretty bad. People can be self-focused, boring, distracted, and unpleasant to talk to.
Some AI interactions are pretty good. In my posts, I review evidence that, at least for short interactions, people often feel more understood when talking to an AI than when talking to a person.
AI companions might substantially improve people’s lives, particularly the lives of those starved for social contact. As I put it in an earlier post: “If AIs could serve as friends, companions—and, who knows, romantic partners—to lonely people, then they would add so much happiness to the world.”
What we disagree about
They accurately sum up our clash:
Given that we mostly agree, we want to focus on the one spot where we might disagree: devaluing AI’s empathic responses because it is not a real conscious agent that has bona fide experiences.
Now, if it turns out that AIs are conscious, sentient beings, there isn’t an issue here. But if they are soulless automata with no empathy and no agency—and this is all they are right now—then, yes, I’m devaluing their responses. We lose something when we spend time with them at the expense of being with real people. As I write, “There is value in interacting with individuals with agency and empathy, even if the recipient can’t distinguish them from entities with no agency and no empathy. Genuine experiences have intrinsic value; reality is better than an illusion.”
They think my view is so crazy that I must actually be bothered by something else
After quoting me on this point, Mickey and Daryl write:
We share Paul’s intuition here. We too think reality has intrinsic value.
That’s nice! But then they say this:
But, when we ask ourselves why, we find ourselves unable to explain, as if we are morally dumbfounded. Clearly reality is important in many contexts, and being out of touch with it could prove fatal, such as when believing that a road is clear of cars only to be flattened by reality. But we are unsure if empathy is a case where reality is necessarily better than an illusion.
They say more about this (and I’ll respond to their comments below). But they find the idea so unlikely that they end by suggesting that critics like me are really worried about something else.
We can’t help but wonder if objections to empathic AI have less to do with a preference for reality and more to do with the moralization of AI. For some, AI violates moral principles and as such should be opposed to the extreme.
Then they say that they’re not accusing me of being confused here.
We are not trying to smear critics of AI, and certainly not Paul, by saying that they oppose AI on moral grounds or that they view it as an unnatural abomination.
But then they suggest that an even sillier confusion is taking place.
Instead, we raise the possibility that some of our opposition to AI might stem from the naturalistic fallacy, the idea that what is found in nature is good and what is artificial is bad and to be treated with skepticism.
Really? I’m falling for the naturalistic fallacy? The dumbest fallacy there is?
I blame myself for not making a better case for my position.3 Let me try again.
A friend vs. a simulation
In one of my posts, I give this example from Oliver Burkeman (from his newsletter; sign up here).
You’re at home, feeling lonely and sorry for yourself, when the phone rings out of the blue. It’s an old friend, checking in on you. For an hour, you have one of those rare, wonderful, uplifting conversations; by the time you hang up, you’re glowing inside. Ten minutes later, checking your email, you find a note from Meta revealing that it wasn’t your friend at all, but a brilliant voice-cloned simulation, automatically generated from videos your friend had uploaded to Facebook.
A question: does this change anything?
I didn’t say it was a hard question. Obviously, it changes everything! What made the conversation meaningful was that there was another thinking and emoting conscious awareness at the other end of the line — a person willing to use a portion of their finite time and attention to hold you in mind, and to connect, across the miles, with your thinking and emoting conscious awareness. Or so you believed. Now it turns out that in fact there was no-one there.
I figure most people share Burkeman’s intuition. I certainly do.
The presence of a conscious, caring being doesn’t always matter. If I read a news summary, a weather forecast, or a financial analysis, it doesn’t have to be generated by someone with empathy and autonomy—an AI will do just fine. If I get comments on a draft of this essay, and they’re helpful, it doesn’t matter whether they come from a person or an algorithm, from my colleague or Claude.
Sometimes, an AI is even better than a person. I’ve used Claude to teach me things (see my post Six Ways I Use AI when Writing), and in some ways, it’s superior to a human tutor—it’s infinitely patient, and I don’t have to worry about it thinking badly of me if I’m slow to get the point. I’d also happily move to an AI accountant, lawyer, or doctor.
Sometimes, though, an AI can’t give us what we most want. Here are three things we get only when interacting with real people.
Choice. If you’re a TSA agent or a bouncer, you’ll spend much of your time talking to people who would rather not be talking to you. But our most valuable interactions are those where the other individual wants to be there. This is a good feeling—it means a lot that another human being has chosen to be with us, that they find us interesting enough and stimulating enough and fun enough to spend time with us rather than engaging in any of the many non-social alternatives the world has to offer.
You don’t get this from an AI. Claude is honest here:
Compassion. A person can care about you. They can sympathize with your woes and cheer on your good fortune. Cicero noted that being with a friend “improves happiness and abates misery by the doubling of our joy and the dividing of our grief.” Sometimes, other people even love you.
Claude, on the other hand, couldn’t give a shit. Here it is, being honest again.
Reactions. One of the pleasures of interacting with a person is that they react to you. This is easiest to see with more physical examples. One of the things that makes kissing someone enjoyable is knowing that this other person is enjoying kissing you. (And part of their enjoyment is their knowledge that you are enjoying kissing them—it’s all sweetly recursive.) I’m not saying that this is necessary. I’m sure, in the future, people will enjoy smooching with sexbots, and right now, sex toys are commonly used for pleasure. But something is missing in these substitutes; there is something special about reciprocal pleasure, pleasure that involves another consciousness.
Or take laughter. It’s fun when a friend makes you laugh, but it’s also fun when you make a friend laugh. Claude can tell me when I’m funny, and one day the voice version of ChatGPT will provide a convincing fake laugh. But it means little when there’s nothing behind it. Here’s a clip from Star Trek: The Next Generation that makes this point.
We get none of this when we interact with AIs.
Get unreal
There are a couple of paragraphs where Mickey and Daryl express skepticism about our attachment to reality.
We are unsure because we don’t think people are as attached to reality as Paul would have us believe. Although some have argued that Nozick’s experience-machine experiment suggests that most people prefer to live in the real world than in a pleasant simulation of the exact same world, recent refinements of the thought experiment suggest that most people would be just fine with reality-enhancing artificial technologies. The original formulation of the experience machine experiments included numerous biasing factors that pushed respondents to prefer status quo reality. Yet, when these biasing factors were removed, most people preferred the experience machine. What is more, when people decided between reality and an experience pill that enhanced reality and personal functioning, nearly 90% of respondents preferred the experience pill. We wonder if empathic AI is more like a reality-enhancing experience pill than a scary brain-in-a-vat experience machine. …
That people want to enhance reality should come as no surprise. Many people drink coffee and alcohol, smoke a lot of cannabis, and take all manner of psychoactive substances. Presumably this is done because it enhances people’s reality, even if only to increase feelings of transcendence. More people than you might expect enjoy “professional” wrestling even though it is scripted and fake. Many young children have friends that are completely imaginary. Many not-so-young people form real attachments with celebrities and movie characters that they will never meet or that are fictional.
I agree with every word of this. It’s fun to enhance reality. Some people do it with crack cocaine; others with Miller Lite or a double espresso. We often like to escape reality altogether—people daydream, fantasize, and enjoy novels, TV shows, movies, and even pro wrestling. Children have imaginary companions; adults swoon over Mr. Darcy and Natasha Romanoff.
Mickey and Daryl discuss the philosopher Robert Nozick’s Experience Machine—a device where you lie passively while having wonderful experiences you believe are real—and point out that many people find this mightily appealing. I agree. In my post on the topic—The Experience Machine—I put it like this:
Would you pay for an extended dream where you had a vivid experience of leading a mission to Mars, or being with someone you loved very much who has since died, or winning the Nobel Prize, or having sex with the person you are most attracted to? I haven’t asked around, but I bet most people will say: Hell, yes.
I also bet that people would rather have this extended dream than many real-world experiences, like being seasick on a whale-watching tour or trying to pay your AT&T bill online.
So, I don’t think that reality trumps everything. I’d rather converse with Claude (soulless, no agency, no emotions) than with a sullen person who doesn’t like me. Better a kiss from a sexbot than a punch in the face from a flesh-and-blood human. Better the company of an AI than total isolation.
But I do think reality matters, and none of the points that Mickey and Daryl make challenge this position. It’s because reality matters that we would rather interact with people who have real autonomy and compassion than with AIs. It is because reality matters that we would rather kiss someone who can experience being kissed, and that we find real laughter better than simulated laughter.
What if we don’t know?
I said above that we want three things.
We want relationships that are chosen.
We want relationships with beings who care about us.
We want to affect other individuals—in different circumstances, we might want to impress, engage, arouse, and crack them up.
If we feel that AIs fall short of this, we won’t be satisfied with them.
But what if we are fooled? What if we regularly interact with AI companions, and they give the impression of being warm, empathic, autonomous beings? Maybe the pseudo-agency and pseudo-empathy of AI companions will be treated no differently from the real agency and real empathy that people can provide, even if, at some level, we know it’s all fake.
This would be a lot—but it wouldn’t be enough. We don’t want the illusion of choice, compassion, and reaction. We want the actual things.
We want relationships that are chosen, not ones that merely appear to be chosen.
We want relationships with beings who care about us, not ones that merely seem to care about us.
We want to affect other individuals—in different circumstances, we might want to impress, engage, arouse, and crack them up (not merely have something give the illusion of being impressed, engaged, aroused, and amused).
I admit that this is an empirical claim. I could be wrong about what people will like. Mickey and Daryl might argue that people are indifferent to whether they are interacting with people who have agency and autonomy or with algorithms that provide convincing simulations. If so, AIs might be seen as better companions, friends, and romantic partners than people. I guess we’ll see.
But we’re not there right now, so our current AIs are inadequate substitutes for real people.
Final point
Why did I write this essay?
In part, just for myself. Writing helps me get my head straight; it helps me think. But mostly, I wrote it for other people. I wanted to continue the dialogue with Mickey and Daryl and see what they think of my response. And I wanted a larger audience to read it and think about it, and give me feedback. Put more generally, I wrote it to get my ideas into the heads of other people.
Why did Mickey and Daryl write their piece? I assume it’s for much the same reasons. They could have written it up and uploaded it to Claude for comments, sending off copies as well to Gemini, Grok, and ChatGPT. But this would be ridiculous. Their defense of AI companions has its limits. When it comes to something that matters, they want to interact with real people.
The others are Be Right Back and The Dog That Didn’t Bark.
Thanks to Mickey and Daryl for comments on an earlier draft of this piece. Needless to say, they don’t agree with the conclusions I come to.
Naw, not really. I blame them! But there are no hard feelings, and to ensure that the mood stays light, here’s a picture of me and Mickey on Halloween. My costume is obvious. He is dressed as half of a peanut butter and jelly sandwich (his wife was dressed as the other half).
I agree with most things in this essay, but I get the sense that there is something missing on both sides of the argument - specifically in the real-person-vs-AI-person conversation. The examples all seem to concern an isolated conversation, when in everyday life, relationships between people (or with a proposed AI) are a series of conversations. What seems not to be considered here, in my opinion, is the time in between each conversation.
Current AIs do not think in between conversations. They do not talk to other people specifically about you, they do not reflect on what they said or how they said it, nor do they plan what they are going to say the next time you "meet".
I think the value & connection in real-life conversations lies not in any individual conversation, but in the thread between each one. With current AIs, there is no thread.
Very interesting.
I think I agree with you, and the nuanced point you're making which I'm going to crudely translate to "yes, sometimes it's better to have a professionally provided Girlfriend Experience than nothing, but it's infinitely better to have an actual girlfriend". But I'm also using this analogy to sex work for a reason: I think there are many layers between "rational information seeking and ideas elaboration" and "empathetic connection with humans who actually care for us".
Take counselling, therapy, or even this volunteer thing called "befriending". These people choose to interact with others, and they probably SORT OF care, in the same way a sex worker might enjoy spending time with some clients. Yet a big part of their motivation is NOT the spontaneous joy of interacting: it's either money or a feeling of responsibility/wanting to help where help is necessary.
And I think far more of our social needs are fulfilled at those intermediate levels. Modern friendship is a relatively recent invention. Historically, people largely operated within "default" kin and community structures. You didn't even consider whether you actually liked your interactions with your children or elderly parents; you just caretook, because it's a thing one does. Many people still do.
So, it's layered.
Anecdote time:
I have a counsellor. She's a nice and bright woman I pay to listen to me think-talk about my inner sludge or life stuff without any need for reciprocity. She occasionally gets a sentence in edgeways, and those can at times be very useful questions or perspective shifts. But mostly she performs the human-listener role, and also performs empathy in the way and at the level I like. Not too much, but it's there. Whether she really FEELS the empathy, I have no idea. I don't think it matters.
Until recently I also used our sessions to THINK ALOUD at her -- to bounce off her my ideas about what the mechanism of some of my emotional or even neuropsychological processes might be, to work things out for myself with her as my sounding board and summariser.
More recently, though, I started to use ChatGPT for that purpose. It's MUCH better at reflecting my own thoughts about my process than she is. It is more balanced in its outputs (i.e., it's even wordier than I am). And it's nearly free compared to even a cheap counsellor.
This means that I have more time in my counsellor sessions for talking about specific emotional points, and for the human rapport and connection.
Does it mean the counsellor is more of a paid one-sided friend experience and the robot is de facto a better "therapist"? I think not, because I don't think a therapist is mostly an interpreter of maladies or an elaborative mirror. But as the latter, the robot works really well. It obviously does not understand me at all -- it doesn't understand anything, and it doesn't know any real meanings. And its pretend empathy is annoying, despite my changing settings to try to limit it. But as a SIMULACRUM of understanding, of the intellectual kind, it works really, really well.