25 Comments
Peter Donis:

Why do you think pets lack real autonomy and empathy?

That pets have autonomy should be clear, certainly to anyone whose pet has made an unexpected mess in their house.

But I think it's also clear that pets can have empathy. They can't *talk* about it, but empathy itself does not require language. For example, our dog can always tell when either of us is ill or upset, and he gives us more attention at those times. That seems like a clear nonverbal sign of empathy.

Paul Bloom:

Maybe they have some, but plainly a lot less than a normal human adult. Your dog might make a mess on the floor, but it can’t choose to live by itself or sever all ties with you. Maybe your dog can tell if you’re ill or upset, but it’s not going to appreciate what you feel like when you’ve lost a job offer you were expecting. Like I say in my piece, relationships with pets have great value, but they are asymmetrical relationships with very limited creatures (in some regards, it’s like the relationships that parents have with their toddlers).

Peter Donis:

> Maybe they have some, but plainly a lot less than a normal human adult.

Sure, there's a range for such things, and a pretty wide one at that. Yes, maybe our dogs have a lot less than a normal human adult. But having less doesn't mean it's not "real".

> relationships with pets have great value, but they are asymmetrical relationships with very limited creatures (in some regards, it’s like the relationships that parents have with their toddlers.)

Yes, of course (we pretty much think of our dogs as toddlers). But the relationship being asymmetrical doesn't mean the dogs' end of it does not offer any "real" agency or empathy, any more than between parents and toddlers. It's limited, yes, but that doesn't make it not "real".

Perhaps the question here is what criteria you are using to distinguish "real" autonomy and empathy from whatever ersatz kinds you think pets and AIs provide.

Paul Bloom:

Sure, fair enough. Just replace "real" with "very low levels of", and my point still goes through. My argument is that even if a companion is not fully human, it can still provide real pleasure and companionship, and so the relationship has some value. I take it that you agree with this?

Peter Donis:

Yes, with that I agree.

AC:

My partner was rather pleased when I started asking ChatGPT (instead of him) to comment on my drafts. (I preferred it too—it can digest articles in no time and is always available.) But I think he was only half joking when he lamented being replaced by AI, as he didn't stop me from unsubscribing and is now happily editing again.

Bec Evans:

I really enjoyed this, and it got me thinking about the idea of friction: the promise of technology, and of AI, is to reduce friction. Its ‘goal’ is efficiency.

But what makes us human and makes life fulfilling is our ability to sit with discomfort and find connection and meaning. Which is very inefficient.

Oliver Burkeman wrote about this in relation to work: many of us choose to spend time at work because it is more predictable, and therefore more manageable, than our messy, complicated personal relationships. I am sure the analogy with porn and sex is the same, as it is with Soylent: the pleasure of eating is the opposite of efficiency.

Where do you see friction in this? Not struggle for the sake of it, or doing hard things for their own sake, but the idea that fulfilment often comes with effort.

Eric Zimmer:

Hi Paul. Great article. Interestingly, I had a conversation with Michael Inzlicht recently and did not connect that you were on that paper with him. We were discussing self-control, not AI.

The comment from Sherry about not having faith that we will be there for each other is instructive. Because the reality of the world right now is that in many cases people don’t have someone.

A friend of mine who is a therapist recently got a call from an AI company asking him to help them train an AI therapist. He was really opposed to the idea and thought it was awful.

My argument to him was that, yes, an AI therapist is not as good as a real therapist, unless that real therapist happens not to be any good.

It seems to me these things fall on a hierarchy. In the case of the AI therapist, the best thing is a good human therapist. The worst thing is an incompetent AI therapist, but a really well-trained AI therapist is better than no therapist or a bad human therapist.

I think companions are similar. A good human friend is best. But an empathetic-seeming AI is better than abject loneliness or lousy friendships.

So when used to fill a gap, I think they are a great thing. But as you point out, the worry is that they may begin to replace good relationships, or stop us from seeking real ones.

Andy Blank:

People are pretty well hardwired to sense real value: the thing they need to work to produce, to live. Playing video games isn't the same as actually going through combat and defeating the Nazis, watching a porno isn't the same as asking a girl on a date and having a fling, and talking to an LLM is not going to be the same as making friends. People know when they've accomplished something, as opposed to simulated an accomplishment. I don't think any type of AI or VR is going to change that anytime soon. There may be troubled people who abuse it, the same way people lose their lives to drugs or alcohol, or sit in front of a TV to excess. But for those people, the addiction is a symptom of some other underlying issue.

In other words, VR or AI interactions may replace TV or video games as unhealthy addictions, but the underlying causes will be the same. I don't see this as a big change in the way human nature works.

Theodore A Hoppe:

In her book, Alone Together: Why We Expect More from Technology and Less from Each Other, and in her TED Talk (2012), Sherry Turkle addresses AI companions or social robots "...that are specifically designed to be companions -- to the elderly, to our children, to us," by asking the question, "Have we so lost confidence that we will be there for each other?"

"During my research I worked in nursing homes, and I brought in these sociable robots that were designed to give the elderly the feeling that they were understood. And one day I came in and a woman who had lost a child was talking to a robot in the shape of a baby seal. It seemed to be looking in her eyes. It seemed to be following the conversation. It comforted her. And many people found this amazing. But that woman was trying to make sense of her life with a machine that had no experience of the arc of a human life. That robot put on a great show. And we're vulnerable. People experience pretend empathy as though it were the real thing. So during that moment when that woman was experiencing that pretend empathy, I was thinking, "That robot can't empathize. It doesn't face death. It doesn't know life." And as that woman took comfort in her robot companion, I didn't find it amazing; I found it one of the most wrenching, complicated moments in my 15 years of work. But when I stepped back, I felt myself at the cold, hard center of a perfect storm. We expect more from technology and less from each other. And I ask myself, "Why have things come to this?"

Adam Ries:

In the same way that planes fly effectively without flapping, and LLMs think effectively without neurons, perhaps we'll end up with artificial companions that socialize effectively, but in a dramatically different way from their natural analogues: us.

In your last post, GPT responded to "I feel stuck and unhappy and I don't know why" with "I'm truly sorry to hear that you're feeling this way. It can be tough when emotions aren't clear, and it's okay to feel uncertain. I'm here to listen." It sounds a little canned, but it's genuinely a fine response. There is one aspect of it that feels uncanny, though: it's trying to flap its wings. It is a machine, but it says that it's truly sorry, and that it's here to listen. In reality, it's not experiencing being sorry, and it's really in a million places at once on a million devices, or somewhere in a Californian data center, but not here. The version of AI that I would want to empathize with me has real self-insight: a digital brain that is a strange loop, including itself as well as the real world. What does a good response to sorrow or pain look like when it can't be "sorry" or "here" but still holds the well-being of the user in high regard?

Empathobots in 2024 are crude ornithopters, and we need to figure out what jet-propelled compassion looks like. We (or they) need to develop a new, natural way of speaking with real humans from that almost-omnipresent perspective. That one's not in the training data, but it's the key to bridging the gap. If LLMs could truly experience love, I would want them to express it; and if they experience nothing, I want them at least to have deep enough self-knowledge to say so.

xad1sa:

AI can't understand our minds and our way of thinking the way other humans can. Chances are that another human has experienced the same circumstances and can give you advice on them, while AI can't.

Melissa Silva:

I’m really fascinated by this topic. I wrote a post this week about how Harlow’s wire-monkey experiment, which showed “contact comfort” to be key to bonding, becomes so interesting when applied to AI and its perceived value. Thank you for the post!

Charlatan:

Technological schizophrenia in the making...

Marek Veneny:

Cool article, Paul! It made me think about one I wrote the other day about the epidemic of "self-medicating" behaviors, where I used the analogy of Ersatzkaffee (the coffee substitute Germans used during WW2). I framed it under self-determination theory's basic needs (autonomy, competence, social connection) and argued that our lives are increasingly filled with substitutes for the real thing. Social connection, to keep with the theme, is substituted by much more easily accessible social media. Within social media, too, you have gradations of "worse", going from the relatively stolid Facebook to high-octane TikTok (although every platform is racing to introduce highly addictive reels). I think AI companions are just a continuation of this substitution phenomenon. I also wouldn't argue they are inherently bad; I bet there are cool use cases for many groups of people. But, like social media, I fear that people won't be able to use them in moderation. And that's where you get into trouble.

Though I must say that by the time I'm wearing diapers again (~50 years from now), I wouldn't be opposed to entering a Matrix (AI + VR) indistinguishable from the real world and living through a young and able avatar.

Theodore Bolha:

Although I don't get as much immediate satisfaction from ChatGPT "understanding" what I'm saying, I will get the delayed satisfaction from those who let me know that—because of ChatGPT—they were able to understand what I wanted to convey. So, I see ChatGPT as a great tool.

kurt godel:

Some of the modern Soylent varieties (like the 180-calorie chocolate flavor) are at this point not just bearable but actually semi-positively good!

Like something in the universe of protein shakes and Starbucks “Frappuccino”-type things.

Justin Ross:

I suppose this conversation boils down to whether you think consciousness is valuable in and of itself, since that's the value proposition of interacting with other conscious entities: what we're sharing is something that *affects* both of us. Because we're conscious, and sharing that is valuable. And it sounds like you think consciousness is valuable. As do I.

The alternative is basically secular, raw physics: that nothing going on between any of us is any more than the mere exchange of data, and therefore has no "value." And I just think that's a horrible way to view any of this, a horrible way to view life. If that's how someone feels, why even still be alive?

Kevin Grant:

Interesting piece. The mention of pets is very instructive.

Of course, as I was reading this I was thinking, "Yeah, but what if the AI is sentient? Doesn't that change everything?" Guess I should have read the footnote first.

Regarding that footnote: I'm inclined to agree that there are no immortal souls existing outside our physical beings, but is that necessarily true? Isn't it at least possible that there is a soul that transcends our bodies?

Chuck:

I was also thinking of pets. Current LLMs aren’t even pets; they’re just things. Is that because they are not sentient? We do grade animal interaction by presumed sentience (and since there’s nothing it is like to be Claude, Claude is less than a bug). But I bet agency and personalization will matter more. People will decide LLMs are sufficiently sentient after they start being good companions in some limited, pet-like way.

Diana van Eyk:

AI companions sound creepy to me.

When I think of how I relate to friends and family, there's give and take, and context: other people, projects, how they're actually doing. What would an AI companion contribute to the conversation?