Do we need AI to feel and to have real experiences for its empathy to matter?
Guest post by Michael Inzlicht & C. Daryl Cameron
From Paul Bloom:
A little while ago, I wrote a journal article with some friends called In Praise of Empathic AI. Then, on this Substack, I wrote two pieces where I expressed some second thoughts—My friend thinks it's a good idea for us to spend most of our time with AI companions, Part I and My friend thinks it's a good idea for us to spend most of our time with AI companions, Part II.
I shouldn’t have expected my apostasy to go down easily! Two of my co-authors, Michael Inzlicht and Daryl Cameron, wrote a thoughtful and very interesting response, and I offered to post it on this Substack. (I’ll write my own response to their response at a later date.)
Also, the “friend” I talked about in both articles? That was Michael.
Here goes:
Do we need AI to feel and to have real experiences for its empathy to matter?
Michael Inzlicht & C. Daryl Cameron
As the lead authors of a paper called In praise of empathic AI, we are hardly surprised to find ourselves disagreeing with some of Paul’s gloomier takes on empathic AI. That Paul is also an author of that very same paper makes this only slightly awkward. We say only slightly because we realize that we are taking a controversial stand about AI here. We too are aware of and concerned with the risks and ethical trade-offs of empathic AI. Nonetheless, while many academics are worried about empathic AI, we are cautiously optimistic. Some have even accused us of being effective accelerationists.
We agree with much that Paul wrote about empathic AI. In our paper, we tried to present both pros and cons of empathic AI, much as Paul did in the two parts of this Substack. Although it might be tempting to read our paper as overly positive about empathic AI given our title, the actual content of the paper is more measured and balanced. Importantly, we see it as a brief starting point for a much longer and important conversation.
Given that we mostly agree, we want to focus on the one spot where we might disagree: devaluing AI’s empathic responses because AI is not a real conscious agent with bona fide experiences. This is the main reason we felt compelled to write a paper on AI’s remarkable empathic expressions in the first place: Can people benefit from empathic AI even when it tells them it is not a real agent and cannot feel empathy?
Granted, Paul also quotes the cognitive scientist Molly Crockett, who describes empathic AI as Soylent, a mostly tasteless meal replacement that cannot compete with the flavour of, say, filet mignon.
We are less bothered by this critique because we think that much of human empathy is also Soylent quality, if not worse. Our friends, family, and therapists are often so tired, busy, and distracted that they sometimes fake being interested and understanding. The two of us have studied this extensively through the lens of effort, examining why people choose to avoid empathy for strangers and loved ones alike. Filet-mignon-level human empathy might be rarer than we think. Paul suggests that people want “real talk” from their interaction partners, but why assume humans are very good at providing it? After all, Paul has already alerted the field that human empathy might be severely limited, something we believe is mostly the product of choices and motives.
Nonetheless, if human empathy were truly better, most people would prefer it and stop relying on AI for emotional nourishment. The problem is that, at least based on the limited experiments that exist, people feel as cared for and understood, if not more so, when the empathy is generated by AI. And this is the case even when they know they are interacting with an AI. Perhaps human empathy is more like Soylent than we imagine. More and better empirical evidence on empathic AI is needed before rushing to judgment.
So, let’s return to Paul’s main misgiving about empathic AI, which boils down to a preference for reality:
“I think there is value in interacting with individuals with agency and empathy, even if the recipient can’t distinguish them from entities with no agency and no empathy. Genuine experiences have intrinsic value; reality is better than an illusion.”
We share Paul’s intuition here. We too think reality has intrinsic value. But, when we ask ourselves why, we find ourselves unable to explain, as if we are morally dumbfounded. Clearly reality is important in many contexts, and being out of touch with it could prove fatal, such as when believing that a road is clear of cars only to be flattened by reality. But we are unsure if empathy is a case where reality is necessarily better than an illusion.
We are unsure because we don’t think people are as attached to reality as Paul would have us believe. Although some have argued that Nozick’s experience-machine thought experiment shows that most people prefer to live in the real world rather than in a pleasant simulation of the exact same world, recent refinements of the thought experiment suggest that most people would be just fine with reality-enhancing artificial technologies. The original formulation of the experience machine included numerous biasing factors that pushed respondents to prefer status-quo reality. Yet when these biasing factors were removed, most people preferred the experience machine. What is more, when people decided between reality and an experience pill that enhanced reality and personal functioning, nearly 90% of respondents preferred the experience pill. We wonder if empathic AI is more like a reality-enhancing experience pill than a scary brain-in-a-vat experience machine. For example, empathic AI could enhance a person’s social skills.
That people want to enhance reality should come as no surprise. Many people drink coffee and alcohol, smoke a lot of cannabis, and take all manner of psychoactive substances. Presumably they do so because it enhances their reality, even if only to increase feelings of transcendence. More people than you might expect enjoy “professional” wrestling even though it is scripted and fake. Many young children have friends who are completely imaginary. Many not-so-young people form real attachments to celebrities and movie characters they will never meet or that are fictional. A desire to enhance reality might explain why millions of people already use empathic AI for companionship.
Given that much of the focus in moral psychology is on the perception of harm and value in the world, we think that focusing on people’s subjective preferences is more important than asking whether the experience seemingly required for empathy is real. In our original paper, we punted on the deeper question of whether AI can actually empathize for a reason: it is the perception of empathy that matters for recipients of empathy. This is why we believe we should keep asking how people perceive empathic AI. We also agree with Paul that we need to examine this over the long term to see if people’s preferences change once habituation sets in. By asking potential users of this technology how they feel and what they prefer, we privilege their perceptions rather than those of outside observers like ourselves.
We can’t help but wonder if objections to empathic AI have less to do with a preference for reality and more to do with the moralization of AI. For some, AI violates moral principles and as such should be opposed absolutely. Moralized attitudes resist being evaluated through cost-benefit trade-offs. In an ongoing project led by UofT PhD students Victoria Oldemburgo de Mello, Eloise Côté, and Reem Ayad, we found that AI is indeed moralized for some. First, the same people who oppose empathic AI also oppose chatbots, AI art, and the use of AI in legal settings. This suggests that a single latent factor underlies attitudes toward AI, even when the AI is applied in wholly different contexts. Second, over 20% of our participants were morally opposed to AI, unwilling to accept it in any context no matter the potential benefits. Finally, the more a person values purity (the more they see what is natural as inherently good), the more they morally oppose AI.
We are not trying to smear critics of AI, and certainly not Paul, by saying that they oppose AI on moral grounds or that they view it as an unnatural abomination. Instead, we raise the possibility that some of our opposition to AI might stem from the naturalistic fallacy, the idea that what is found in nature is good and what is artificial is bad and to be treated with skepticism. Moralization essentializes what is a complicated interaction between technology and the humans using it. Just as we shouldn’t essentialize human empathy as fundamentally good, we shouldn’t essentialize AI empathy as fundamentally bad.
Again, we agree with Paul more than we disagree. We agree that empathic AI might have benefits and increase human welfare. We also agree that there are serious dangers to empathic AI that we must deal with head-on. We both share the intuition that human friends are preferable to AI friends. Where we diverge is in how to justify this intuition. Paul thinks it is because AI is not real, while we are less sure that a preference for reality has much to do with it. We are still working through these ideas and are open to changing our minds. That is also why we encourage more research on the topic and more and deeper conversations among psychologists, philosophers, and engineers. Until then, please forgive our cautious championing of empathic AI.
Michael Inzlicht is a Professor of Psychology at the University of Toronto and can be reached at michael.inzlicht@utoronto.ca or @minzlicht on X/Twitter. C. Daryl Cameron is an Associate Professor of Psychology at The Pennsylvania State University and can be reached at cdc49@psu.edu or @DCameron84 on X/Twitter.
> much of human empathy is also Soylent quality, if not worse.
Yup. Nearly everyone is just waiting for their chance to talk. And we all have our deep cognitive biases.
I read a critique of Peter Singer recently that said, in part, "I disagree with his hedonism. Only my preferred way of being happy is legit!" I get that sense from a lot of people in all sorts of areas.
Great discussion.
I don't think the difference has to matter, but it will matter to people who have decided that it matters to them. My intuitive response is that it seems to matter a lot if the interlocutor has the kind of emotions and experiences that I do, but my intuitions are sometimes wrong, and this seems to be one of those times when there isn't a great reason to support them.
I like Nozick's Experience Machine thought experiment; not so much as evidence that people prefer reality over simulations (I think people prefer the status quo, and most would prefer the simulation if that's where their identities had been formed and then they were offered the chance to switch to reality), but as an intuition pump for getting at what happiness and a good life mean to us.