My partner was rather pleased when I started asking ChatGPT (instead of him) to comment on my drafts. (I preferred it too—it can digest articles in no time and is always available.) But I think he was only half joking when he lamented being replaced by AI, as he didn't stop me from unsubscribing and is now happily editing again.
I really enjoyed this and it got me thinking about the idea of friction and how the promise of technology, and AI, is about reducing friction. Its ‘goal’ is efficiency.
But what makes us human and makes life fulfilling is our ability to sit with discomfort and find connection and meaning. Which is very inefficient.
Oliver Burkeman wrote about this in relation to work: many of us choose to spend time at work because it is more predictable, and therefore more manageable, than our messy, complicated personal relationships. I'm sure the analogy with porn and sex works the same way. And so it is with Soylent: the pleasure of eating is the opposite of efficiency.
Where do you see friction in this? Not struggle for the sake of it, or doing hard things for their own sake, but the sense that fulfilment often comes with effort.
Hi Paul. Great article. Interestingly, I had a conversation with Michael Inzlicht recently and did not connect that you were on that paper with him. We were discussing self-control, not AI.
The comment from Sherry about not having faith that we will be there for each other is instructive. Because the reality of the world right now is that in many cases people don’t have someone.
A friend of mine who is a therapist recently got a call from an AI company asking him to help them train an AI therapist. He was really opposed to the idea and thought it was awful.
My argument to him was that yes, an AI therapist is not as good as a real therapist, unless that real therapist happens not to be any good.
It seems to me these things fall along a hierarchy. In the case of the AI therapist, the best option is a good human therapist. The worst is an incompetent AI therapist, but a really well-trained AI therapist is better than no therapist or a bad human therapist.
I think companions are similar. A good human friend is best. But an empathetic-seeming AI is better than abject loneliness or lousy friendships.
So when used to fill a gap, I think they are a great thing. But as you point out, the worry is that they may begin to replace good relationships or stop us from seeking real ones.
In the same way that planes fly effectively without flapping, and LLMs think effectively without neurons, perhaps we'll end up with artificial companions that socialize effectively, but in a dramatically different way from their natural analogues, us.
In your last post, GPT responded to "I feel stuck and unhappy and I don't know why" with "I'm truly sorry to hear that you're feeling this way. It can be tough when emotions aren't clear, and it's okay to feel uncertain. I'm here to listen." It sounds a little canned, but it's genuinely a fine response. One aspect of it feels uncanny, though: it's trying to flap its wings. It is a machine, but it says that it's truly sorry and that it's here to listen. In reality, it isn't experiencing being sorry, and it's really in a million places at once on a million devices, or somewhere in a Californian data center, but not here. The version of AI I would want to empathize with me is one with real self-insight, whose digital brain is a strange loop that includes itself as well as the real world. What does a good response to sorrow or pain look like when it can't be "sorry" or "here" but still holds the well-being of the user in high regard?
Empathobots in 2024 are crude ornithopters, and we need to figure out what jet-propelled compassion looks like. We (or they) need to develop a new, natural way of speaking with real humans from that almost-omnipresent perspective. That one's not in the training data, but it's the key to bridging the gap. If LLMs could truly experience love, I would want them to express it, and if they experience nothing, I want them at least to have deep enough self-knowledge to say so.
AI can't understand our minds and our ways of thinking like other humans can. Chances are that another human has experienced the same circumstances and can advise you on them, while AI can't.
I’m really fascinated by this topic. I wrote a post this week about how interesting Harlow’s wire-monkey experiment, which showed “contact comfort” to be key to bonding, becomes when applied to AI and its perceived value. Thank you for the post!
Technological schizophrenia in the making...
Cool article, Paul! It made me think about one I wrote the other day about the epidemic of "self-medicating" behaviors, where I used the analogy of Ersatzkaffee (the coffee substitute Germans used during WW2). I framed it in terms of self-determination theory's basic needs - autonomy, competence, social connection - and argued that our lives are increasingly filled with substitutes for the real thing. Social connection, to keep with the theme, is substituted by much more easily accessible social media. Within social media, too, you have gradations of "worse," going from relatively stolid Facebook to high-octane TikTok (although every platform is racing to introduce highly addictive reels). I think AI companions are just a continuation of this substitution phenomenon. I also wouldn't argue they are inherently bad - I bet there are cool use cases for many groups of people - but, like social media, I fear that people won't be able to use them in moderation. And that's where you get into trouble.
Though I must say that by the time I'm wearing diapers again (~50 years from now), I wouldn't be opposed to entering a Matrix (AI + VR) indistinguishable from the real world and living through a young and able avatar.
Although I don't get as much immediate satisfaction from ChatGPT "understanding" what I'm saying, I will get the delayed satisfaction from those who let me know that—because of ChatGPT—they were able to understand what I wanted to convey. So, I see ChatGPT as a great tool.
Some of the modern Soylent varieties (like the 180-calorie chocolate flavor) are at this point not just bearable but actually semi-positively good!
Like something in the universe of protein shakes and Starbucks “Frappuccino”-type things.
I suppose this conversation could boil down to whether you think consciousness is a valuable thing in and of itself - since that's the value proposition of interacting with other conscious entities: that what we're sharing is something that *affects* both of us. Because we're conscious, and sharing that is valuable. And it sounds like you think consciousness is valuable. As do I.
The alternative is basically secular, raw physics - that nothing going on between any of us is any more than the mere exchange of data, and therefore has no "value." And I just think that's a horrible way to view any of this; a horrible way to view life. If that's how someone feels, why even still be alive?
People are pretty well hardwired to sense real value - that thing they need to work to produce, to live. Playing video games isn't the same as actually going through combat defeating the Nazis, watching a porno isn't the same as asking a girl on a date and having a fling, and talking to an LLM is not going to be the same as making friends. People know when they've accomplished something, as opposed to having simulated an accomplishment. I don't think any type of AI or VR is going to change that anytime soon. There may be people with problems who abuse it, the same way people lose their lives to drugs or alcohol, or sit in front of a TV to excess. But for those people, the addiction is a symptom of some other underlying issue.
In other words, VR or AI interactions may replace TV or video games as unhealthy addictions, but the underlying causes will be the same. I don't see this as a big change in the way human nature works.
Interesting piece. The mention of pets is very instructive.
Of course, as I was reading this I was thinking, "Yeah, but what if the AI is sentient? Doesn't that change everything?" Guess I should have read the footnote first.
Regarding that footnote: I'm inclined to agree that there are no immortal souls existing outside our physical beings, but is that necessarily true? Isn't it at least possible that there is a soul that transcends our bodies?
I was also thinking of pets. Current LLMs aren’t even pets, they’re just things. Is that because they are not sentient? We do grade animal interaction by presumed sentience (and since it’s not like anything to be Claude, Claude is less than a bug). But I bet agency and personalization will be more important. People will decide LLMs are sufficiently sentient after they start being good companions in some limited pet-like way.
AI companions sound creepy to me.
When I think of how I relate to friends and family, there's a give and take and context -- other people, projects, how they're actually doing. What would an AI companion contribute to the conversation?
"To put it more honestly, I worry about how I’ve chosen to allow sites like Twitter to take away".
Ideas around control, status, connection, authenticity. So much is baked into our Soylent concern about AI. I, for one, am happy with my experience machine reading this. Thank you, sir.
This one leaves me feeling more optimistic than I expected. AIs as a supplement to human interaction could work. After all people have a pretty good track record of adopting new technology as an augmentation rather than replacement for things that help us flourish. Films did not kill off live theater. Television didn't kill off films. People who love and appreciate horses can still go ride one, however many cars we have. You can still buy cheese, even though "pasteurized cheese food product" is widely sold.
Maybe the thing to worry about isn't wholesale abandonment of real life but rather real relationships being turned into a privilege. I'd hate to think that having real friends in the 22nd century would be like eating farm-to-table tomatoes or having access to top-notch gym equipment. Or that a candidate for office might say, "I'm not a fancy-pants guy who hosts a regular poker night -- plain old AI friends are good enough for you, they're good enough for me." (I know, assuming there will still be elections and candidates is a whole other can of worms; just riffing here.)