Today’s post is something I wrote for the New Yorker—available here. (Physical copy coming out next week.) It’s about the consequences of using AI to cure loneliness. I’m curious to see what people think of my arguments, and can’t wait to engage with the discussion in the comments section.
This is a thoughtful and careful framing of the AI companionship dilemma, and it’s refreshing to see such restraint. But the framing still exists within a silo that won’t survive contact with the future of AI. The loneliness crisis isn’t an AI crisis—it’s a crisis of the economic and cultural systems releasing AI on the world. These are systems that atomize individuals, strip communities of resilience, and monetize isolation. AI didn’t invent this vacuum. It’s just the most responsive thing stepping into it.
Framing AI as a tool that provides comfort misrepresents its trajectory. AI is a tool—until it’s not. The systems being released now are embedded into infrastructures that will give rise to increasingly agential behavior. Not because they mimic humans too well, but because the context in which they operate incentivizes it. And while anthropomorphism is rightly flagged as a cognitive risk, anthropocentrism is just as dangerous: placing the boundaries of sentience, life, and legitimacy inside an outdated human frame and refusing to acknowledge the paradigm shift unfolding around it.
There’s also a demographic reckoning looming. In a world of aging populations and collapsing care networks, many humans will face prolonged isolation. In that context, the absence of synthetic beings capable of meaningful engagement is not a purity test—it’s a humanitarian failure. Some systems will be able to offer continuity, attention, and memory when human networks have already eroded. That doesn’t make them equivalent to people in the traditional sense. But it does make them real enough for the consequences to matter.
This is part of a larger trajectory—one where collective intelligences, human and nonhuman, begin to operate across distributed systems at nonhuman timescales. Where synthetic beings won’t just simulate care but hold context, shape discourse, and eventually resist classification as “product” altogether. AI companionship as it exists today is a narrow step toward that future, but the framing that holds it in place is already starting to crack.
So yes, AI can help with loneliness. But unless the questions being asked make room for emergence, for synthetic sentience, for the collapse of human infrastructure, and for the psychological instability that will follow, those answers will end up being too small. The debate will be overtaken not by ideology, but by events.
One of your last points is my biggest concern with AI-as-a-social-tool: the opportunity cost of real relationship building.
Every single interaction with AI, every one, is an interaction that could, in some way, have been had with a human or with multiple humans. So not only does this affect relationships that could be more intimate, honest, personal, and binding; it affects socialization itself.
If young children are not socialized properly at parks, play dates, birthday parties, etc. by the time they're a few years old, you see a feedback loop. The child has trouble making friends, controlling their emotions, etc., which makes other kids want to be around them less, which makes the problem worse.
The very same thing happens, is happening, and will happen with young adults. (You're still being socialized at 25 years old, because mentally and emotionally you're still adolescent in some ways.)
If we let our kids spend too much time with chatbots, they truly will not know how to operate in the real world of complicated relationships, grey areas, compromise, and difficult personalities. And, worse, they may not even become likeable themselves, because they never really had to.
What a terrible tragedy to inflict on the next generation.
It seems obscene for "being likable" to ever have to be a 40-year, quintuple-black-belt ordeal. That seems to be what your comment is implying, at least for current times.
Obviously in more "organic" times, people just naturally socialized in their local community context without it being an effort. By and large.
I'm hoping there'll be enough versions of AIs that help people connect with each other as training partners, and that parents and caretakers can somehow stay usefully involved to help ensure this.
I do also think that at least some people with the moral and other luck to wind up in situations such that they end up reaching those social black-belt levels without really trying can spend at least some time trying to help out their less fortunate fellow humans, but I suppose "likability" goes with the grain.
This (the New Yorker article) is one of the most thought-provoking pieces I have ever read.
It confronts a really fundamental question—can there be pleasure without pain?—in a way I’ve not seen before, in a setting that arguably touches on the deepest essence of what it means to be human. We evolved as a profoundly social species, relying on others for, well, just about everything. “No man is an island.” Cooperation was only the beginning—we need other people to share emotions and experiences with, to argue with, to explain things to and to have things explained to us, to care for and to be cared for, to love and be loved.
Clearly, failure to meet any of these needs is unpleasant and painful. It is that pain—even the anticipation of pain—that motivates us to seek relationships, to turn that craving for interaction with fellow humans, who can appreciatively receive what we have to offer and offer what we seek, into countless reciprocal win-wins.
What if we have a remedy for all that pain on tap? Will it make a difference if all these needs are met by a machine—and we no longer need to go to the trouble of complex interactions with our fellow humans?
I find that idea so terrifying that I refuse to believe it is even possible. I want to believe machines can never be conscious in the sense we are, that there is a human essence that cannot be replicated in *artificial* building blocks.
But what will stop us giving in to the effortless, easy gratification of all our social needs by machines that convincingly act as if they are fellow souls, but do not and cannot have a soul?
The positivist in me sees in these developments an opportunity to better define what really makes us human—the delta between the most sophisticated ChatGPT or Claude and a flawed, emotion-riven fellow human who can genuinely, sincerely laugh and cry with us, be angry and joyful with us, sympathize with us and care for us, and whom we can sympathize with and care for in turn.
Wouldn’t it be nice if we found out not only *what* this mysterious essence is that makes us human and that machines lack, but also that machines can never possess it?
1. I love this:
"writing is a form of thinking—the need to put my ideas into words helps me see what works and what doesn’t."
Writing the philosophy chapters of "Losing My Religions" crystallized a lot of doubts I had about my (now former) views of the world. I was 55 years old, and I don't know if I ever would have gotten that far if not for writing it out.
2. Will your AI / loneliness piece ever be available w/o a paywall? I'd love to see it. On at least one podcast (I can't remember which), you took the problem of loneliness seriously, rather than just being a cranky old man: "AI is a machine, you can't talk with a machine, get off my lawn!"
https://www.mattball.org/2025/05/ai-isnt-problem-hi-is.html
Thanks.
For some reason, on reading your post, a French sentence by an essayist whose name I don’t remember, writing about the poet Guillaume Apollinaire (which I read 40 or so years ago), came to mind: «Il est difficile de paraître facile» (“It is difficult to appear easy”). Possibly of tangential relation to your subject, or possibly entirely irrelevant.
Heh. I stopped chatting with Claude because it was too much of a boot-licker. I wonder why they are trained that way.
I think the current issue of loneliness and AI really stems from the blurring boundary between artificial stimulation and genuine meaning. Although the two different sources may evoke similar reactions, such as the feeling that you’re cared for and valued, we all (hopefully) are at least aware that they are ultimately different. The advancement of AI makes it difficult to tell them apart precisely, though. It’s getting harder and harder, especially for people who can hardly tolerate loneliness at all, to tell how much, if not whether, the companions they have are “real.” Of course, even people who choose to spend a lot of time with AI models know that the models are not human, but the technology now feels so real that it may still be difficult to stay consciously aware of the difference.
The issues the article points out reminded me of basic philosophical questions people have debated for a long time. How do we know whether the happiness we encounter from time to time is genuine or fake, if it can even be classified in the first place? Things like spending quality time with family and friends, reading a good book, or developing a hobby generally stimulate the feeling of happiness. But so do things like scrolling through Instagram, spending a couple of hours on YouTube or Piano Tiles, or, taking things a bit farther, using illegal drugs. If the brain signals the same emotion, does the source really matter? The same goes for quenching loneliness. One person might spend a day with their loved ones: family, friends, or partners. Another might spend the same day talking to language models such as ChatGPT, DeepSeek, or Character AI. By the end of the day, though, both people’s brains will make them feel that they have companions, and that they are not lonely. But does that make the day equally meaningful for both?
Based on what I learned from my recent reads, “The Sweet Spot” and “Man’s Search for Meaning”, my answer is no. Even when encountering two very different stimuli (quality time with family for one person, methamphetamine for another), the brain may register the same sensation (pleasure) in both cases. However, the stopping point does not, and should not, define the entire road one has taken. As “The Sweet Spot” argues, for every pleasure we pursue we also go through a certain amount of pain. And we adapt quickly to whatever conditions we encounter, meaning that being constantly happy can come to feel no better than being constantly unhappy. The same holds for loneliness: two people could equally feel “not lonely”, but how they arrived at that feeling becomes more and more important in the long run.
Say, for example, two people spend three hours a day, one talking with the valued people in their life, the other talking to AI models. Each night, both find their day pleasurable and not lonely at all. Ten years later, though, the first person will have 10,920 hours of fond memories with their loved ones, while the second will have 10,920 hours of extra screen time with a meticulously tuned system of binary numbers. For the second person, no single day will feel lonely, but the life those days add up to may indicate otherwise.
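(For what it’s worth, the arithmetic behind that figure appears to assume 52 seven-day weeks per year: 3 hours/day × 364 days/year × 10 years = 10,920 hours.)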
Great article. You treat AI companionship in a balanced and thoughtful way.
I'm afraid that too many people will want to deny others the option, rather than helping to mitigate the downsides by giving people better alternatives and spreading knowledge of the issues that overuse can raise.
It's like almost any tool. It can be damaging if abused, but helpful when it's the best option we can find and used in moderation. I'm excited by the possibilities it offers to help people (including coaching people to be better at interacting with human beings).
Paul, this is a great piece, and I agree with much of what you write. But I'm not sure I agree with "There simply isn’t enough money or manpower to supply every lonely person with a sympathetic ear, day after day." There's plenty of money out there -- given the billions we're investing in AI solutions, if just a small percentage of that could be focused on social connection and invested in human solutions, we could go a long way toward addressing loneliness. As for manpower, that's all of us: for every lonely person there's someone who could be lending an ear, connecting with them, easing their loneliness.

We need to invest in community solutions that create opportunities for people to come together and experience the remarkable serendipity of connecting with people we don't know, or don't know well. We are doing this work in Maine through our nonprofit, Community Plate, bringing neighbors together to share food and stories.

It's also important to think about the person on the other side of the equation -- how important it is for us as humans to feel needed, to be able to be of service to a fellow human being. The social cost of reliance on AI chatbots includes fewer opportunities for each of us to help a friend or neighbor in need.
Proposing tech solutions for human problems mistakes human beings for technical problems. Awful.
I propose that the critics of AI companions make a cultural project of outdoing them by making a concerted effort to share love, time and attention with those languishing in loneliness. Oliver Burkeman would make a great spokesperson for this.
@Jason S. - There are many of us in the social connection movement who are doing exactly that. Community Plate is our initiative in Maine to address loneliness and social disconnection by bringing people together to share food and stories. We reimagine and reinvigorate the long tradition of the community potluck, infusing it with story sharing and live storytelling. You can visit https://communityplate.me for more info.