Today’s post is something I wrote for the New Yorker—available here. (Physical copy coming out next week.) It’s about the consequences of using AI to cure loneliness. I’m curious to see what people think of my arguments, and can’t wait to engage with the discussion in the comments section.
This was a tough one to write. In my pitch, I outlined the structure of the article I hoped to send in. It took a miserable few weeks before I realized that what I planned wasn’t any good. Other ideas emerged that seemed more promising, and so the article ended up going in an unexpected direction.
This isn’t the first time this sort of thing has happened. At least for me, writing is a form of thinking—the need to put my ideas into words helps me see what works and what doesn’t. A lot of things sound good in my head—and when I tell them to a friend or pitch them to an editor—and then I write them down, look at what I’ve written, and conclude, Well, that’s an awful lot of bullshit. Then I rethink and rewrite and rethink and rewrite. Writing keeps me honest and makes me smarter.
This is one of the million things that's lost when AI does your work for you. I could have put my proposal into ChatGPT, given it an extended prompt, and fed in a dozen scientific papers, and it would have spat out a first draft for me to get started on. But then I would never have confronted the fact that these ideas were flawed from the very start.[1]
In a recent piece called Credit the Editors, I complained that magazines have a culture where everything gets attributed to the author, while the work that others do goes uncredited. My Substack has a different culture. I thank Henry Finder for his wise and meticulous editorial work, and Azim Shariff, Christina Starmans, and particularly Mickey Inzlicht for ongoing discussions of these ideas.
[1] In Six Ways I Use AI for Writing, I discuss other ways I use AI to help my writing, including certain cases where I do use it to write actual drafts.
I sometimes have to fill out administrative forms that nobody will read, where they ask for summaries of courses I've already taught or descriptions of job candidates who have already been hired. I might throw a syllabus or CV at Claude, ask for something that's the right number of words, give it a quick look, and send it in. As Abraham Maslow once put it, "What's not worth doing isn't worth doing well."
One of your last points is my biggest concern with AI-as-a-social-tool: the opportunity cost of real relationship building.
Every single interaction with AI, every one, is an interaction that could, in some way, have been had with a human or with multiple humans. So not only does this affect relationships that could be more intimate and honest and personal and binding; it affects socialization itself.
If young children are not socialized properly at parks, play dates, birthday parties, etc. by the time they're a few years old, you see a feedback loop: the child has trouble making friends, controlling their emotions, and so on, which makes other kids want to be around them less, which makes the problem worse.
The very same thing happens, is happening, and will happen with young adults. (You're still being socialized at 25 years old, because mentally and emotionally you're still adolescent in some ways.)
If we let our kids spend too much time with chatbots, they truly will not know how to operate in the real world of complicated relationships, grey areas, compromise, and difficult personalities. And, worse, they may not even become likeable themselves, because they never really had to.
What a terrible tragedy to inflict on the next generation.
This is a thoughtful and careful framing of the AI companionship dilemma, and it’s refreshing to see such restraint. But the framing still exists within a silo that won’t survive contact with the future of AI. The loneliness crisis isn’t an AI crisis—it’s a crisis of the economic and cultural systems releasing AI on the world. These are systems that atomize individuals, strip communities of resilience, and monetize isolation. AI didn’t invent this vacuum. It’s just the most responsive thing stepping into it.
Framing AI as a tool that provides comfort misrepresents its trajectory. AI is a tool—until it’s not. The systems being released now are embedded into infrastructures that will give rise to increasingly agential behavior. Not because they mimic humans too well, but because the context in which they operate incentivizes it. And while anthropomorphism is rightly flagged as a cognitive risk, anthropocentrism is just as dangerous: placing the boundaries of sentience, life, and legitimacy inside an outdated human frame and refusing to acknowledge the paradigm shift unfolding around it.
There’s also a demographic reckoning looming. In a world of aging populations and collapsing care networks, many humans will face prolonged isolation. In that context, the absence of synthetic beings capable of meaningful engagement is not a purity test—it’s a humanitarian failure. Some systems will be able to offer continuity, attention, and memory when human networks have already eroded. That doesn’t make them equivalent to people in the traditional sense. But it does make them real enough for the consequences to matter.
This is part of a larger trajectory—one where collective intelligences, human and nonhuman, begin to operate across distributed systems at nonhuman timescales. Where synthetic beings won’t just simulate care but hold context, shape discourse, and eventually resist classification as “product” altogether. AI companionship as it exists today is a narrow step toward that future, but the framing that holds it in place is already starting to crack.
So yes, AI can help with loneliness. But unless the questions being asked make room for emergence, for synthetic sentience, for the collapse of human infrastructure, and for the psychological instability that will follow, those answers will end up being too small. The debate will be overtaken not by ideology, but by events.