Discussion about this post

Matt Ball:

> much of human empathy is also Soylent quality, if not worse.

Yup. Nearly everyone is just waiting for their chance to talk. And we all have our deep cognitive biases.

I read a critique of Peter Singer recently that said, in part, "I disagree with his hedonism. Only my preferred way of being happy is legit!" I get that sense from a lot of people in all sorts of areas.

C. Connor Syrewicz:

> “I think there is value in interacting with individuals with agency and empathy, even if the recipient can’t distinguish them from entities with no agency and no empathy. Genuine experiences have intrinsic value; reality is better than an illusion.”

I need to engage with these ideas more, so maybe I'm speaking in ignorance, but this seems like a genuinely concerning position for any psychologist to take. Who's to say which experiences are "genuine" and which aren't? It sounds like a no-true-Scotsman fallacy to me. Aren't all experiences, regardless of their antecedents, "genuine"?

Now it may be that the effects of AI-generated empathy are "better" or "worse" than those of human-generated empathy along some specifiable dimensions: maybe spending a lot of time with AI-generated empathy makes some people less tolerant of human failures of empathy, which hurts their relationships with real people. It's fine to speculate about this and to try to test these kinds of ideas; AI-generated empathy will probably have upsides and downsides compared to human-generated empathy, and those comparative upsides and downsides should be investigated. But describing some experiences as "genuine" and others as "not genuine" is a sweeping value judgment, one that cannot possibly be supported by evidence given how new AI-generated empathy actually is.
