Is it irresponsible for academics to refuse to use AI?
Maybe not yet—but we’re getting there
On Facebook, the philosopher N. Ángel Pinillos (Substack) commented on the new AI policies of the journal Ethics.
I thought that the goal of scholarship is to produce the highest quality academic content. However, many of my colleagues who I respect seem to think that the goal is something more complicated, they endorse things like that it is preferable that the content be fully produced by humans without help from AI. (Ethics seems to have this policy). I’m trying to understand this perspective. Consider a medical journal that does not publish a paper because it was made by AI, although the paper contains the cure for a deadly disease and would be published if it was made by a human. I assume everyone would think that the academic journal has made a grave error. So what’s the difference between this case and philosophy (or related fields)?
Ángel considers an extreme version in which AI writes the paper itself. I want to change the case a bit. Suppose a human remains the author, but AI helps write the paper—it’s involved in brainstorming ideas, clarifying arguments, identifying the relevant literature, anticipating objections, tightening prose, and so on.
Is there anything wrong with this?
If you think using AI will make the paper worse, then it’s obviously wrong to use it, just as it would be wrong to intentionally use a malfunctioning statistical program. But suppose that AI will make a paper better. (What’s “better”? For present purposes, imagine a blind assessment: experts compare a version created alone versus one created with AI; whichever is judged better is better.) Whether or not we’re already there, it’s easy to imagine a future where this is true.
There’s nothing wrong with using AI in these circumstances. In fact, if an author could improve a paper by using AI, it would be irresponsible not to do so.1 What would you think of someone who decided to use the second-best statistical test, who purposefully did a shoddy literature review, or who had the chance to get sharp and useful comments on the paper’s arguments and chose not to? That’s just bad scholarship.
I can think of just one good argument against this conclusion. Producing valuable scholarly work is a good thing, but it’s not the only good thing. Suppose I could do better experiments if I violated ethical rules and harmed my subjects. Still, I shouldn’t do this—the costs outweigh the benefits. Similarly, someone who believes that AI depends on cruel labor practices, has terrible environmental costs, or degrades our cognitive powers, probably shouldn’t use it—unless their work is very important, as in Ángel’s example of curing a deadly disease. Even if you think there are more minor costs, ethical or otherwise, you should take these into account, just as you would factor in the costs of using any other sort of assistance.
The responses to Ángel’s Facebook post, many of them by philosophers, went in a different direction. They didn’t object to his claim that we should use AI if it would cure deadly diseases. But some of them said that philosophy isn’t like that at all. Some likened philosophy to a conversation; others described it as a game. In either case, using AI ruins the endeavor. As one respondent put it,
I would still never make use of [AI] myself in my own papers, because I might as well just read it and leave it at that (for the same reason I wouldn’t look on the internet for the answer to today’s Wordle).
I don’t know what to make of this. Sure, it’s fun to figure things out on your own. But there are certain pleasures that serious scholars have to give up.
After all, nobody would respect a cancer researcher whose work was shoddy because she chose not to take steps to improve it. We think of cancer research as having objective value—it’s important to do it well. Well, I think this is true for other forms of scholarship, including the sort of psychological research that my colleagues and I do, and including philosophical pursuits concerning ethics, metaphysics, epistemology, and the like. Some of this work is better than others—the arguments are clearer, the examples are better, the ideas are richer, the theories are more responsive to the data, and so on—and since there is value to doing good work, there is an obligation, all other things being equal, for scientists and scholars to use the best tools available.
Does this obligation extend to philosophers who see their careers as Wordle marathons? I guess not—as the expression goes, what’s not worth doing is not worth doing well. I certainly don’t think of philosophy in general in these terms, but maybe there are some careers, maybe even some subfields, where it’s really all a game. Such players can forgo AI—and anything else that makes the game less enjoyable.
Now, I’m framing this in terms of extremes—you’re either curing cancer or playing word games—but, more realistically, even those of us who take our work seriously often let other considerations shape the sort of papers we publish. People rush out papers that aren’t quite ready because they’re on the job market or up for tenure. They cut corners to save time or money. And, like the philosopher quoted above, sometimes they’d take the time to work something out themselves rather than use a tool that quickly does it for them.
These decisions are understandable—they are venial sins at worst. But they really are sins. It’s one thing to refuse to use AI because you don’t think it helps or because you think using it is morally wrong. (I disagree with both of these claims, but if you believe them, then your decision makes sense.) But to refuse to use it because it feels yucky, or it’s not fun, or it’s not authentic—those are bad reasons. Someone who offers these reasons is saying, essentially, that the quality of their work doesn’t matter that much to them. This is not an attitude to be proud of.
Betteridge’s law of headlines states that any headline that ends in a question mark will be answered with a “no”. This post violates the law. Is it irresponsible for academics to refuse to use AI? If they think AI improves the work, and there are no substantial costs, then, if they take their jobs seriously, the answer is yes.2
Two qualifications: First, I’m only talking here about scholarly and scientific publications. Obviously, students shouldn’t use AI to write their papers because the point of their papers isn’t to produce the best work; it’s to assess students’ ability. Novels, poetry, love letters, hate mail, apologies, and most Substack posts are intermediate cases in which I can see a strong case for refusing to use AI in a significant way—though I’m keeping an open mind. Second, the human author should acknowledge the help of the AI—it’s dishonest to take credit for work that’s not your own.
Thanks to Christina Starmans and ChatGPT for comments on an earlier version of this piece. I took all of Christina’s advice. I took some of ChatGPT’s advice, such as removing a joke it deemed in bad taste, but I ignored most of it. Perhaps in the future, as AI improves, refusing to do what it says will be a form of academic irresponsibility. But it’s not there yet.

