Progressives should worry more about their favorite scientific findings
and pay more attention to Ginger Rogers science.
You may have heard of the study published in PNAS in 2020 concluding that Black newborns have higher survival rates when Black doctors attend to them. It got a huge amount of coverage in the popular press. It was even cited by Supreme Court Justice Ketanji Brown Jackson in her dissent last year on the court’s ruling against racial preferences in college admissions. The newborn research, Jackson claimed, shows the benefits of diversity. “It saves lives,” she wrote.
The same journal just published a reanalysis of the data. It turns out that the effect disappears once you take into account that Black doctors are less likely to see the higher-risk population of newborns that have low birth weight.
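(For readers who want to see the mechanics, here is a minimal sketch of how that kind of confounding works. The numbers are invented purely for illustration and have nothing to do with the actual data; the point is only that when one group of doctors sees a riskier caseload, a crude comparison can show a large gap even when outcomes are identical within each risk level.)

```python
# Toy illustration of confounding. All numbers are made up, not from the PNAS study.
# Group A doctors see mostly low-risk newborns; Group B doctors see mostly high-risk ones.
# Within each risk stratum the two groups have identical mortality rates, yet the
# crude (unstratified) comparison makes Group B look much worse.

def crude_rate(counts):
    """counts: dict of stratum -> (n_patients, n_deaths); returns overall death rate."""
    patients = sum(p for p, _ in counts.values())
    deaths = sum(d for _, d in counts.values())
    return deaths / patients

# (patients, deaths) per risk stratum; death rates are 1% for low risk, 10% for high risk
group_a = {"low_risk": (900, 9), "high_risk": (100, 10)}   # mostly low-risk caseload
group_b = {"low_risk": (100, 1), "high_risk": (900, 90)}   # mostly high-risk caseload

print(f"Crude rate, group A: {crude_rate(group_a):.1%}")   # ~1.9%
print(f"Crude rate, group B: {crude_rate(group_b):.1%}")   # ~9.1%, looks far worse

# Stratified comparison: identical outcomes within each risk level
for stratum in ("low_risk", "high_risk"):
    ra = group_a[stratum][1] / group_a[stratum][0]
    rb = group_b[stratum][1] / group_b[stratum][0]
    print(f"{stratum}: group A {ra:.1%} vs group B {rb:.1%}")
```

Compare the groups within each stratum and the apparent difference vanishes, which is the same logical move the reanalysis makes when it adjusts for birth weight.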
I wasn’t surprised when I saw the reanalysis because I didn’t believe the original finding. It’s not like I thought: “Oh, that has to be bullshit!” It was more: “Meh—I reserve judgment.”
I have a similar attitude toward other politically relevant findings I see1, including reports that:
minority children do worse in school because of the implicit biases of their teachers
microaggressions have negative effects
trigger warnings have positive effects
conservatives are stupider, more biased, more fearful, or in some other way psychologically inferior to liberals.
Meh. Meh. Meh. Meh. Maybe the findings are true, but I wouldn’t bet on them.
To see the problem, compare these findings to those without political import. Suppose I read a paper in Nature Human Behaviour claiming that
There is a relationship between the presence of specific bacterial populations in someone’s gut and their performance on memory and attention tasks
Now, one should always be skeptical about any single scientific finding. Due to the nature of statistical inference, experiments sometimes yield positive findings when there are no real effects. There is outright fraud (rare, but it happens), poor experimental design, selective reporting of results, misuse of statistics, and so on—all sorts of ways that scientists, eager to get published, intentionally or unintentionally inflate their results.
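To make that first point concrete, here is a small simulation with parameters I have picked arbitrarily for illustration: thousands of studies of an effect that is truly zero, where only the “statistically significant” ones survive. Roughly five percent clear the bar by chance, and if those are the ones that get written up, the resulting literature is built on noise.

```python
# Toy simulation (arbitrary parameters, illustration only): many labs test an effect
# that is truly zero, and only "statistically significant" results get written up.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

def one_study(n=30):
    """Two-sample t statistic for two groups drawn from the SAME distribution (true effect = 0)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
    return (mean(a) - mean(b)) / se

studies = [one_study() for _ in range(10_000)]
published = [t for t in studies if abs(t) > 2.0]  # crude |t| > 2 cutoff, roughly p < .05

print(f"Ran {len(studies):,} studies of a true null effect.")
print(f"{len(published)} came out 'statistically significant' anyway.")
# If only the 'significant' ones are submitted and published, every paper in this
# toy literature is a false positive.
```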
Still, Nature Human Behaviour is a good journal. To get published there, you must go through a rigorous review process with expert scholars who critically examine the methods and findings. So, yes, this publication would raise my confidence that bacteria in the gut do influence memory and attention.
Now, imagine that I read a paper in the same journal claiming that
Racially diverse teams do better at solving scientific problems
All the same concerns about fraud, poor statistics, and so on apply. But now there’s something else. This sort of finding fits the ideology of most people who review papers for Nature Human Behaviour. It’s the sort of finding that improves the journal's prestige. It’s a result that ends up reported in the New York Times and The Guardian; it will get cited in briefs to the Supreme Court that support progressive policies.
These are all additional reasons, above and beyond the paper’s scientific quality—above and beyond the possibility that the finding is true—that make it more likely to be published. So, while you shouldn’t dismiss the finding entirely, you should take it less seriously.
Conversely, if Nature Human Behaviour published a paper reporting that racially diverse teams do worse at solving scientific problems, you should think: Wow, this is a conclusion that the reviewers and editors don’t like, so the evidence for it must be strong, the researchers must have carefully ruled out other explanations, and so on. One should take it more seriously.
It’s like what someone once said about Ginger Rogers and Fred Astaire: They’re both going through all the same moves, but Ginger Rogers is doing them backward and in high heels. A published finding that clashes with the political prejudices of reviewers and editors is a Ginger Rogers Finding. It had to be twice as good.
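If you want that logic in numbers, here is a back-of-the-envelope Bayesian sketch. Every probability below is made up; the only thing doing the work is the asymmetry in how easily true versus false findings survive review when the conclusion is congenial versus uncongenial.

```python
# Back-of-the-envelope Bayes (all probabilities invented for illustration).
# H = "the reported effect is real". The only thing we observe is that the paper was published.

def posterior(prior, p_pub_if_true, p_pub_if_false):
    """P(H | published) via Bayes' rule."""
    numerator = prior * p_pub_if_true
    return numerator / (numerator + (1 - prior) * p_pub_if_false)

prior = 0.5  # before reading the journal, treat it as a coin flip

# Congenial finding: reviewers wave it through, so even shaky results often get published.
print(posterior(prior, p_pub_if_true=0.5, p_pub_if_false=0.2))    # ~0.71

# Uncongenial ("Ginger Rogers") finding: it only survives review if the evidence is strong.
print(posterior(prior, p_pub_if_true=0.2, p_pub_if_false=0.02))   # ~0.91
```

With these toy numbers, a published congenial finding moves you from a coin flip to about 71% confidence, while a published Ginger Rogers finding moves you to about 91%. That is the whole argument in a few lines of arithmetic.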
It shouldn’t be controversial that the source matters when judging whether something is true. In a recent debate, Donald Trump said that crime went way up in the United States during the Biden/Harris term. Suppose you have no idea about the facts. Still, you should be skeptical because Trump has a motivation for saying it other than the facts—his political interest. Suppose that Harris said the same thing—”Although our administration has much to be proud of, I admit that there has been a sharp rise in crime.” Now, you should be less skeptical because it’s against her interests. If Jane’s father tells you that Jane is smarter than Donny, you should be more skeptical than if Donny’s father tells you that Jane is smarter than Donny.
As another example, I’m a big fan of FIRE—The Foundation for Individual Rights and Expression. I respect their work in defending the free speech rights of academics. But suppose they started their own journal and published articles suggesting that censorship of professors’ views has all sorts of bad effects. You should think: Meh. That’s just what a free speech organization would want to publish.
People get this. In an unpublished study by Cory Clark and colleagues, people were asked for their perceptions of the left/right political slant of different organizations and professions, such as journalists, scientists, the Supreme Court, and psychologists. The organizations and professions perceived as most slanted were judged least trustworthy and least worthy of deference. This held even when participants were sympathetic toward the slant; that is:
even left-leaning participants were less trusting and less willing to support and defer to left-leaning institutions that appeared more politicized, and even right-leaning participants were less trusting and less willing to support and defer to right-leaning institutions that appeared more politicized.
I don’t want to overstate the extent of bias in journals. The vast majority of published findings have nothing to do with politics, and so the considerations that we’ve been talking about don’t apply. Also, I can think of many journal articles with findings that oppose a progressive worldview—Ginger Rogers findings do get out there. After all, the same journal (Proceedings of the National Academy of Sciences—PNAS) that published the original newborn article also published the reanalysis of the data that sparked this Substack.
Then again, sometimes an unpopular finding makes it through the system, and the system strikes back. A few years ago, a Nature Communications paper found that female scientists benefit more career-wise from collaborating with male mentors. This was not a message people wanted to see; there was outrage on social media, and the authors were pressured into retracting the paper. (See here for details.)
This is just an anecdote, though, and it would be fair to dismiss it as an exceptional case. A better reason to believe there’s a political bias in what gets published is that scientists, reviewers, and journals explicitly say there is.
A 2012 study of about 800 social psychologists found that conservatives (about 6% of the sample) fear the negative consequences of revealing their political beliefs to their colleagues. They are right to do so. The same study found that many of their non-conservative colleagues, particularly the more liberal ones, tended to agree that if they encountered a grant or paper with “a politically conservative perspective”, it would negatively influence their decision to award the grant or accept the paper for publication.
A 2024 study found that most professors—including those on the left, though it was more common on the right—self-censor their publications about controversial matters in psychology because they are concerned about negative social and career consequences. While they don’t have much to worry about from most of their colleagues (most academics “viewed harm concerns as illegitimate reasons to retract papers or fire scholars [and] had great contempt for peers who petition to retract papers on moral grounds”), they aren’t being paranoid. There is a minority of professors who believe that a proper response to certain views is …
ostracism, public labeling with pejorative terms, talk disinvitations, refusing to publish work regardless of its merits, not hiring or promoting even if typical standards are met, terminations, social-media shaming, and removal from leadership positions.
Finally, certain major journals explicitly state that policy implications partially determine what gets published. This is how editors and reviewers are told to do their jobs. Consider the new guidelines from Nature Communications, developed in the wake of the female-mentor article, which instruct editors and reviewers to weigh a paper’s “potential harm.”
And here is a more recent editorial from Nature Human Behaviour outlining their new procedures for reviewers and editors. As they put it,
Science has for too long been complicit in perpetuating structural inequalities and discrimination in society. With this guidance, we take a step towards countering this.
I know this is a controversial issue, and I suspect that some of you will side with the journals. You might believe, say, that a paper concluding that racially diverse groups are less efficient at solving scientific problems or one concluding that female students do better with male mentors should be harder to publish because these findings, even if they are true, will cause “potential harm” (Nature Communications) or will make the journal “complicit in perpetuating structural inequalities and discrimination in society” (Nature Human Behaviour).
My point here isn’t to argue the issue. I’m saying that if this is your view, congratulations—your side has won. Journals have the ideological slant you want them to have. This may have all sorts of benefits, but one of the costs is that you can’t trust the more progressive-friendly findings these journals report.
Why did I title this article “Progressives should worry more about their favorite scientific findings”? Why is this a progressive problem?
First, people tend to believe findings that fit their worldview. Just as parents are too credulous when someone tells them how smart their child is, progressives are inclined to take progressive-friendly findings too seriously.
Second, people tend to notice when systems, organizations, and people are biased against them and ignore biases in their favor. Many progressives don’t know about the editorial positions of journals like Nature Human Behaviour, so they assume these journals just tell it like it is. Non-progressives are more aware of the partisan bias of the individuals who publish in and review for these journals and tend to be suspicious (perhaps too suspicious) of politically relevant findings from a community they believe doesn’t like them or their politics.
Everyone wants to get their facts right. How should progressives solve the problem of less trustworthy progressive-friendly results?
One solution is to reform journals so there isn’t political bias in what gets accepted and rejected. This would have the advantage that everyone—on the left and the right—would trust the journals more (see again here).
But I’m not sure this is possible. As I mentioned before, many of the people who run the journals disagree with this solution. Also, regardless of what the journals decide, much of the decision-making power is in the hands of individual editors and reviewers, and, as we see in the surveys described above, many of them think that articles going against their political position shouldn’t be published.
A humbler solution is to become more educated about how personal and institutional biases shape the credibility of certain claims. Just as any intelligent observer should be more skeptical when politicians say terrible things about their enemies and when parents say wonderful things about their children, consumers of scientific information should be more skeptical when journals produce findings that are in lockstep with the political views of their editors, reviewers, and readers.
I like the ending above! I wrote it when I was in a cheerful mood. It’s optimistic, not just because it proposes a solution but also because it assumes that we all share the goal of getting things right.
I’m in a less cheerful mood now, so here’s an alternative ending.
Some people do care about how scientific findings bear on issues of political and social relevance. It matters to them whether implicit stereotypes lead to discrimination against members of certain groups, whether diverse organizations are more or less efficient, whether there is racial bias in police shootings, whether trigger warnings help or hurt, and so on. Some of these people, including scientists in the field, are interested in such findings for what they tell us about the mind. Others see them as relevant to dealing with certain real-world problems. Someone concerned about racism in the workplace, for instance, might be genuinely interested in whether diversity training works and so be interested in the many studies that ask exactly this question.
But many people have a different attitude. They are interested in social science research published in journals only insofar as it supports their positions, persuades others, or can be used to dunk on their foes.
This is understandable. In most of life, it’s important to get the facts right, but when it comes to the political/moral domain, one might have other priorities. Here’s a story I tell in Psych.
I was at a dinner once when Donald Trump was president, and we were all complaining bitterly about him. Someone mentioned the latest ridiculous thing he did, and we were all laughing, and then a young man, no fan of Trump, politely pointed out that this event didn’t really happen the way we thought it did. It was a misreporting by a partisan source; Trump was blameless. People pushed back, but the man knew his stuff, and gradually most of the room became convinced. There was an awkward silence, and then someone said, “Well, it’s just the sort of thing that Trump would do,” and we all nodded, and the conversation moved on.
Was the young man’s contribution a rational one? It depends on his goals. What was he most hoping to accomplish—to know and speak the truth, or to be liked? If your goal is truth, then … being biased to defend the positions of your group because of loyalty and affiliation, is plainly irrational. Truth-seeking individuals should ignore political affiliation when learning about the world. When forming opinions on gun control, evolution, vaccination rates, and so on, they should seek out the most accurate sources possible. …
But we are social animals. While one of the goals that our brains have evolved to strive for is truth—to see things as they are, to remember them as they really happened, to make the most reasonable inferences based on the limited information we have—it’s not the only one. We also want to be liked and accepted, and one way to do this is by sharing others’ prejudices and animosities.
If your goal is getting things right, then the advice to be skeptical of findings that flatter your views and to pay special attention to Ginger Rogers results is just the thing.
But what if your goal is to feel good about yourself? To persuade people to join your cause? To mock your ideological opponents? To make yourself popular within a group of like-minded individuals? Being skeptical about findings that support your view is great if you want to pursue the truth, but it is an awful strategy if you want to satisfy these other goals.
For whatever it’s worth, I believe that pursuing these other goals at the expense of truth, though understandable for individuals, makes the world worse. We should be primarily interested in accuracy, if only because we’re more likely to solve the many problems that plague us if we get our facts right. And so I wish the incentives for truth-seeking were higher. I hope for a culture where we cheer on those who work hard to get things right—like the young man in my story—even if the truth clashes with the narratives we’re most fond of.
But maybe this is a naive hope.2
I have particular studies in mind for these and other findings I’ll discuss below, but I won’t cite them. My point is about a general pattern, and so it would be unfair to target specific papers.
Thanks to Yoel Inbar, Michael Inzlicht, Azim Shariff, and Christina Starmans for comments on an earlier draft.
This is similar to Biblical scholars’ “Criterion of Embarrassment.” If an account is embarrassing to its authors’ worldview, it’s regarded as more likely to be true.
https://en.m.wikipedia.org/wiki/Criterion_of_embarrassment
Another suggestion: we should require newly minted PhDs to take an oath, as doctors do, one that recognizes the special position of trust they hold as experts. It would commit them, when speaking as experts, to convey the full state of the evidence rather than cherry-pick positions for personal comfort or partisan benefit; to publish in ways that add to our overall understanding, never p-hacking or hiding unwanted outcomes; and to speak up to correct the record in their area of expertise even when they fear how people will react.
Part of the benefit would simply be getting academics to think more seriously about the impact of their work, but the most important aspect is that the oath creates an excuse for speaking up.
If you're an epidemiologist considering standing up to say that BLM rallies are dangerous because of the potential for COVID spread, you will reasonably worry that people will infer you spoke up because you are against BLM. An oath like this gives you another reason you can point to.