A lot of developmental psychology isn't worth doing
Not your studies, of course. Yours are great!
This is a niche post. It’s about the sort of developmental psychology research that gets published in journals like Child Development and Developmental Psychology and presented in meetings like the Cognitive Development Society. If you’re among the 99%+ of my subscribers who don’t care about this, please don’t unsubscribe. Just skip this, and then I’ll be back soon to write about topics like AI, envy, empathy, mental illness, and so on. See you in a few days!
Ok, the civilians are gone. Now it’s just us.
“This paper fills a much-needed gap in the literature”1
—comment by a critical reviewer, first observed in 1950.
I attended the Cognitive Development Society conference in Pasadena, CA, earlier this year. CDS is my favorite developmental conference, with the best speakers and talks, and this one was a blast. I had a lot of fun and learned a lot.
But I also noticed something. Many of the talks I attended had a certain structure, and I realized that I’ve been seeing this for a long time—including in colloquium talks, student presentations, and journal articles. They reported work that followed this recipe:
1. Start with an observation about adults—some ability, intuition, opinion, or understanding that adults in our society have.
2. Develop a task to test for the presence of this ability, intuition, etc. in children.
3. Test children of different age groups, usually looking at (a) an age where you don’t expect them to be adult-like and (b) an age where you do expect them to be adult-like.
4. If you find that neither age group is adult-like, test an older age group.
5. If you find that both age groups are adult-like, test a younger age group.
6. Present your findings in a graph like the one below, showing that children become more adult-like over time. These days, you’ll probably go for a fancier graph from R, but I’m going old school here (thanks, Excel) and forsaking even the error bars.
7. State your conclusions. Here, you might say that 3-year-olds don’t get it at all, 5-year-olds are better (But are they better than chance? Well, do the analysis; see the sketch after this list.), and 7-year-olds pretty much nail it.
8. If it’s a talk, graciously bask in applause. Be prepared for questions, but don’t worry; they’ll be easy to deal with. You might be asked if younger children would have succeeded if you had made the task easier. (Answer: Great question! Yes, maybe so! We’re hoping to simplify our design for future studies.) Would you expect to get the same findings if you tested children in other societies? (Answer: Great question! We’re hoping to do cross-cultural work in the future!) Maybe someone will push back and say something like: “Why do you think so-and-so’s lab in Berkeley (say) finds that even 4-year-olds get this thing, and you only find it in 5-year-olds?” A good answer is: “Well, they must have smarter kids in Berkeley!” That’ll usually get a laugh, and maybe it’s true.
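Since step 7 invites the “better than chance?” question, here is a minimal sketch, in Python, of what that analysis and graph typically look like. Everything in it is invented for illustration: the counts, the group labels, and the assumption of a two-alternative task with a 0.5 chance level. It is not drawn from any real study.

```python
# A minimal sketch of steps 6-7 with made-up numbers: an exact binomial
# test of each (hypothetical) age group's performance against chance,
# plus the classic ascending bar graph, error bars forsaken as promised.
import matplotlib.pyplot as plt
from scipy.stats import binomtest

CHANCE = 0.5  # assumed chance level for a two-alternative task

# Hypothetical data: (age group, children responding adult-like, n tested)
groups = [
    ("3-year-olds", 11, 24),
    ("5-year-olds", 16, 24),
    ("7-year-olds", 22, 24),
]

for label, successes, n in groups:
    result = binomtest(successes, n, p=CHANCE)
    print(f"{label}: {successes}/{n} adult-like, p = {result.pvalue:.3f} vs. chance")

# The stereotypical graph: proportion adult-like, rising with age.
plt.bar([g[0] for g in groups], [g[1] / g[2] for g in groups])
plt.axhline(CHANCE, linestyle="--", color="gray", label="chance")
plt.ylabel("Proportion adult-like responses")
plt.ylim(0, 1)
plt.legend()
plt.show()
```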
I am going to argue that most of these studies are a waste of time. But I’m not saying that they are easy to do.
Step 1 is tough because you have to choose the right topic—the right sort of adult ability, intuition, etc. Nobody cares about the developmental trend of learning that Paris is the capital of France, or that Hamlet ends badly, or that when you stand up, your lap goes away.
Often, the right topics have to do with some subtle aspect of physical, numerical, and social understanding, such as, say, knowing that when you multiply two negative numbers, you get a positive number, or that it’s harder to forgive a serious transgression than a minor one, or that people often get happy when something bad happens to their rivals.2 Often, investigators want to know what children think of politically charged issues, such as racial and sexual stereotypes, trans rights, immigration, and climate change. In the areas of developmental psychology I’m most interested in, topics are often drawn from philosophical examples or “experimental philosophy” studies with adults. One might want to know when children come to have adult-like judgments about trolley problems, say, or when they come to get the adult intuition that human lives in the future matter less than those in the present.
Step 2 is tough because it requires figuring out good ways to test children, something which requires considerable methodological skill. (Some of the newer methods that I learned about at CDS are ingenious.)
Steps 3-5 are tough because they involve testing children. These are not the survey studies that our lazybones social psychology colleagues run online. The difficulty of testing babies and children is one reason why developmental psychologists struggle to get out a couple of empirical papers a year while some of our colleagues down the hall can generate triple-digit h-indexes without ever leaving their comfy offices.
There are good reasons to do some of these studies.
Sometimes, the findings have practical importance. Teachers, therapists, parents, and judges might be interested in what children know at certain ages. Now, in reality, the studies almost never have the practical importance that the investigators boast about in their grant applications. It’s often hard to take data from a developmental lab and apply it to a classroom or a courtroom. But, still, this is a respectable reason to do a study.
Most often, the studies bear on theoretical issues. Suppose you believe that some ability is innate or at least emerges prior to an understanding of language. Then it becomes really interesting to discover that 6-month-olds can do it—or that 3-year-olds can’t do it. Some of the great discoveries in our field have been that babies have astonishing capacities (pro-nativist—think of the work of Elizabeth Spelke and others) and that older children show surprising limitations (anti-nativist—think of the work of Jean Piaget and others). Sometimes, theories make more nuanced predictions about age. Some psychologists might believe that some mature ability will crop up just at the point when children are able to walk, for instance, and so a study that tells us that children can do it much earlier (2 months, say) or only much later (9 years, say) is a real kick in the pants to such a view. Interesting work!
In the above cases, the motivation for the work is clear, and the speaker will usually explain it in the first two minutes of the talk:
“We want to know whether 9-year-olds know blah blah because it will help us question them properly in domestic abuse cases.”
“It’s often said that blah blah is innate. We challenge this view by showing that 2-year-olds struggle to understand this basic notion.”
“So-and-so claims that 4-year-olds lack the cognitive capacity for blah blah, but here we find that even 12-month-olds succeed when the task is made easy enough.”
But some of these talks don’t begin with a justification. Sometimes, they start with something like:
“Adults do blah blah ...”
and then:
“… But nobody has yet tested what children do.”
“… It is important to provide a developmental perspective.”
“… there is a gap in the literature.”
Often, there isn’t even this. The speaker talks briefly about something that adults know or do. “Studies show that adults believe that, on average, men have deeper voices than women.” And then, without missing a beat, the speaker goes on to say: “We tested a group of 3-year-olds and a group of 5-year-olds to explore the developmental period at which this understanding emerges.”
The talks don’t have justifications because there aren’t any. Suppose someone were to stand up during the question period—not me!—and ask:
Sorry, I must have missed this, but why does it matter? Who cares whether this nugget of knowledge shows up at age 3 or 5 or whatever? Obviously it has to come in sometime between babyhood and adulthood. Who cares precisely when? What theory would your data support? Who would be surprised if the answer is one thing or another? Who would be pleased? Why is this experiment worth doing?
I have asked much gentler versions of these questions when I’m discussing ideas with students. Sometimes they find it strange to be asked why they did their studies. One student laughed nervously and said: “I heard you like to ask philosophical questions.”
I never blame the students. I blame their advisors. Many developmental psychologists have deep theoretical motivations for their work. But some of them apparently run labs where the only motivation for running studies that anybody discusses is to get papers accepted by conferences and published in journals—or to get future grant support to write papers that are accepted by conferences and published in journals.
Doing experiments where the findings don’t matter is not a valuable activity. If you can’t answer the question “Why are you doing the study?” with something better than “Nobody has done it before,” you shouldn’t be doing the study. Journals shouldn’t send out such papers for review, and conferences shouldn’t accept them. Of course, researchers can make discoveries by accident, and sometimes a finding can be the catalyst for interesting work in the future. And there are worse sins than wasting everyone’s time. But publications and presentations are zero-sum, and if we encouraged higher standards, there would be more room for the good stuff.
Confession here: I’ll admit that this theoryless way of proceeding is, right now, a pretty good way to get papers published. One can imagine a first-year student coming into an advisor’s office and struggling to find a project to work on, and the advisor says:
Look through the last few issues of top journals like Psych Science, PNAS, and Cognition, and find a good adult result of the sort that can be run with children. Then we’ll run the same experiment with 4- to 6-year-olds and see what we find.
Bigger confession now—this way of doing developmental research has earned me and my students publications.3 But I no longer think of it as a way to do good science.4
We’ve been in a similar situation before.
When neuroimaging came onto the scene, there was great excitement about demonstrations that certain specific parts of the brain were active when people thought of different things. You couldn’t open up an issue of Science or Nature without seeing colorful pictures of brain activation. Look at what happens in the brain when people do math problems! Or listen to music! Or feel envy!
Here is what I said about this in 2016.
Nowadays, many people only seriously consider claims about our mental lives if you can show them pretty pictures from a brain scanner. Even among psychologists who should know better, images derived from PET or fMRI scans are seen as reflecting something more scientific—more real—than anything else a psychologist could discover. There is a particular obsession with localization, as if knowing where something is in the brain is the key to explaining it.
I see this when I give popular talks. The question I dread most is “Where does it happen in the brain?” Often, whoever asks this question knows nothing about neuroscience. I could make up a funny-sounding brain part—“It’s in the flurbus murbus”—and my questioner would be satisfied. What’s really wanted is some reassurance that there is true science going on and that the phenomenon I’m discussing actually exists. To some, this means that I have to say something specific about the brain.
This assumption reflects a serious confusion about the mind and how to study it. After all, unless one is a neuroanatomist, the brute facts about specific location—that the posterior cingulate gyrus is active during certain sorts of moral deliberation, say—are, in and of themselves, boring. Moral deliberation has to be somewhere in the brain, after all. It’s not going to be in the foot or the stomach, and it’s certainly not going to reside in some mysterious immaterial realm. So who cares about precisely where?
Many years earlier, the philosopher Jerry Fodor put this in a pithier way:
If the mind happens in space at all, it happens somewhere north of the neck. What exactly turns on knowing how far north?
Fortunately, neuroscience has outgrown this interest in localization for localization’s sake. There is still a lot of research focusing on what part of the brain some psychological process corresponds to, but this is almost always in the service of answering theoretical questions that bear on competing psychological theories. (For examples, see the first chapter of my Psych).
I hope developmental psychology advances in a similar way.
I hesitated before publishing this post. Why piss people off? But I spoke about my concern with a friend in the field—a developmental psychologist whose work is much cooler than mine—and he said that (1) everyone knows that the kind of unmotivated research I’m complaining about has little value, and (2) everyone thinks everyone else does this unmotivated work—their own work is deep and theoretically grounded—and so nobody will think I’m talking about them. And, most of all, (3) it’s a good thing to get these concerns out there, especially for early-career researchers. So here it is.
This phrase is sometimes mistakenly used as a compliment (with the intended meaning of: “much-needed-FILLING-OF-A-gap”). But the literal interpretation is definitely not complimentary.
In this example and others, I’m not drawing on actual research that I’ve heard about at the CDS conference or elsewhere—I have no interest in calling anyone out.
I can tell you the precise papers that I wrote this way, but the problem is that they were co-authored with graduate students, and I don’t want to throw them under the bus. (According to the faculty handbook, you can self-flagellate all you want, but flagellating graduate students is forbidden.)
So, how should you come up with new ideas? That deserves its own post, but one short answer (which is not the only way to do things) is that the advisor and the student should start by doing a lot of reading, both theoretical and empirical, and then brainstorm about theories and phenomena that are worth exploring.