You're right that often when critics talk about psychology they mean social psychology. There's a generation (or generations?) of people who got into psychology because of the flashier TED-Talk-style findings, made their careers out of those findings, and for them that is the field. Most people's interactions with psychology are with that kind of pop-psych stuff. So even though there are more reliable and robust findings, it's all very different from what people expect, I guess. My hope is that as social psych falls in status we can focus more on the biological and evolutionary aspects, which it has tended to downplay in favor of environmental explanations.
I think I interpret Adam's concern a little differently. These weren't just any old folks who got caught out--they were major players in the field with prestigious positions and thousands of publications. I don't follow psychology and even *I've* read Ariely. Yet when their work was discredited, no textbooks had to be rewritten, no conference topics changed.
In other words, they got all those accolades despite the fact that their research contributed nothing.
In every discipline, people want to be successful. But skill follows a normal curve. Only a select few have the chops--and the luck--to drive real impact. So there is always pressure to measure people on metrics that don't require it: volume of work, hitting deadlines, how "agreeable" someone is, and so on. This is as true for the people at the top as at the bottom.
But here's where capitalism, despite its general shittiness, is useful. The people who own a company want it to make money so they can make money. This creates pressure on leaders and incentivizes them to reward employees who move the bottom line. The opposing pressure still exists--in fact it is still quite strong--but it adds *some* meritocracy.
Unfortunately in non-profit disciplines like academia, there is no opposing force. We tell everyone the lie that publishing papers, making tenure, and getting grants are signals of important work, but they are not. Just look how long it took behavioral economics to get any sort of foothold. The gatekeepers were all classical economists. They had no desire to see ideas promoted that called their own work into question. Ask yourself, why did Katalin Karikó lose her tenure-track position? Why couldn't she or Drew Weissman get grants for their work?
I've thought about this problem for many years. How do we create incentive structures at non-profits that will promote impact the same way the profit motive drives it at for-profit companies? No luck so far, but maybe someone here will be smart enough to solve it!
"How do we create incentive structures at non-profits that will promote impact the same way the profit motive drives it at for-profit companies?"
The answer is quite simple, although unpalatable to those with vested interests: stop giving them taxpayer money. That will leave them dependent on the charity of individuals who are interested in the advancement of knowledge, and they will have to compete on merit to attract and maintain that support.
Letting (probably ideological) bureaucrats decide which scientists get other people’s money is a ludicrous, cancerous system.
If this were just a problem of government mismanagement, we wouldn't see the same issue at donor-funded non-profits--but we do.
I think there are three fundamental problems that need to be solved:
One, effectively measuring impact. "How much revenue did this company generate in April?" is a 'defined problem'. If someone gives you an answer it's either right or wrong, and you can check to find out. But "does this research meaningfully change our understanding of <area>" is highly subjective, and can even change over time.
Two, making success shared. Things are impactful in research because they *change* our understanding. If I'm the CEO and my employee's project makes a lot of money, I benefit too! But if I'm the department chair and your research calls my theories into question, I'm only harmed by you. Unless the people in power benefit from the success of new ideas, they will fight them.
Three, creating self-reinforcing systems. The companies that make the most money have the most money to reinvest in themselves. Success inherently gives you the tools to do more of the thing that was successful. But non-profit outcomes aren't like that. If I'm trying to reduce homelessness and discover a technique that works, that doesn't inherently hand me more funding. I have to convince people (donors, the government, my boss--somebody), and they might not give it to me for all kinds of reasons that have nothing to do with whether my technique is effective.
Show me a non-profit that doesn’t receive, directly or indirectly, government support, and I’ll show you a non-profit that does simply what its donors (who, to be clear, also must not receive government support in any form) want it to do. If they are poorly run, they won’t last because the donors will shift their money to better ones. They will not be propped up by “private profits, socialized losses”.
"How do we create incentive structures at non-profits that will promote impact the same way the profit motive drives it at for-profit companies?"
Speaking as someone with a lot of academic, nonprofit, AND for-profit experience (especially with startups)...they have. It just doesn't look the same way, and the outcome is different.
Businesses are, ultimately, factories. If yours isn't working, the main reason is that you don't understand it well enough to fix it--speaking as a consultant. No big deal; factories have a lot going on. Fundamentally, they're about taking known inputs, transforming them into known outputs of value via a stable process, and repeating this. That's why a fair number of people say making money is easy. Nobody has EVER said this about groundbreaking research.
Nonprofits and academia are fundamentally gardens. Since the outcome is unknown beyond "world-changing," all you can do is create the best conditions possible for this to happen. Top-tier institutions know how to do this very well, and looked at objectively they *do* spin out a stream of world-changing things. Sure, it looks like a trickle, but take a closer look at each drop of water. You're never gonna get a whole river of them. Or to change metaphors, some of the best vineyards in France are the size of a Cheesecake Factory, with literally every square foot of dirt analyzed and pampered. They still don't produce the best wines EVAR, not every time. And even their worst years are pretty good--though not always.
While many may not regard connectionist models as pure psychology, they certainly have some origins in cognitive science. To this end, I think we can regard the current crop of AI, and those on the horizon, as arguably zeitgeist-level events that qualify as "world-changing."
What about the findings around "nudges"? They're not world-changing, but they did inform policy in Obama's response to the global financial crisis, amounting to billions of dollars in stimulus. Unfortunately for Obama, the very nature of the finding means the policy response went unnoticed, so he was punished in the 2010 elections.
Interesting. I interpreted Mastroianni's (main) point in a completely different way. You write: "As I interpret the logic of Mastroianni’s argument so far, it’s this: If our science is going well, discovering that specific findings by single investigators are mistaken (due to error, fraud, poor design, whatever) should have major consequences."
I think he meant the exact opposite straight from the get-go: if we remove specific findings by single investigators, we *don't* get major consequences for the psychological body of science (which is what you claim later on too). For instance, if we posit that ego depletion is mistaken, this doesn't affect 99.99% of people. In this context, fraud matters little. This is what I'd say is the "indictment of our field": in the grand scale of science, psychology's (wobbly) findings matter little.
But that might be a feature of the field, not a bug; we don't build things with our theories the way physics or biology do. We deal in abstract concepts. This would be fine, I think, if we all collectively agreed that we shouldn't expect the same caliber of findings from psychology as we do from other, exact, sciences. But psychology has gotten quite a bit of spotlight in recent years, either due to endless hype for "breakthrough" findings (which matter very little in the end) or because of the general "psychologization" of society. And at many universities it counts itself an exact science (because hey, if we use the tools of exact science, we must be an exact science, never mind the wobbliness of the subject matter: humans). So it gets a lot of bashing as well. In that sense, I agree with you: I also think psychology is ok, provided we expect much less from it than we now do--it's nice to know about attentional blindness and to have a sophisticated understanding of babies (or any of the other findings you listed), but those are "interesting" or "quirky", something you read for leisure in the afternoon, not "life-changing".
"rich, lonely monkey" will never not be funny. Love the friendly disagreement. More of this, less group head-bobbing, please.
New subscriber. This is fascinating reading!
Is there any chance you could link to survey papers for points 5 and 8? I’d very much like to know more about those...
it should be easy enough to find citations on google or google scholar, but if you get really stuck, ask again!
Thank you. On fear, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6716607/pdf/nihms-1006718.pdf is skeptical; I’ve skimmed through abstracts of the papers that cite it and can’t see a refutation. I’d be keen to hear your take on the paper.
Great response!
This could be a candidate for an important development: psychopathology, based on symptom covariation, has a dimensional and hierarchical structure https://en.wikipedia.org/wiki/Hierarchical_Taxonomy_of_Psychopathology
PS. I'm 2/3rds done with Psych and really enjoying it!
Good piece, & FYI typo in #4 - should be "Adelson"
thanks, Hilary! - I'll fix it
As for point #2, what about this (just came out): https://www.pnas.org/doi/10.1073/pnas.2214930120
Thanks. I think it's a nice paper, and helps explore the issue of when inattentional blindness occurs. (But it doesn't challenge the main finding.)
That is one pretty weak reply.