Lee Jussim:

It is a reaction time difference, not a bias. Whether any particular score, including a faster response to elderly-bad/young-good than to elderly-good/young-bad, reflects a "negative association" in any sense other than "speed of reaction time" is precisely what is at issue. And yes, I am saying that a faster response on the IAT to e-b/y-g than to e-g/y-b does not necessarily mean "negative association" in any conventional meaning of the word "negative."
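
To make concrete what that score actually is, here is a minimal sketch of the kind of arithmetic involved, loosely following the Greenwald et al. (2003) D-score but with invented latencies and none of the error penalties or trial exclusions of the real scoring algorithm. Everything beyond this arithmetic, including calling the result a "negative association," is an added layer of interpretation:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT difference score: the gap in mean latency between the
    two pairing conditions, scaled by the pooled standard deviation of all
    latencies. Roughly the core of the Greenwald et al. (2003) D-score,
    omitting its error penalties and trial-exclusion rules."""
    mean_diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return mean_diff / pooled_sd

# Invented latencies (ms) for one hypothetical respondent
elderly_bad_young_good = [701, 688, 730, 745, 710]   # the "compatible" block
elderly_good_young_bad = [812, 840, 795, 905, 860]   # the "incompatible" block

d = iat_d_score(elderly_bad_young_good, elderly_good_young_bad)
print(round(d, 2))  # positive: faster on the elderly-bad/young-good pairing
```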

Relatedly, you did define implicit bias as an association involving human groups, so that would exclude my ham and cheese example (though, cognitively, I do not see why it should, but it's your definition, so I'll go with it). I associate the French with wine, pro baseball players with athleticism, and rural dwellers in the U.S. with Trump support. I do not see how any of these would constitute bias under any conventional meaning of the term. If one agrees that some associations are not bias, then bias cannot be defined as associations of concepts with human groups. It remains possible that some associations of concepts with human groups are some sort of bias, but one would then need to define bias as something other than mere association.

Paul Bloom:

My definition of "implicit bias" doesn't include ham and cheese because nobody would think of ham and cheese as a bias. I'm not sure why you're confused about this definitional choice--shouldn't a good definition match the usual meaning of a term? And I agree that associating French with wine, etc. isn't a bias either. But I did also mention valence -- "often with some sort of positive or negative feelings". If I took out "often" (which I probably should), would we be on the same page?

If we get a faster response to elderly-bad/young-good than to elderly-good/young-bad, there has to be a reason for this. I think it's because we associate elderly with bad and young with good. You disagree, which is fine, but what's your alternative?

Lee Jussim:

No, that's not really the point. The point is that the IAT is so filled with measurement artifacts that a simple interpretation of "reaction time diff" as "association" is not necessarily justified. See Fiedler et al., 2006; Blanton et al., 2015 ("Towards a meaningful metric..."); Sherman's various papers on the Quad Model; Machery's work on IAT anomalies; or Gawronski et al., 2022, Psych Inq, title: "Reflections on the difference between implicit bias and bias on implicit measures." Great exchange there. Quoting G et al.: "Our rejection of BIM as an indicator of unconscious biases raises the question of whether implicit measures still have any value for research on social biases. Some commentators seemed rather skeptical about that, noting that the research program on BIM has lost considerable momentum over the last years—partly due to unresolved debates about the predictive validity of BIM and meta-analytic evidence questioning the presumed causal role of BIM in discriminatory behavior." [BIM = bias on implicit measures]. And also: "Expanding on the debate about the meaning of the term implicit, we discourage using the term implicit in reference to bias. Use of the term implicit is just too flexible and inconsistent to ensure conceptual precision."

Or, for a one-stop shop: https://osf.io/74whk/ collects more than 40 sources critical of the IAT in particular or of the concept of implicit bias in general.

Paul Bloom:

Those are useful citations and quotes, Lee, on many topics. And I agree with a lot of them, such as about the lack of predictive validity of the IAT.

But I was hoping you could answer my question. You said that I was wrong when I said that a faster response to elderly-bad/young-good is because we associate elderly with bad and young with good. You repeat this here, saying that my conclusion about associations is "not justified." OK, good, so how do YOU explain the effect? (I'm not playing gotcha -- I'm genuinely interested in what the best alternative is, and I know you've given these issues a lot of thought.)

Lee Jussim:

"How do you explain the effect?" is exactly the right question, Paul. The answer is ... no one knows, because the effect may reflect an association, but it may also reflect so many other artifacts and irrelevancies that its interpretation is not knowable with anything approaching scientific certainty. Accordingly, Corneille (in a commentary on the Gawronski et al article) recommends abandoning the concept entirely. Another commentary (which I do not have handy, might have been in a different exchange) referred to work with the IAT as a "degenerating" line of research -- the consequence of recognizing so many unknowns about what the IAT captures has led to a fair amount of work, not on "implicit bias" (whatever that means) but on figuring out what in tarnation the IAT actually does measure.

Lee Jussim:

Not to beat a (hopefully) dead horse (though, field-wise, it's more of a zombie horse, one that should be dead but keeps appearing in peer review), I found the "degenerating line of research" quote, and it is in the same Psych Inq exchange. Full quote:

"the implicit measurement approach to implicit bias has suffered from significant paradigm degeneration (Lakatos,1970). To maintain itself, auxiliary assumptions such as multiple moderators in conjunction lead to respectable predictive validity correlations (Kurdi et al.,2019), social desirability bias on laboratory behavioral measures (Tierney et al.,2020), the cumulative consequences of minute discriminatory biases (Greenwald et al.,2015; Hardy et al.,2022), mismatched and suboptimal behavioral outcomes in studies examining causality (Gawronski et al., this issue), and aggregate-level crowd biases (Payne et al.,2017) must be invoked. Some or even all these defenses may hold empirically. And yet this heavily modified theoretical structure would still represent a major retreat from earlier models in which pervasive individual-level implicit prejudices and stereotypes constitute major causal contributors to societal inequities. Thus, we believe that Gawronski et al. (this issue) underestimate the seriousness of the empirical challenges to the“bias on implicit measures”(BIM) paradigm, as well as the need for major reforms including (but not limited to) those they advocate."

Source:

"Avoiding bias in the search for implicit bias":

https://www.tandfonline.com/doi/epdf/10.1080/1047840X.2022.2106762?needAccess=true&role=button

I note, however, that G et al's reply to the commentaries acknowledged *even more* limitations to the IAT than did their target article, so they were reasonable and responsive to this sort of criticism.

With "misinfo" in the air the last few years (and I won't even get started on the current war), I like to turn the academic spotlight typically used for some variation of "look how idiotic people are" (and lord knows, people can be deluded fools) on academics themselves. This is from a talk o "Academic Misinformation" and it starts out by pointing out that "misinfo" is (as far as I can tell) not very different from "falsehoods" or "inaccuracies" -- so its mostly old wine in a new bottle. This slide is on:

Different Types of Falsehoods

1. Known to be Factually Untrue. “It is brighter at night than during the day.”

2. Unjustified by the evidence. “There was once life on Mars.”

3. Misleading by presentation of incomplete evidence, when the complete picture shows something different. Almost anything about which there are (social) scientific controversies.

Most claims about the IAT and implicit bias do not fall under 1. They do fall under 2 and 3.

Solution? Let's wait for another 30 years or so before making bold claims about implicit bias, including (especially?) claims based on the IAT.

------

Historical Digression with Perhaps Some Parallel

1940s: New Look in Perception! People's motivations influence basic perception!

1950s: Withering criticisms revealing artifacts, biases, and unruled out (and likely) alternatives. (F. Allport, Prentice, and others).

1980s: Automaticity! Social Priming!

2000s: Social priming shown to be filled with p-hacked and unreplicable studies.

2000s: People's motivations do influence perception after all! We have better methods! (Balcetis, Cole, Banerjee).

2016: Firestone & Scholl. Rinse & repeat (see 1950s withering criticisms, updated and more sophisticated but essentially applied in the same way).

BUT:

2021: Cole & Balcetis, Adv Exp Soc, tl;dr, paraphrased conclusion not actual quote: "we took the criticisms very seriously, have performed slews of studies that address and account for them, and there is now clear evidence for motivation influencing basic perception."

IDK what I make of all this, and I look forward to *this* work being openly criticized by skeptics. But at least it's a step forward.

So, if you start in 1947 and end up in 2021, that's 74 years, and the evidence is definitely better, but we won't know how definitive it is for a while yet.

That is, to me, an excellent model for how to treat most claims based on the IAT or about "implicit bias."

Paul Bloom:

Thanks for the thoughts, Lee. If I do a deeper dive into this topic, these references and your arguments are going to be very useful.

Dan Grubbs:

I'm getting old and I find that it really is bad. I would rather be young.

I think the first problem is using the term 'bias' in the first place. People have a bias against 'bias'. It implies something bad prior to determining whether the effect is good, bad or neutral. I think the people who came up with this stuff determined it was bad due to confirmation bias. They believed racism was everywhere and this interpretation confirmed their prior belief.

Edgy Ideas:

How do we disambiguate a measure of received wisdom from one measuring prejudice? It's really the same question.

One should look at the anomalies produced by the theory.

The important one I NEVER see Banaji et al. discuss in reference to the IAT is "slow learning" (except perhaps Lee Dunham).

This is the major difference between the RACE: GOOD-BAD IAT and the GENDER: GOOD-BAD IAT. They failed to find any potentially causal slow-learning effects for Black:White, even though they were forced to create measures for younger and younger age groups (as young as 5).

This could mean that part of the RACE IAT construct measures the same type of categorisation effect as boys pulling on different coloured football shirts (i.e., Team White and Team Black).

However, the Harvard Implicit study shows clear longitudinal effects for Gender, with increasing shifts towards Male: Bad, Female: Good for BOTH sexes. One might therefore conclude that a slow-learning effect is present, which would not be surprising given current messaging to boys and girls as they go from adolescence to adulthood.
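
To illustrate what checking for such a slow-learning effect could look like, here is a minimal sketch with made-up ages and D-scores (purely illustrative, not the actual Harvard data): the question is simply whether the score drifts with respondent age or stays flat.

```python
from statistics import linear_regression

# Made-up mean gender IAT D-scores by respondent age, purely illustrative:
# a slow-learning account predicts the score drifts with age rather than
# being fixed from early childhood.
ages     = [10, 14, 18, 22, 26, 30]
d_scores = [0.05, 0.12, 0.18, 0.22, 0.27, 0.30]

slope, intercept = linear_regression(ages, d_scores)
print(f"change in D per year of age: {slope:+.3f}")
# A clearly nonzero slope (here positive) is the kind of age trend one would
# expect if the association is gradually learned; a flat line fits an
# "instantaneous" categorisation effect better.
```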

If that is the case then the IAT must be separated into two components - Instantaneous and Slow Learning (Speculating a bit but perhaps corresponding to System 1 and 2 related evaluations).

Lastly, in terms of stereotype forms of the IAT, how do we disambiguate between stereotypes that are on the whole true and ones that are a product of bias?

This could be the problem of "Systematizer" vs "Empathiser" perceptions.

Systematizers might assess the world as it happens to be, whereas empathisers might assess the world as it ought to be (akin to Jungian Thinking and Feeling types).

e.g., if an IAT result shows Men: Violent, Women: Passive, are you measuring actual bias or the subjective perception of a stereotype?
