Bias in Bias
From bias research to science and science communication on masks and abortion, bias plagues researchers even when they claim to be identifying or correcting it
“ ‘Ain’t’ ain’t a word, and I ain’t gonna say it no more,” my fifth-grade teacher gently teased a particularly Southern gentleman in the class. But “ain’t” and “y’all” are normal parts of speech in Alabama. So undeniably so that, when the law school offered me a free ride, I selected the university T-shirt that said “Hey Y’All” on the front and “Bye Y’All” on the back, to celebrate. I never went to law school.
Eventually, I got a PhD instead, because I was young and needed the money. Then, after a postdoc that took me to UCLA and Harvard, I left the U.S. and academia. It was here in Berlin that I discovered that, in spite of not saying “ain’t” and “y’all” in polite company, and definitely not having a Southern accent in Alabama, I do indeed have a Southern accent in Northern Europe — including an understanding of the correct pronunciation of the word “jaguar” that precludes me from ever marrying well. (The word is pronounced “JAG-wire”; there are no other acceptable pronunciations.)
Image: Bernard Dupont, 2008, Belize Zoo, Belize, Creative Commons Attribution-Share Alike 2.0 Generic license, Wikimedia Commons.
Accents, it gradually dawned on me, are not absolute binaries — although cases exist at the poles of the continuum, like Mr. “ ‘Ain’t’ ain’t a word.” We can’t hear ourselves relative to any group other than the one(s) we’re in and among. And, bless my heart, I’m the hick who took a few decades to realize this.
In much the same way, we’re limited in how much and how well we can get the meta on our own cognitive and psychosocial biases. “This is water,” in David Foster Wallace’s parlance. “There's not much science in science,” in Sander Greenland’s. “No Exit” from the miserable psychosocial swamp of other people, including the ones in our own heads, to invoke Sartre.
So bias research is itself plagued by bias, misinformation discourse by misinformation, and research integrity work by the very problems it ostensibly addresses. This series of posts looks at an example of each of these tail-eating problems in science, and particularly in bias research itself: (1) bias in research on bias in research on bias — that is, bias in the self-reflexive cognitive bias, (2) misinformation in misinformation discourse about misinformation discourse about masks — that is, bias in the bias-spotters three layers in, and (3) bias in research integrity work dealing with abortion research.
Then, it suggests we grapple better with our epistemological limits in two apparently diametrically opposed ways. First, following punk philosopher of science Paul Feyerabend, by accepting that we can’t necessarily use science to identify science’s limits, or our own limits in doing it — and so developing and practicing a greater respect for the unknown or unpopular truths that dissenting views may express, holistic attitudes toward multi-method research, and an appreciation for the intransigence of selection bias in human affairs. And second, by doing better science that applies widely accepted evidentiary standards.
Our practical choice in trying to make a better world, then, is not between embracing diverse methodologies and perspectives as part of scientific discourse, and advancing science through improving its methodological rigor in a uniform, standardized way. Rather, it is between seeing both of these endeavors as equally logical, necessary steps in grappling with our epistemological limits as part of the same critical-reflective science — or holding them in false opposition. Taking the former path means giving up on the hope of shortcuts to doing better science. Heuristics, checklists, and other means of avoiding critical thinking, as a general rule, won’t get us where we need to go — grappling with our limits and, sometimes, learning in the process.
The Self-Reflexive Cognitive Bias
Extending Sir Francis Bacon’s concern from his 1620 Novum Organum that scientists are vulnerable to the same perceptual and cognitive biases as everyone else, philosophers Joshua Mugg and Muhammad Ali Khalidi explore four possible cases of a new paradox, the Self-Reflexive Bias Paradox:
If a subject S conducts research R, which is evidence for the existence of bias B that arises in context C, and R was conducted in context C, then R was likely subject to B and S should not accept research R as good evidence for the existence of B. If S had accepted the existence of B purely on the basis of R, then S should not accept the existence of B; but, if S does not accept R as good evidence for the existence of B on the basis that R was likely subject to B, then S should accept the existence of B (Mugg and Khalidi, p. 88).
This paradox depends on artificially narrow constraints that themselves reflect common cognitive biases like dichotomania (accepting or rejecting research as good evidence). In the real world, for practical purposes, the paradox does not exist, for two reasons: (1) There’s plenty of bias to go around. Just because we have epistemological limits doesn’t mean there’s a paradox in identifying that we have them, or in that identification being subject to them, too. It just limits our ability to know what we do and don’t know about them. Meta-cognition always has limits; scientists are always imperfect human beings. Fallibility is a fact, not a logical paradox. And (2) we can accept uncertainty instead of dichotomizing evidence interpretation.
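To make point (2) concrete, here is a minimal, hypothetical sketch (mine, not Mugg and Khalidi’s, with made-up numbers): treat “bias B exists” as a hypothesis, let the chance that research R itself escaped the bias be uncertain, and update a graded degree of belief rather than issuing an accept-or-reject verdict on either R or B.

```python
# Hypothetical sketch: graded belief in "bias B exists" given research R
# that may itself have been subject to B. All numbers are made up.

def posterior_bias_exists(prior_b, p_r_if_b, p_r_if_not_b, p_r_unbiased):
    """Update belief in B after seeing result R, allowing that R itself
    may have been biased. If R was biased, we assume (for illustration)
    it is uninformative, i.e., equally likely under B and not-B."""
    like_b = p_r_unbiased * p_r_if_b + (1 - p_r_unbiased) * 0.5
    like_not_b = p_r_unbiased * p_r_if_not_b + (1 - p_r_unbiased) * 0.5
    evidence = prior_b * like_b + (1 - prior_b) * like_not_b
    return prior_b * like_b / evidence

print(posterior_bias_exists(prior_b=0.5, p_r_if_b=0.8,
                            p_r_if_not_b=0.2, p_r_unbiased=0.6))  # ≈ 0.68
```

With these illustrative numbers, belief in B rises from 0.5 to about 0.68, less than the 0.80 it would reach if R were known to be unbiased, and with no all-or-nothing verdict forced on either R or B.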
The four cases Mugg and Khalidi explore are (1) the theory-ladenness of observation (TLO, Brewer 2012; closely related to confirmation bias, and also a possible example of the paradox); (2) the Dunning-Kruger effect (aka “Unskilled and unaware of it,” Kruger and Dunning 1999); (3) the bias bias (Gigerenzer, “The Bias Bias in Behavioral Economics,” Review of Behavioral Economics, 2018, 5: 303-336); and (4) the inherence heuristic (Cimpian and Salomon 2014). Mugg and Khalidi identify ways TLO and the Dunning-Kruger effect may escape the paradox, but not ways for the bias bias and the inherence heuristic to overcome it. However, as argued above, there is also no paradox for practical purposes in the latter two cases. The appearance of a paradox is a product of biased, artificially narrow thought-experimental constraints, including dichotomania in weighing the merits of evidence. Accepting uncertainty allows these phenomena to withstand self-reflexivity. Here’s a bit more about these two cases.
The Bias Bias
As I wrote previously:
There is a massive risk assessment and communication literature to which I don’t begin to do justice here, but I would be remiss if I didn’t say I’m enjoying learning and thinking more about cognitive psychologist Gerd Gigerenzer’s work (h/t Richard McElreath). Gigerenzer, director of the Harding Center for Risk Literacy here in Berlin, builds on Herbert Simon’s idea that people make better decisions satisficing — deciding “good enough” on imperfect information — rather than optimizing, or trying to figure it all out and make the best possible choice. This strain of decision theory sings the counterpoint to the Kahneman, Tversky, Piattelli-Palmarini melody: Our cognitive-emotional software is full of bugs that make biases and mistakes common if not unavoidable; we’re just smart enough to see how stupid we are “if we are very careful and try very hard” — and part of the problem is that we use heuristics (information shortcuts) to sort things out. Sure, but heuristics work, Simon, Gigerenzer, and others say. We just need better risk literacy, science, and communication so people know when and how to use them. “The fault, dear Brutus, is not in our stars, But in ourselves…”
In “The Bias Bias in Behavioral Economics” (Review of Behavioral Economics, 2018, 5: 303-336), Gigerenzer continues playing this counterpoint to Kahneman et al., but goes one step further. Behavioral economists, he charges, portray “psychology as the study of irrationality,” and so wind up spotting “biases even when there are none” (p. 303). He calls this the bias bias — “the tendency to see systematic biases in behavior even when there is only unsystematic error or no verifiable error at all” (p. 307).
To revisit an example I wrote on previously: in his seminal “How to Improve Bayesian Reasoning Without Instruction: Frequency Formats,” with Ulrich Hoffrage (Psychological Review, 1995, 102(4): 684-704), Gigerenzer shows that people reason far better when risk information is presented as natural frequencies (e.g., “8 out of 1,000 people”) than as single-event probabilities (e.g., “0.8%”). That is,
when we get mathematically identical information in these different formats, we do a much better job assessing risk with the former (historical exposure) than the latter (evolutionarily novel) format. They’re not psychologically identical, even though they’re mathematically identical; and the format makes a huge difference in usability (my gloss on his findings).
So, in this view, people are smart. It’s scientists who accidentally created a bias against good Bayesian statistical inferences by giving them information in the wrong format. And then drew the wrong conclusions from the results.
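For a concrete feel of the format effect, here is a minimal sketch using the kind of screening numbers common in this literature (illustrative values, not necessarily those in Gigerenzer and Hoffrage’s paper), computing the same Bayesian answer both ways:

```python
# Question: given a positive screening test, what is the probability of disease?
# The two formats below encode mathematically identical information.

# Probability format: 1% base rate, 80% sensitivity, 9.6% false-positive rate.
p_disease, p_pos_if_disease, p_pos_if_healthy = 0.01, 0.8, 0.096
posterior = (p_disease * p_pos_if_disease) / (
    p_disease * p_pos_if_disease + (1 - p_disease) * p_pos_if_healthy
)
print(round(posterior, 3))  # ≈ 0.078

# Natural frequency format: of 1,000 people, 10 have the disease; 8 of those 10
# test positive, and roughly 95 of the 990 healthy people also test positive.
print(round(8 / (8 + 95), 3))  # ≈ 0.078
```

The answer is the same either way; only the frequency version makes it (roughly 8 in 103) easy for most people to see at a glance, which is Gigerenzer and Hoffrage’s point.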
However, Mugg and Khalidi suggest Gigerenzer’s “bias bias” may itself exhibit the very bias with which it charges others (“Self-reflexive cognitive bias,” European Journal for Philosophy of Science, 11:88, 2021). They note that publication bias in favor of novel findings (p. 14) could influence Gigerenzer as well as the researchers he criticizes, with Christensen-Szalanski & Beach’s analysis (1984) of citation patterns showing that “research papers that claimed to find evidence for cognitive biases were cited significantly more often than ones that did not” (footnote 13).
Again, there’s enough bias to go around. Researchers may respond to incentives to publish on new biases, whether about biases themselves or otherwise. At the same time, as human beings, scientists are all subject to cognitive bias. And yet, as critical-reflective thinkers, we are still able to recognize bias in science, including in bias science. Just because we have epistemological limits doesn’t mean there’s a paradox in identifying that we have them. What would be paradoxical, rather, is if we had no limits at all, and no limits on our understanding of them.
So we might say that the bias bias is potentially subject to the phenomenon it describes, but that this self-reflexivity neither proves nor disproves its existence in a particular case. Nor does it constitute a paradox. It would be surprising if researchers, as one subgroup of humans, turned out to be exempt from being human.
The Inherence Heuristic
As above, the same structural conditions and cognitive biases can produce publication bias toward a bias literature (the bias bias), and an element of bias in noting that bias. The same goes for the inherence heuristic, “a basic cognitive tendency that leads people to explain patterns or correlations with reference to inherent (or intrinsic) features rather than extrinsic (i.e. relational or historical) features” (p. 11). This is also known as the Fundamental Attribution Error in psychology (footnote 8, citing Jones & Harris 1967; Ross 1977), which Mugg and Khalidi describe as “the common human tendency to explain aberrant behavior of others in terms of their flawed character while tending to explain one’s own aberrant behavior in terms of rationalizations.” (Tangentially, this is a flawed description of the FAE, which rather says that people tend to make dispositional attributions for others’ mistakes and situational ones for their own. Situational attributions may be valid in ways that calling them rationalizations denies, implying a “ground truth” in dispositional attributions that has not been established either in general or specifically by Mugg and Khalidi in this article.)
It’s possible that research on the inherence heuristic reflects the phenomenon it describes precisely because that phenomenon is real and widespread. Khalidi & Mugg (2014) “accused Cimpian and Salomon of having the same cognitive tendency that they claim to have uncovered” (p. 11) by appealing “to intrinsic properties of humans,” i.e., “[tending] to explain and make inferences using intrinsic features” (p. 12). “The fact that humans tend to rely on intrinsic properties has more to do with the world and our relation to it than it does with the intrinsic properties of human cognition” (p. 88).
Again, there’s enough bias to go around. We can decline artificiality and dichotomania in approaching ambiguous, uncertain evidence for the existence of the bias — here, the (intrinsic attributional) inherence heuristic. Doing so enables us to recognize that multi-causal explanations are likely, including for our explanation-generating processes. That recognition suggests a possible story in which it’s unlikely that either historical or non-historical causes alone drive complex event chains; but we don’t know whether one dominates (e.g., in a deterministic universe), because we don’t know where to cut the feedback continuum, which is probably also why we can’t establish whether we have free will.
In the same way, we probably can’t establish whether the inherence heuristic reflects a bias — making dispositional/intrinsic attributions where situational/historical ones are determining — or an accurate human tendency to make those attributions. The causal cookie crumbles where you cut it, and there’s no paradox here if we accept these epistemological limits. Accepting uncertainty again dissolves the appearance of a paradox.
Who Cares in Science?
Mugg and Khalidi’s conception of the self-reflexivity paradox as a strict one arguably reflects the problem of researchers’ ever-limited meta-cognition. Moreover, by introducing artificial constraints that exhibit cognitive bias, Mugg and Khalidi published a paper on a proposed new bias (self-reflexive cognitive bias), an outcome that resonates with Gigerenzer’s, Altman’s, and others’ point about perverse incentives shaping scientific publishing, including in precisely this kind of science (bias science). At the same time, the common cognitive biases they exhibit in these constraints resonate with Bacon’s, Kahneman’s, Greenland’s and others’ points about distortion shaping perception and cognition.
Were it not too cynical, I would propose a self-reflexivity paradox paradox: critical reflection on others’ attempts to critically reflect will itself be inexorably subject to the same sorts of problems (bias, error, structural incentives) shaping the antecedent on which one is critically reflecting. This paradox paradox is called being human, with the limited limits that implies. Under artificially strict conditions of only accepting what we have evidence to establish without the lingering possibility of bias, and being unable to establish anything to that standard, science cannot exist. Go home and bake cookies, everyone; the philosophers closed up the whole shop.
No one believes this, because it is silly. We can do a little better correcting past mistakes (our own and others’). That’s what science is supposed to do.
Who Cares in Society?
Probably only a small group of intellectuals cares if there is a bias bias, a self-reflexivity bias and/or attendant paradox, neither, or both. So what is the broader social and political significance of this debate?
In the end, as good epidemiologists have learned, the state of the world at large is about the best evidence we have on these matters, not the results of highly artificial experiments (that comment applies to medical science, where randomized-trial worship is spinning out of control). Experiments and other technical academic works like Gigerenzer's and mine offer only suggestions to improve narrow technical microbehavior. They do not begin to credibly address the vast problems of human macrosystems, where for example the very people who should be kept from power are precisely those who eventually take over those systems. Those people know how to exploit human flaws, and increasingly commit to their own fallacies and cognitive biases as their insulation and profit grows (even from decisions that harm their constituents) - for them, their accession to power is proof of their correctness and reason to ignore if not crush criticisms. Those are large-scale sociopolitical issues whether we confine ourselves to the realm of academia, scientific research, business, or local government, or we look at the national or international scene; in all cases we'll find large groups of peers, press, and followers will idolize and reward the worst of leaders, as is obvious in events leading up to wars. In academia it is much less obvious to outsiders due to the burial of arguments in technicalities and jargon, but just as real (if far less impactful) (Sander Greenland, email, Nov. 2, 2023).
Let’s turn, then, to somewhat less academic examples of science’s human problem, in which our limits shape our perceptions, leaving traces of human perspective in what we presumably intend to be objective science (an unattainable aspiration): bias in science and science communication on masks and abortion. These are both case studies in which the information environment is hyperpolarized, the establishment consensus favored by powerful social and political networks is strong, its evidence base and interpretation are typically flawed in the same ways much of science is flawed, and critics of these flaws themselves make mistakes that are also common in science and society. These flaws and mistakes matter, since we would really like to know the answers to the questions at issue: whether masks or respirators may prevent some Covid transmission, and whether abortion may causally contribute to substantial increases in mental health problems up to and including suicide.
These are existential, not philosophical, matters for society. There is no exit from being silly humans, but we can still do better than getting the answers to such questions identifiably wrong. The next installments of this series look at how.