“Cough First and Ask Questions Later?” — my post on bias in science and science communication on masks and respirators — has a mistake. Don’t worry: We still don’t know what we want to know about PPE efficacy during a pandemic.
The post used the term “selection bias” when I should have said “confounding.” On one hand, I’m assured “this is a narrow technical problem” (Sander Greenland, email, Dec. 22). As far as I can tell, it doesn’t affect my substantive arguments, here or elsewhere.
On the other hand, it’s important to get the terms right. In statistics, “selection bias” means “being selective about who is studied, as in randomized trials, rather than imbalances in who has been treated or exposed” (SG, Dec. 22).
Accordingly, three passages in my masks post require correction:
Instead of “… a lot of randomized trials are vulnerable to the very sorts of selection biases they’re designed to protect against,” it should say “a lot of randomized trials purchase their protection from confounding with vulnerability to selection biases”;
instead of “When we recognize that selection bias plagues observational and experimental evidence alike,” it should say “When we recognize that biases can plague observational and experimental evidence alike”; and
instead of “… we risk ignoring the very selection bias randomized trials are supposed to guard against,” it should say “we risk ignoring the very selection bias randomized trials cannot guard against due to their restrictions on participants” (SG, Dec. 22).
Guarding against confounding, not selection bias, is the potential promise of randomized trials. They don’t always fulfill that promise, as in the case of infant feeding trials that failed to account for maternal and infant health, breastfeeding problems, and accidental starvation, and so may have systematically biased their results in favor of apparent breastfeeding benefits — when the true story may be one of common and preventable harm from historically anomalous exclusive breastfeeding norms. Consider also a recent abortion reversal trial that failed to block on gestational age, and so may have systematically biased its results against finding an effect of progesterone (though such an effect remained possible according to its estimates). Still, guarding against such confounding is the point. In this sense, guarding against selection bias is the opposite of what well-designed randomized trials do.
This correction also applies to all these other posts. How did this happen?
“Selection” contains semantic ambiguity. “Self-selection” is one thing, “selection effects” are another, “selection bias” still another (term of art alert), and then there are “sample selection bias” and “endogenous selection bias.” Sometimes these terms get different usage in different fields.
Like “significance” and “statistical significance,” it seems, “selection” has a lot of uses, but “selection bias” is a statistics term of art that should be used as such. The bottom line:
In health-sciences methodology, randomized trials are not designed to protect against selection biases; in fact, as designed they aggravate those biases through intense exclusion criteria (such as being a woman of reproductive age) that limit their scope, in order to reduce the frequency of adverse events and consequent ethical-liability concerns. Rather, they are designed to protect against confounding, mainly in the form of uncontrolled imbalances between treatment groups on factors related to the outcome.
Nonexperimental studies are the opposite: They can be, and often are, not very selective about who is studied (as in modern database studies). Anyone who is eligible for treatment in the real world can be included, since there is no ethical-liability constraint on inclusion. The main concern is now confounding in the form of uncontrolled imbalances between treatment groups on factors related to the outcome (SG, Dec. 22).
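The contrast can be sketched numerically. Everything below is hypothetical (the strata shares, baseline risks, and effect sizes are made up purely to make the arithmetic visible): a treatment that truly helps everyone can look harmful in observational data because of confounding, while a randomized trial with an exclusion criterion gives an unbiased answer that applies only to its enrollees.

```python
# A minimal sketch with made-up numbers (all values hypothetical):
# two risk strata, a treatment that truly helps both, and exact
# expected-value arithmetic instead of simulation.

P_HIGH = 0.5                           # share of population that is high-risk
BASE = {"low": 0.10, "high": 0.50}     # untreated probability of a bad outcome
EFFECT = {"low": 0.05, "high": 0.20}   # true risk reduction from treatment

def outcome(risk, treated):
    """Probability of a bad outcome in a stratum, with or without treatment."""
    return BASE[risk] - (EFFECT[risk] if treated else 0.0)

strata = [("high", P_HIGH), ("low", 1 - P_HIGH)]

# What we would like to know: the population-average risk reduction.
true_effect = sum(w * EFFECT[r] for r, w in strata)          # 0.125

# Observational study: risk confounds treatment choice, because
# high-risk people seek out the treatment far more often.
p_treat = {"low": 0.2, "high": 0.8}
p_t = sum(w * p_treat[r] for r, w in strata)                 # P(treated) = 0.5
y_treated = sum(w * p_treat[r] * outcome(r, True)
                for r, w in strata) / p_t
y_untreated = sum(w * (1 - p_treat[r]) * outcome(r, False)
                  for r, w in strata) / (1 - p_t)
naive_benefit = y_untreated - y_treated                      # -0.07: looks harmful!

# Randomized trial that excludes the high-risk stratum: randomization
# removes the confounding, but the exclusion criterion means the unbiased
# estimate (0.05) describes the enrollees, not the population (0.125).
rct_estimate = outcome("low", False) - outcome("low", True)  # 0.05
```

In this toy setup the naive observational comparison flips the sign of the effect (confounding), while the restricted trial gets the sign right but estimates only the low-risk stratum's benefit, less than half the population-average effect (the selection problem described above).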
***
In the same masks post, I wondered if Naomi Oreskes herself subtitled her recent Scientific American essay “What Went Wrong with a Highly Publicized COVID Mask Analysis? The Cochrane Library, a trusted source of health information, misled the public by prioritizing rigor over reality” (Nov. 1, 2023).
Over email, she confirmed: “As you suspected, I did not put that title on it. What I was getting at, and I think you understood, was a critique of a particular, excessively narrow, conception of ‘rigor’ ” (Dec. 20).
She also criticized the Cochrane Library for defending the Jefferson et al. 2023 masks report by criticizing the media response to it. Recall that Cochrane Library Editor-in-Chief Karla Soares-Weiser wrote in a statement responding to that media coverage:
Many commentators have claimed that a recently-updated Cochrane Review shows that ‘masks don’t work,’ which is an inaccurate and misleading interpretation… Given the limitations in the primary evidence, the review is not able to address the question of whether mask-wearing itself reduces people’s risk of contracting or spreading respiratory viruses.
But Oreskes charges:
… I think the Cochranes have been disingenuous in their defense of the report, because it was the report’s first author who said flatly, in more than one interview, that “masks don’t work.”
So they can’t just blame this on media oversimplification.
We might blame it on cognitive bias, particularly denial of uncertainty and ambiguity, and interpreting evidence to fit a preferred story (on both sides), while pretending not to see, or simply not seeing, that that’s what’s going on. But is it really dichotomania to translate scientific evidence’s practical implications into binary public-policy terms, when we sometimes do need to make binary decisions about policy regimes (like mask mandates or not)? Might the stakes invoke the precautionary principle in spite of uncertainty, such that the translation from ambiguous science to binary public policy does not imply a cognitive error?