We Didn't Start the Bias (The Politics of Evaluating Mass Screenings)
Generalists, guilds, and the danger of doing good science
The scruffy bear is lost. That he was one of many beloved bears is small consolation to both me and the little boy who loved him more than he loved any other bear, and who loves nothing more than bears. We say he’s on an adventure, probably in America already. If my son can face this sad truth without a tear — solid soul — then surely I can admit when my own lovie looks to be lost.
Maybe I’m just tired. Maybe I’m getting old. But I’m having a moment of uncertainty. Does it even make sense to try to do good science, much less good science communication? Every line of inquiry that might matter for an important outcome or value — to save lives, for instance, in national security or health contexts — leads to big money, nasty politics, and entrenched interests. And the people who are supposed to care the most about getting it right can’t be bothered to critically read the primary source literature themselves. No one has time, so everyone is delegating trust. Playing that telephone game. Playing with lives. Playing with fire.
As Billy Joel put it:
We didn’t start the fire.
It was always burning
Since the world’s been turning.
The point is the Allegro — 145 beats per minute, lyrics firing off as rapidly as headlines. Information overload. History happened too fast for the younger generation to get up to speed on all the backstory before joining in. And it will keep happening after we die. No one will ever get all caught up. So shortcuts and mistakes will rule.
So it is also with science, and the evidence underpinning the many interventions and norms of our complex modern societies.
My brain churns on slowly but incessantly, as is its wont, turning the wheel from vaccines — big money, nasty politics, no shortage of stupidity all around; so maybe don’t make a name for myself explaining how bias and error distort the science? — to other programs of a similar but distinct structure. Vaccines are mass preventive interventions for low-prevalence problems. I’ve spent years (many) thinking about mass screenings for low-prevalence problems. The binary intervention and outcome categorizations make for nice mathematical and logical features for structural thinking. It almost seems like a person could learn something here.
And I thought, maybe, I finally had: while typical efficacy analyses promote or degrade programs of this structure for allegedly succeeding or failing at their objectives, none seem to consider all possible causal pathways — classification, strategic behavior, information effects, and resource (re)allocation. Most just run some numbers along the test classification pathway and call it a day.
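That classification-pathway arithmetic is easy to make concrete. Here is a minimal sketch (in Python; the function name and every number are illustrative assumptions, not figures from any real screening program) of why low prevalence dominates the math: even a fairly accurate test yields mostly false positives when the problem is rare.

```python
# Hypothetical "test classification pathway" math: Bayes' rule applied to
# an imagined mass screening for a low-prevalence problem. All numbers
# below are illustrative assumptions, not real program statistics.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screen reflects a true case."""
    true_pos = prevalence * sensitivity            # cases correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # non-cases flagged anyway
    return true_pos / (true_pos + false_pos)

# A seemingly accurate test at a 1-in-1000 base rate:
# 90% sensitivity, 95% specificity.
ppv = positive_predictive_value(0.001, 0.90, 0.95)
print(f"PPV: {ppv:.1%}")  # roughly 1.8%: most positives are false alarms
```

This is just Bayes' rule applied to the confusion matrix, the narrow slice of the causal picture where most evaluations stop.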
This is bad logic or, to put it more politely, obsolete methodology. It is also itself an epistemic intervention, one that presumes these programs are more sterile, more scientific than they actually are — ignoring the human elements that may actually make them work (or subvert them). Basically, I’m saying we need a behavioral economics of mass screenings.
But just as scientists can’t exit our sociopolitical context to do neutral science, free of bias and error (though we might try), so too we can’t help being helpless creatures thrown together in the teeming sociopolitical chaos.
To return to the sad story of the week: Lost bears are never, in my humbling experience, recovered. There are too many people passing by who will never be back again, didn’t mean anything by taking the (very) old thing that might have been put out for trash or Zu Verschenken, and mightn’t even bother returning him if they realized what they had done. The bear is such a huge thing to the boy, but such a small thing to the world. It doesn’t mean to be so stupid and so evil as to keep his scruffy bear lost. It just can’t help it.
Evaluating net effects requires a generalist lens and implies crossing guild lines
So anyone who tries to evaluate the net effects of mass screenings for low-prevalence problems is going to face two related sociopolitical problems: (1) the analysis requires a devalued generalist toolkit, and (2) merely asking the questions necessary to assess the resource (re)allocation pathway in particular, and the net effects question in general, provokes guild interests. These two sociopolitical problems roughly correspond to the world being so stupid as to keep the beloved scruffy bear lost — and so evil.
With the rise of neoliberalism in the late 20th and early 21st centuries, general education has become increasingly devalued both popularly (think American anti-intellectualism) and by elites (think cuts to publicly funded higher education and increasing privatization in the U.S. and its satellites, including the U.K. and the Netherlands). As a consequence, it’s become harder to justify spending the time and money to learn broadly enough to think critically. Because who’s going to pay for that? But we need multiple methods and real time to really think.
Relatedly, just asking whether entrenched programs actually do what they claim to do provokes defensiveness among their proponents. Asking what else could be done with the same funding, in turn, provokes defensiveness and also invites the political maneuvering of professionals and associated, well-organized groups whose paychecks depend on securing ever more resources.
Policemen, I am told, can never admit that more crime tips, from mass surveillance or otherwise, would undermine their ability to follow up appropriately on more specifically obtained information. They can’t argue that they aren’t Superman and won’t do all the criminal justice they can with all the information we give them — only that they’ll need more resources to do more law enforcement, the more tips they’ve got to go on.
Same for doctors. Sure, the same money spent on mammography might save millions more women if it were spent on preventive research instead. But it’s not politically possible to cut such programs, whether or not the funds would be earmarked for probably more effective interventions. Nor even to appear to consider cutting them, however much more cost-efficient it might be to prevent cancer another way.
What was I thinking?
Somewhere along this path, I’ve come to (belatedly) wonder whether anyone really wants good science done on anything that matters. Or, more to the point, whether the guild interests threatened by even inching toward a proper net effects estimate wouldn’t usually, if not always, outweigh the active public interest in having the math done right. In political science, special interests are well understood to be better organized and more attentive than diffuse public ones.
In other words, special interests shape what science it’s possible or generally advisable to do. People with more sociopolitical wits than I factor that into what they do and how they do it. I just walk around looking for long-lost bears and hoping that maybe there is so much stupidity and so much evil in the world, and the chaos is so unpredictable and irrational that it will be possible for someone else at least to make headway in the public interest sometimes.
But realizing that there is too much happening, too fast for anyone to stop for the bear. To put him up somewhere high, nearby, to be found by his rightful owner.
Similarly, there is too much primary source material to read it all yourself, much less actually stop to think about it. There are too many other errands to run and interests to balance. Too many incentives are misaligned. And so the world is moving too fast and too defensively, and too many people are too stupid, too evil, or both, for it to be possible to solve the mother of all collective problems — spin science.
Altman declared “We need less research, better research, and research done for the right reasons.” He was especially concerned about perverse incentives driving waste, fraud, and abuse in health sciences.
Billy Joel said “We didn’t light it, but we tried to fight it.” He was describing how information overload prefigures intergenerational knowledge gaps.
I say, go get a new bear. No one really wants good science or science communication. They want their biases confirmed and interests served. I hear there are jobs in IT.