Death by a Thousand Cuts, Part 4
Envisioning critical science communication and the rogue methodologist ninjas
So I’ve been thinking about how bad science links diverse crises that divide and conquer activists. For instance, much of modern medical science spins myths about the safety of common practices that may actually cause substantial, fully preventable, long-term harm to mothers and their children. But modern societies have grown so complex, so quickly, that when you situate these problems in their larger contexts, the incentives all point toward focusing narrowly on problems we might actually solve, leaving too few resources to focus broadly on the mother of all collective action problems: spin science. It seems that we have to choose, and in choosing solvable issues to prioritize (typically on the basis of individual interests), we doom bigger pictures to remain unresolved — a tragedy of the commons of sorts.
What are some possible solutions that deal with these bigger pictures? This post is a very rough cut at answering this huge question, before moving on to another topic I’ve been working on lately in the women’s health space.
Methodologists often point to the need for more and better science education, particularly for scientists. I cast a slightly jaundiced eye on this approach. If the problem is that experts are doing it wrong, then going to experts to convince them to change their ways is a long game you could try as an authority in the field. That’s what you see in statistics reform, and it’s great! But what about us lowly activists who want to help regular people live better lives right now by making better choices based on fuller information stemming from better science? We’re positioned differently, and have different goals.
It’s worth noting that the education argument here is part of a larger conversation about the degradation of education. Generalists have been devalued for decades. Generalist educations, too. How many times have you heard the joke: “What did the liberal arts major say?” “Do you want fries with that?” As a liberal arts graduate in a family of mostly practical people doing sensible things of which I was constitutionally incapable, I’ve heard it a lot. There’s a real lack of understanding among otherwise congenial and intelligent people that learning to think critically in general actually produces a very valuable skill set. It’s just that the monied forces of the market don’t necessarily promote critical thinking, because it tends to challenge the status quo. There is signal in that money thing, but there is noise. There is a reason public service jobs pay less, and the reason is not that the work is less valuable.
An anecdote: Among survey researchers in the U.S., it’s well-known that the Census data get notably better when the economy gets notably worse. The reason is that the Census gets taken then by more temporary workers with graduate degrees. There is not enough money in public interest research to otherwise draw the best workers to the job. But when more qualified people get more desperate and do it anyway, society benefits from the better data that result.
So what can society do to improve science in the public interest? Normal internal scientific correction mechanisms are badly malfunctioning, as demonstrated by the near-impossibility of getting wrong things retracted from the publication record. Bottom-up solutions evoke the specter of special-interest capture (think Koch funding for the Tea Party; think Putin signs at anti-lockdown protests). Better top-down solutions are fantasies…
One of my favorite such fantasies, apparently shared among some other statistics reform groupies, involves a public interest science institute headed by the likes of Sander Greenland and Frank Harrell in the U.S. But in reality, as usual, the people who should have this power wouldn’t want it — and the people who would want it, shouldn’t have it. In practice, top-down solutions are at least as vulnerable to corruption as bottom-up ones, probably more so, and neither gets us out of the problem of no neutrality. There is no solving the problem of science being done by scientists, just as there is no neutral selection of what issue agenda to prioritize in improving science (education, research, communication, and policy).
Game theory can sometimes help identify better strategies for approaching intractable problems by abstracting key components and comparing outcomes. It seems to me that the lesson of game theory is, be the weirdo — the sole defector, the lone wolf terrorist, the free rider in a Scandinavian paradise. At least, that's often the position that “wins,” if you want to be crude about it.
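The free-rider logic here can be made concrete with a toy public goods game (a standard game-theory illustration, not anything from this post; the four-player setup and the 1.6 multiplier are arbitrary assumptions for the sketch). Everyone who cooperates pays in; the pot is multiplied and split evenly; the sole defector collects a share of the pot while keeping their own contribution:

```python
def payoffs(contributions, multiplier=1.6):
    """Toy public goods game.

    Each player either contributes 1 (cooperate) or 0 (defect).
    The pot is multiplied and split equally among ALL players,
    so a defector shares the benefits without paying the cost.
    """
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # Each player's payoff: their equal share of the pot,
    # plus whatever they kept by not contributing.
    return [share + (1 - c) for c in contributions]

# Three cooperators and one free rider:
mixed = payoffs([1, 1, 1, 0])   # cooperators get 1.2 each, defector gets 2.2

# Everyone cooperates:
full = payoffs([1, 1, 1, 1])    # everyone gets 1.6

# The lone defector out-earns even universal cooperation,
# which is exactly the "be the weirdo" payoff structure.
```

In this sketch the sole defector’s payoff (2.2) beats both the cooperators standing next to them (1.2) and the all-cooperate outcome (1.6) — the crude sense in which the weirdo “wins.”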
In science policy, that logic may play out in cases of decentralized networks of relatively autonomous agents effecting public interest change. The German CCC’s electronic voting machine activism springs to mind. In 2009, Germany’s Federal Constitutional Court banned the use of voting computers because they render opaque something that must be transparent and democratically accountable — the vote count in a democratic society. (The problem came back, as problems do.)
So if this is the most effective strategy and spin science is the mother of all collective action problems in modern societies, then where are the rest of the groups of hacker/activists, rogue methodologists, and science communicators working against bias and corruption for no pay in the public interest?
Most people are just too busy putting out fires to do this work. It doesn’t pay, or doesn’t pay enough. It also still operates mostly at the level of particular instantiations of the problem, instead of addressing contributing factors. I’m not sure there’s any other way to do it, practically speaking.
Sure, one could envision a Larry Lessig-style single-issue party promoting science reform first… The party of the biggest big tent! Prioritizing evidence-based evidence! It all makes sense! But when I try to daydream about that, the sequence invariably ends in people who are actually good at politics laughing at me. This is probably for a reason.
So I prefer to imagine the rogue methodologist ninjas watching. Lying in wait with samurai swords to leap from the banisters, slashing Table 2 Fallacies and lighting bad causal logic on fire, before leaving as quickly and quietly as they came in.
In reality, as a wise woman once told me, no one likes to be criticized. So no one is feeding the ninjas, at least not for their ninja duties. And the right audience for science reform isn’t necessarily the experts who are doing it wrong. It’s the people they’re hurting.
That means reformers can’t just be methodologists, setting issue-area specialists straight from above the practical fray. At least some of us in this ecosystem have to be science communicators, but on a re-envisioned model where that doesn’t mean parroting spin science. Rather, it has a methodological-criticism component, a big-picture-rethinking component, an anti-corruption component… It means a lot of things other than (only) plugging an information hole. This looks like a much harder job than science communication on the corporate spokesperson model.
There are some amazing science communicators doing it anyway. Tom Chivers does this with science and statistics. Amy Alkon does it, too, often with a focus on evolutionary and social psychology. I’m sure there are other good models in this space. Feel free to drop me a line if you know some more, or suggest them in the comments.
Returning to the big picture I’ve been thinking about, this isn’t necessarily a working solution. Possibly there isn’t one. This just looks like part of the best we can probably do.