Determinism as Uncertainty Aversion
A happy, hopeful post about bias in consensus science and combating overconfidence
In technical terms, the consensus scientific view is that we’re screwed. That is, structure beats agent like rock beats scissors. And structure goes all the way down to the brain. In a deterministic universe, free will cannot exist. Proponents of this dominant position include Stanford biology and neurology professor Robert Sapolsky and physicist and YouTube sensation Sabine Hossenfelder. The social and political implications of this position include mental health and criminal justice reform; in Sapolsky’s telling, it fits with left-liberal narratives about softening institutional responses to deviants. (As always, methodological criticism — what I’m up to here — is orthogonal to this sort of policy debate.)
But hey, you might say. I’m a person. I have free will. I know, because I have to make difficult choices all the time. So this can’t be right!
That we experience free will in our daily lives is of no consequence in this view. Cognitive biases pervade the human experience. Optical illusions in the physical world parallel cognitive-emotional illusions like those of competence among the incompetent in the Dunning-Kruger effect.
Maybe free will matters as an ideology in ways that can be formally modeled in social systems, whether it exists or not. Maybe belief in free will matters as an individual psychosocial trait in political terms; here it contrasts with beliefs in fatalism and genetic determinism — which Costello et al. find to be authoritarian correlates, even controlling for political conservatism. Maybe free will matters in psychological well-being, but its import is likely to be complex rather than linear, given what we know about beliefs in a just world, system justification theory, and things like that. Logically, you get four cells if you divide people into agentic (free will)/deterministic (no free will) and karmic (just world)/fatalist (no just world) believers.
My prior would be to expect the agentic/fatalist combination to be the most helpful for people — at least people who’ve experienced adversity, which is most of us? — psychologically. But maybe fatalism is so depressing that it nets out a positive effect of belief in agency. Here’s the confusion matrix (assuming an adversity condition) that I would like filled out with data that I can’t easily find:
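| | Karmic (just world) | Fatalist (no just world) |
| --- | --- | --- |
| Agentic (free will) | ? | ? |
| Deterministic (no free will) | ? | ? |

(The question marks stand in for the well-being data I can’t find.)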
Regardless of the answers to these sorts of empirical questions, the scientific consensus that determinism precludes free will is a tautological interpretation of ambiguous, uncertain existing evidence on the nature of consciousness. That many scientists believe in determinism (premise) doesn’t prove that it governs the brain (conclusion). This post highlights two alternative readings of this evidence — free will exists (also wrong), and we don’t know (correct). It suggests the mistaken consensus illustrates uncertainty aversion distorting scientific discourse.
That seems to be the more charitable explanation for widespread, misguided denial of the possibility of free will — a denial that also supports powerful social and political networks’ preferred narratives about issues such as restructuring society to address climate change. A less rosy explanation is that staving off existential uncertainties is what authoritarian ideologies have always done for their adherents (Costello et al. cite, e.g., Adorno et al., 1950; Fromm, 1941; Kay et al., 2009; Womick, Ward, Heintzelman, Woody, & King, 2019). “We never make mistakes” if we never make choices in the first place. Authoritarian ideologies provide the comfort of determinism, and literal determinism isn’t special. It’s just one case of authoritarian ideology (a non-partisan problem) at work in scientific discourse (as elsewhere).
There is another side to this story. Determinism reflects uncertainty aversion, has psychosocial correlates and consequences, and is insufficiently supported by available evidence. But belief in free will does all that, too. Andrew Huberman’s latest podcast with psychiatrist Paul Conti offers a good example, with Conti saying more than once that it’s “mathematical” that following certain steps to increase your “agency” will improve your mental health.
This is dangerous bullshit. We don’t know, and scientists and doctors shouldn’t misrepresent uncertain evidence as certain, particularly to vulnerable people. This could even backfire, like other positive psychology interventions sometimes do. For instance, what if you’ve been through hell, and your healing comes from turning down the self-blame and recognizing that you were actually really powerless a lot of the time? Turning up the belief in agency wouldn’t help and might harm here. Probably there are few one-size-fits-all solutions for mental health; the universal ones are just about making sure basic animal needs are met so the machine can run well (nutrition, sleep, exercise, touch).
More generally, psychiatry, like the rest of medicine and science in general, is a hot mess. We don’t have anything approaching this kind of certainty. We don’t even know that free will exists in the first place (or under what conditions) — much less whether it helps us to believe it sometimes, but not other times, and when/why/how.
In any event, Huberman-Conti reflect the popular consensus, Sapolsky-Hossenfelder reflect the scientific one, and both narratives are wrong. Here I focus on the wrong scientific consensus. It’s more interesting because it’s a mistake where there’s supposed to be a higher bar for accuracy (science), yet human error still pervades. The error stems from uncertainty aversion reflecting overconfidence, a general theme in areas where critical science suggests we can do better (though we can’t do perfect).
Wait, What’s the (Scientific) Question?
Most scientists say, in a deterministic universe, free will doesn’t exist. Their argument is tautological: We know the universe is deterministic, this implies everything is determined, and thus there is no free will.
But we don’t really know that the universe is entirely deterministic in a strict sense. And you can’t use an unproven premise to establish a conclusion. You can say “IF x, THEN y”: If the universe is deterministic, then there’s no free will. That’s as far as this argument can go. You can spin out implications of this possible universe, but it’s just one possible universe.
Meanwhile, other scientists, like Physics Nobel Laureate Roger Penrose, point out that it doesn’t appear that the universe is deterministic after all. At least, not algorithmically deterministic. Consciousness could entail a quantum process instead. You know, Heisenberg’s uncertainty principle — you can measure a particle’s position or its momentum precisely, but not both at once. A related puzzle is captured by the thought experiment of Schrödinger’s cat — the cat who’s, at one point, simultaneously alive and dead, illustrating quantum superposition. So much for determinism; on some interpretations, many universes may exist, and we don’t know for how long, or if/when the different branches collapse into one reality.
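For the record, the textbook statement of the principle (standard quantum mechanics, not specific to Penrose’s argument) is that the product of the two uncertainties has a floor:

$$\Delta x \, \Delta p \ge \frac{\hbar}{2}$$

where $\Delta x$ is the uncertainty in position, $\Delta p$ the uncertainty in momentum, and $\hbar$ the reduced Planck constant. Shrink one, and the other must grow.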
It doesn’t matter who is right empirically for the fact of this debate to prove my point. Science doesn’t say we live in a deterministic universe; most scientists do. Science also doesn’t say we live in a world in which Heisenberg’s uncertainty principle carries over to the brain, enabling consciousness through quantum mechanics. A minority of scientists say that. The fact that this is debated illustrates the usual point about science as a social entity in which people observe, analyze, and interpret data. (And even if it weren’t debated, it would still be debatable, and the point would hold.) So there is a meta-mystery about whether or not there is a mystery in consciousness. That’s a sufficient condition for saying free will may exist.
There is a debate about whether free will exists; so it’s unscientific to say that the matter is settled either way. This goes as much for pop-culture gurus spouting off about the failsafe healing power of increasing your agency (assuming it exists) as for revered scientific figures claiming that we don’t have free will (and so we should be gentler on crime). But it adds a layer of inconsistency when scientists (dedicated to critical thinking) make such mistakes. It’s particularly inconsistent for Hossenfelder, of (we can’t just) “follow the science” fame, to forget that the evidence requires interpretation (exegesis) when it comes to agency versus determinism. In that interpretation, we can acknowledge or deny pervasive ambiguity and uncertainty; and whatever the interpretation, its implications are political (about power).
Why the mistake?
Overconfidence
Saying we don’t know is harder than assuming that we do, cognitively and emotionally. But this specific instance of uncertainty aversion also illustrates a particular “idol of the tribe,” in Sir Francis Bacon’s terms. In his Novum Organum (1620), “a founding work of modern science” (Greenland), Bacon discusses cognitive problems for which we still don’t have good solutions. These include looking for patterns and simplicity where there is unpredictability and complexity. The latter understanding of physical reality (unpredictability, complexity) arguably replaced the former (patterns, simplicity) in the 20th century, as quantum mechanics, chaos theory, and relativity challenged previous, Newtonian assumptions about the nature of the universe.
It means something to me, idolatrous young(ish) scientist that I am, that I am increasingly seeing not just misuse of statistical significance testing every day in PubMed — but also mistakes by senior scientists I have idolized. Using personage as a heuristic for information value is another idol of the tribe. Doing it less is progress in my own scientific education. I still admire and respect these people; they’re just (fallible) people.
So Sapolsky is wrong about free will, Gøtzsche is wrong about the evidence on CBT for suicidality, and both venerable senior scientists’ mistakes result from uncertainty aversion that responds to ambiguous evidence with overconfident conclusions, replacing messy reality with dichotomous certainty. That’s the general theme of human error, going back at least to the structure of classical Greek tragedies: arete, hubris, ate, nemesis (virtue, arrogance, fatal mistake, divine punishment). Pride before the fall (Proverbs).
This is exciting. Not that human stupidity is pervasive and inescapable. But that it may be such a useful theme in doing better science. Imagine you didn’t have to get the right answers on a math test to get credit; you just had to say what a lot of other people did wrong, and how to do better. That’s the state of science. (Or is it? Maybe that’s too low a bar. But it’s still a lot of work, and it seems like someone should be doing it in science in the public interest.)
Three Projects Countering Overconfidence
Overconfidence. Per, e.g., Sander Greenland, Andrew Gelman, and Richard “the word ‘true’ should set off alarm bells” McElreath, it’s a common source of error in science policy, science communication, and research. Maybe even the common source of many different subtypes of cognitive distortions generating methodological errors.
This meta-scientific criticism links three projects I’ve been talking about:
In the realm of science policy, I’ve been thinking about polygraph programs for a long time. Then, I started thinking they’re one case of mass screenings for low-prevalence problems — a program structure that comes up across security, medicine, information, and other realms. These programs are often doomed to fail according to mathematical laws that are not widely understood (chiefly the base-rate problem: when the condition screened for is rare, even an accurate test flags mostly false positives). And perverse incentives play a role in overconfident experts selling anxious societies on the idea of “mathematically” guaranteed (Conti throwback) advances in safety and health.
Now, I’m thinking maybe the structure is common also to interventions (e.g., vaccination). I’m not sure what would distinguish screenings from interventions, but maybe something does; or maybe only preventive interventions are like screenings.
It would be cool if one thing that ultimately resulted from this was a website where you could plug in varying program parameters and see a simulation of the results in terms of accuracy and error. I affectionately refer to this as mass mass surveillance surveillance (or (MS)²). That’s also the name of a safety and monitoring subcorridor in many a ministry that should exist, but doesn’t.
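To make the math concrete, here is a minimal sketch of the kind of calculation such a site might run. All parameters below (population size, prevalence, sensitivity, specificity) are hypothetical illustrations in the spirit of a polygraph-style employee screening, not estimates of any real program:

```python
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Expected confusion-matrix counts for a mass screening program."""
    cases = population * prevalence           # people who actually have the problem
    non_cases = population - cases
    true_positives = cases * sensitivity      # correctly flagged
    false_negatives = cases - true_positives  # missed
    true_negatives = non_cases * specificity  # correctly cleared
    false_positives = non_cases - true_negatives  # wrongly flagged
    # Positive predictive value: P(actual case | flagged)
    ppv = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, false_negatives, true_negatives, ppv

# Hypothetical program: 10,000 employees, 10 real spies (0.1% prevalence),
# and a test that is 90% sensitive and 90% specific.
tp, fp, fn, tn, ppv = screening_outcomes(10_000, 0.001, 0.90, 0.90)
print(f"Flagged: {tp + fp:.0f}, of whom truly guilty: {tp:.0f}")  # ~1008, ~9
print(f"PPV: {ppv:.1%}")  # ~0.9%: almost everyone flagged is innocent
```

Even with a test that is right 90% of the time in both directions, about 99% of the people it flags are innocent, simply because innocents vastly outnumber the guilty at low prevalence. That is the mathematics that is not widely understood.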
In science communication, helping people make more informed choices has tended to focus on specific facets of risk in particular cases. This focus neglects the fact that the science on which such understandings of risk are based is itself often ambiguous, uncertain, and fraught with bias and error. So I think we need something like a decision-making map, rather than a risk calculator, to help people make better decisions amid uncertainty and misinformation.
And in research, overconfidence and power can distort the agenda itself. People in open societies should have a way of responding to these distortions. When exclusive breastfeeding proponents, for instance, assume there’s no preventable harm from what is currently standard early infant feeding practice, such preventable harm can result from federally funded medical research. And it did; see documents from Flaherman et al.’s ELF-TLC. Better science would have listened better to mothers with insufficient milk and their starving babies, in order to prevent that harm. Better early infant feeding research is needed to change dangerously misguided current norms, including in medical practice and scientific research.
Another example: When medical researchers assumed that nicotine was harmful, or that ambiguous evidence proved a lack of Covid benefit despite suggestive evidence to the contrary, inadequate research resulted. We don’t know what we should know about nicotine’s possible harm prevention and treatment effects for Covid.
Maybe a platform that helps people conduct surveys and randomized experiments could bolster the ability of citizen scientists to set the research agenda, work towards answering open empirical questions, and create competitive pressure for scientific institutions to cut the administrivia and get with the open science program.
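To gesture at one primitive such a platform would need, here is a minimal, hypothetical sketch of random assignment plus the simplest possible effect estimate (the function names and respondent data are invented for illustration; no real platform’s API is implied):

```python
import random
import statistics

def randomize(participants, seed=42):
    """Randomly split participants into treatment and control arms."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    shuffled = participants[:]  # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def difference_in_means(treatment_outcomes, control_outcomes):
    """Naive (unadjusted) estimate of the average treatment effect."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical citizen-science experiment with 200 survey respondents.
participants = [f"respondent_{i}" for i in range(200)]
treatment, control = randomize(participants)
# ...deliver the intervention to `treatment` only, then collect outcomes...
# effect = difference_in_means(treatment_outcomes, control_outcomes)
```

Randomization is the core that turns a survey into an experiment; consent, preregistration, and honest uncertainty estimates would have to be layered on top.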
I don’t want to build these three tools. I just want to use them to do science. But probably I would have to convince other people to help me build them, first.
On a personal note, it would be hilarious if this turned out to be my thing. I am so often full of doubt and self-remonstration. Not that I don’t also make overconfidence mistakes and gain perspective on them only later, too. I do! I think of related corrections faster than I can blog them. Thus, the great joke here.
Did you hear the one about the insecure overconfidence researcher? She was just jealous, probably. But you’re still wrong.