Yesterday, I had to go to the rheumatologist. Mine had moved, and I needed a new one to prescribe my medicine so I can move, think, and not die. It worked out well — nice doctor, good phlebotomist, perfectly timed trams. But I still cried when it was over, because there had been a chance an authority would interpret the evidence differently, and then I would have had to scramble to plug a medication gap so I didn’t get sick. The precarity of having to seek out fresh scrutiny to meet a stable, vital need meant I had to let them take as much blood as they wanted (“drink a lot of water to get it back”) in exchange for the prescription, possibly with billing considerations, in the weirdly existential yet mundane quid pro quo that is modern medicine. But the encounter was also weirdly relieving, because it was the first time a doctor had used the word “lupus” referring to my condition while saying what he wanted to check and why, even though — to my mind — it should have been diagnosed 20+ years ago, and nothing changed on paper.
Known in rheumatology textbooks as “the Great Pretender,” and TV-famous as the difficult diagnosis par excellence, lupus has a special place in the disability discourse because it generated “spoon theory” — Christine Miserandino’s viral response to the refrain “but you don’t look sick.” People with lupus and many other potentially disabling conditions often don’t look sick. And while that may sound like a Very Good Thing (TM), it can also make it hard to communicate with people who don’t get it when you are sick. As I described in the Springer book chapter “Shame, Name, Give Up the Game? Three Approaches to Uncertainty” in Diagnoses Without Names: Challenges for Medical Care, Research, and Policy, ed. Michael D. Lockshin, Mary K. Crow, and Medha Barbhaiya, most lupus patients have struggled with inadequate medical help in this context, with dismissal before diagnosis the norm rather than the exception.
While lupus disproportionately affects women and Black people, some reasons for its usual diagnostic difficulty are a matter of banal, administrative evil: Insurers apply rules to decide reimbursements. This drives doctors to put patients in defensible boxes. Patients can’t prove subjective lupus complaints like pain and fatigue, so it’s hard to build defensible boxes for them. In this way, insurers incentivize doctors to minimize false positives, creating more false negatives in the eternal accuracy-error trade-off.
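That trade-off is mechanical, not a matter of opinion. A toy sketch with made-up “evidence scores” (all numbers here are hypothetical, chosen only to illustrate the mechanism) shows how raising the diagnostic bar converts false positives into false negatives:

```python
# Toy illustration of the accuracy-error trade-off in screening:
# raising the evidence bar for a diagnosis cuts false positives
# but raises false negatives. All scores are made-up numbers.

def classify(scores, threshold):
    """Diagnose (1) anyone whose evidence score clears the threshold."""
    return [1 if s >= threshold else 0 for s in scores]

def error_counts(scores, truth, threshold):
    """Return (false positives, false negatives) at a given threshold."""
    preds = classify(scores, threshold)
    fp = sum(1 for p, t in zip(preds, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(preds, truth) if p == 0 and t == 1)
    return fp, fn

# Hypothetical patients: evidence score, true disease status (1 = sick).
scores = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.3, 0.55, 0.65]
truth  = [0,   0,   1,   1,   0,   1,   1,   0,   1,    0]

lenient = error_counts(scores, truth, threshold=0.45)  # low bar: (2, 0)
strict  = error_counts(scores, truth, threshold=0.75)  # high bar: (0, 3)
```

With the strict bar, no healthy patient is misdiagnosed, but three sick patients go undetected; with the lenient bar, the reverse. An insurer who only penalizes false positives is effectively choosing the strict bar for everyone.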
Similar incentives drive rheumatology researchers, too, to minimize false positives, running randomized controlled trials (RCTs) often only with the sickest subset of the most strictly diagnosed subset of a much larger group of patients with potentially disabling lupus-like disease. Drug companies want to get their money’s worth, so they try to structure trials to produce big effect sizes. This might help them sell the most desperately needed, new medications at the highest possible premiums.
This emphasis on minimizing false positives at the risk of letting disease processes rampage people’s bodies and lives is misguided. It can undermine doctors’ imperative to prevent harm, with lupus diagnosis often made on the basis of damage that the usual low-risk treatments (hydroxychloroquine and low-dose steroids) may prevent when given at an earlier stage. It also drives selection bias that may undermine research accuracy, as many of the same symptoms characterize lupus’s milder cousin on the same spectrum, undifferentiated connective tissue disorder (UCTD). The difference between the diagnoses can simply be whether patients accessed the right care at the right time — while sick, and often after repeated dismissal of their complaints. The usual treatments are the same. People often know lupus as the canonical difficult diagnosis. Nobody knows UCTD outside of rheumatology (it’s not even in some otherwise good medical AIs). And it doesn’t much matter which side of the line you wind up on, as long as you get your meds.
But you still have to get them. If you just report that you had the distinctive malar rash in the past, it doesn’t count. Because nothing says “first do no harm” like “my professional norms require that I don’t believe you.”
Portrait of the artist as a young woman not looking sick.
Do High-Uncertainty Case Study Features Carry to Low-Uncertainty Contexts?
It may sound like I have cumulatively spent too long in waiting rooms for too little benefit and am letting off steam, which may well be true. But the mechanics of this carry: the structure of this case study seems, in some ways, to recur across many others. Consider its spine through three lenses.
(1) Cognitive science: The uncertain nature of many diagnoses and treatment options likely triggers cognitive dissonance in well-meaning but time-pressured physicians who may hit their limits and blame patients for it. (2) Economics: Short-term incentives for individual healthcare providers to save time and money may incur long-term costs — a collective action problem between healthcare providers on one level, and insurers and society on another. (3) Philosophy of science: The truth is not self-evident, stories can’t tell themselves, and everything remotely human requires interpretation that can’t escape the interpreter’s perspective, miring us in layers of bias and error from which we can never fully escape.
That the mechanics of this seem to carry implies important questions about how information asymmetries vary in different screening contexts. For instance, how do knowledge asymmetries among agent levels vary with regard to false positives in different cases or across different subject-matter contexts, and who cares? That is, in some screenings (e.g., most of security), people themselves know if they were correctly classified — but authorities don’t. In others, people themselves don’t know (e.g., many medical cases), and authorities have a better idea sometimes — but they often don’t know, either. And in still others, people may have some idea or opinion, but not know whether they were correctly classified or not. Is there any way to systematically help people approach screenings with better information to minimize possible harms from false positives and false negatives alike, or is the world just too complex and heterogeneous for that to be possible? I leave these questions for a future post.
This is a post about whether, sometimes, the evidence can speak for itself, after all. It looks at two opposing viewpoints on this question, and argues that maybe they are not necessarily opposed. Maybe they are working on different levels in different discourses, and ironically mean the same thing in practical terms: Please do better science. A second case study of focus is the abortion reversal one from the last post; its relatively unambiguous, certain outcome forms a useful contrast with the lupus case study.
Viewpoint 1: The Truth Will Set You Free
Call it Truth Talks (ToT). This is the dominant viewpoint in science and society. Most people, most of the time, think, speak, and live as if reality were somehow evidence-based.
This is clearly a useful fiction. Not to disparage all people who espouse this view as liar liars with their pants on fire, but the CIA headquarters has the King James Bible verse that is its best-known expression carved in stone in the lobby: “And ye shall know the truth and the truth shall make you free” (John 8:32). Tell that to the dudes who are still in Guantanamo Bay after being subjected to the Agency’s post-9/11 kidnap and torture program. If this is not reason enough to make one wonder who, exactly, is served by the narrative that the truth is self-evident — without power playing an inexorable role in our understanding of the truth and its consequences — I don’t know what is.
But that’s a caricature, and this is still a very sexy viewpoint. Take it from a woman who sometimes gets a rash smack across the middle of her face — a rash that actually has pretty high diagnostic value for a supposedly “difficult diagnosis”: We want to live in a world where the truth can be self-evident. We feel that we live in a world where “you know it when you see it.” When it’s your truth, it is self-evident, sometimes, to you. Also, it’s no fun to mistrust your own eyes and ears, or to look for bias and error everywhere. For one thing, the other kids call you names.
Besides, isn’t it true at least some of the time that some people know some of the things, and there’s no interpretation in the matter at all? Isn’t that how science gains in cumulative knowledge production, and society benefits from it, sometimes? Is there room for two apparently diametrically opposed viewpoints about scientific truth in the same reality?
Viewpoint 2: When Science Speaks, Call The Doctor
Nope, argues the dissenting voice in the current scientific and popular discourses on reality and how we interpret it (aka evidence, science, knowledge production, epistemology, truth). Not in any meaningful sense.
As Greenland quips: “DATA SAY NOTHING AT ALL! Data are markings on paper or bits in computer media that just sit there… If you hear the data speaking, seek psychiatric care immediately!” (See additional Greenland references here.)
Similarly, in her popular YouTube video “Follow the Science? Nonsense, I say,” physicist Sabine Hossenfelder argues that evidence doesn’t observe, analyze, and interpret itself.
These are funny rants. There is truth in them. And it would be very silly to disagree with Sander Greenland about anything relating to statistical methodology, because he’s a leading world expert on the subject who’s probably published a seminal paper on whatever you want to learn about this week.
But of course these jokes are playing on taking figures of speech literally, and I am going to ruin them. When we say “data say” or “science says,” we mean that the truth is self-evident from correct interpretation of the evidence. Not that the spreadsheet is speaking to us. And we should seriously consider the possibility that, sometimes, maybe the evidence really is indisputably clear.
But before considering that possibility, I want to say that this viewpoint is also very sexy. Call it Perspective Walks (PeW). No wonder there are so many mistakes in serious published scientific and journalistic outlets (to say nothing of less gate-kept spaces), if everyone is interpreting everything all the time. It explains a lot of bad quality without necessary reference to structural incentives that we could, perhaps, over time, change. And so, if we buy this viewpoint, then maybe we don’t have to have a crisis of conscience about science and society, because the problem of non-neutrality is unsolvable, anyway. Crises of conscience can ruin your whole day.
Do I mistrust the emotional convenience of a narrative that may be interpreted as absolving us of doing difficult work of political change using the typical revolutionary banner of statistics? Reader, I do.
Ambiguity and Uncertainty Are Spectrum Phenomena: So What?
Just because some cases (like lupus diagnosis) are notoriously difficult doesn’t mean that all diagnoses are hard to make, or that all outcomes are hard to measure with relatively little ambiguity or uncertainty. Abortion reversal is on the opposite end of the spectrum. Reversing initiated medication abortion by flooding the system with progesterone after taking the progesterone-blocker mifepristone seems to usually work to save the pregnancy, resulting in a live baby several months later. The data can’t speak for themselves, but give them a year and they’ll be babbling. The outcome measure doesn’t get less ambiguous or more certain than this: baby or no baby.
Ok, it’s not really that simple. That’s one binary outcome measure. Science involves lots of choices that require interpretation, and they add up to way more opportunity for ambiguities and uncertainties than can be envisioned on a single spectrum. Fine. Envision the spectrum waving like a wand along a great many different dimensions. Call it an ambiguity and uncertainty furball.
There’s still an unambiguous, certain binary outcome in cases such as this. They exist.
Now here is a brief imagined debate between proponents of these two viewpoints:
Truth Talks (POV 1/ToT): The clarity of abortion reversal favors my story. It shows that science can speak for itself.
Perspective Walks (POV 2/PeW): No, the fact of this spectrum favors my story, because usually outcomes aren’t this clear, and if something is a little bit exegetical as a matter of truth, that carries the binary.
ToT: What? No, the case carries the binary in favor of my position.
(There is a brief arm wrestle over whose camp benefits from the fact that cases like abortion reversal exist on the pole of a spectrum; nobody wins.)
PeW (panting): Look, too, at how the dominant narrative says abortion reversal is impossible or unsafe. That shows science doesn’t speak for itself, even in a case where the outcome measure is totally unambiguous and certain. Just because you think your interpretation is less wrong than other people’s — even if you’re 100% right — doesn’t actually make it self-evident in science as a matter of social reality.
ToT: Doesn’t it? Maybe indisputable outcomes can set the terms of knowledge for those who care to acknowledge empirical reality (aka scientists, rational people).
PeW: Absolutely rational people don’t exist. Science is done by scientists, and we’re all human beings (aka bipedal disasters). It may be tempting to say that surely things like DNA evidence are not exegetical at all, when really, people are prone to making the same, old statistical inferential mistakes there as ever.
ToT: Ok, but this is about people making mistakes. When someone identifies a mistake, people should correct it. Science and society should improve. The truth should out. And that truth is discernible in scientific evidence. Evidence that speaks for itself, at least sometimes, metaphorically.
PeW: Just because ambiguity and uncertainty are spectrum phenomena doesn’t mean we can provably bracket them sometimes, or that doing so would change the nature of truth. We still don’t know what we don’t know — always and forever. We’re still puny mortals fraught with hard-wired biases, playing a telephone game of trust with untrustworthy intermediaries about complex realities. Be humble.
ToT: What could be more humble than using the scientific method to gradually grow cumulative knowledge production in service of the public interest? This is my Humble Face.
Everyone Wants Better Science and Science Communication
There is a delicious irony in how diametrically opposed these two perspectives appear: Practically, they both seem to want the same thing. Better science and science communication. No one is arguing that outcomes don’t matter. ToT and PeW could probably both get on board a Do Better Science Bus that runs on wise outcome choice, careful measurement, and honest reporting. In other words, proponents of both views are often really talking about improving quality. So why not just say that? Do we need these competing truth claims at all?
Maybe PeW doesn’t say that because, as every smart kid knows, you can’t just blurt out the right answer every time someone gets it wrong. People will hate you. Is the social reality of science reform like that? Do leading methodologists have to say “this is a matter of everyone having a perspective and being fallible in the difficult dance of interpreting uncertain, ambiguous evidence” when what they might really mean is “a lot of people are biased and wrong, and I’m correcting them because my interpretation is demonstrably, empirically better than theirs — now please stop making the same dumb statistical mistakes over and over again”? The former certainly sounds humbler and nicer. But the latter statement has obvious relevance to improving evidence-based policy that would help a lot of people if we could get it. Sometimes, though, scientists do say this harsher kind of thing (albeit far more tactfully). So I suspect there may be something more going on here, and it’s about the context-dependent politics of dissent…
There’s similarly a possible element of politeness when ToT, too, doesn’t only say “this is about science reform,” but also says “the science speaks for itself.” Because that’s also nicer than just saying “my opponents are biased and wrong.” How can opposite truth claims have equal social softening effects for different iterations of the same basic “please improve science” message? Maybe it has to do again with the audience…
Before dealing with audience and the politics of dissent, let’s have a moment of silence for the dumpster fire that is science. It really does look like bias and error pervade much of science, which makes sense in part because people invest in their expertise and can then incur big (social, professional, financial) costs if they turn out to have been wrong and say so. This can make publishing controversial research hard in spite of its merits.
Take, for example, abortion reversal researcher and Franciscan University of Steubenville Psychology Professor Stephen Sammut’s statements to Life Site News about one journal’s treatment of his recent study “Progesterone-mediated reversal of mifepristone-induced pregnancy termination in a rat model: an exploratory investigation,” co-authored with Christina Camilleri and published in Scientific Reports in July 2023 (13, 10942). The first journal they submitted it to reportedly rejected it without warning after revision because it didn’t find successful reversal in 100% of the cases (only 81%), didn’t address long-term health and behavior in offspring, and conflicted with the opposition of the American College of Obstetricians and Gynecologists (U.S.) and the Royal College of Obstetricians and Gynaecologists (U.K.) to reversal attempts as “unproven and unethical.”
Ok, so not everyone wants better science and science communication. Some people just want science that confirms their preconceived notions or policy preferences, and to censor science that dissents. But hey, the fact that bias and error exist doesn’t mean that self-evident truth doesn’t.
Sammut’s remarks contain many great examples of ToT language:
“My results are important because faith does not impact their outcome… The experiments are conducted in rats and no amount of holy water or catechizing would convert them into any faith… What my experiments show is an objective, purely physiological perspective… Our obligation as scientists is not to distort the truth, to bend to ideology, political platforms, or disordered whims… Our obligation as scientists is solely to rigorously and genuinely search for and serve the truth, present within nature around us, using the God-given capabilities/talents within our respective fields of expertise.
“They [researchers] must speak out, address and challenge, with every ounce of effort that they have in store, this insult to the dignity and integrity of academia in general, but specific to this situation of science and medicine.”
So both ToT and PeW proponents do also sometimes explicitly say this is about science quality. Just saying that might sound too harsh, and tempering it with either of these opposed truth claims might do some social softening work. Or, just saying that could leave out an important philosophical truth (if one of these perspectives is right and the other is wrong), and/or alienate a particular set of people who might otherwise be inclined to agree with one rhetorical framing or the other (if they are both useful in different political contexts).
The truth part is that maybe ambiguity and uncertainty really run so deep, including into what we don’t know we don’t know, that — even in what may be exceptional cases of super-clear outcomes like abortion reversal — nine out of ten rogue methodologist ninjas would still not want to say that science can speak for itself. And/or, maybe there is so much political communication going on all the time, everywhere, that smart dissenters know to use one of these truth claiming narratives (ToT) when they’re talking to people who agree with them, and another (PeW) when they’re talking to people who don’t. Which means effectively that nine out of ten rogue methodologist ninjas will prefer PeW, because the whole point of being a rogue methodologist ninja is to find interesting and important places where the consensus seems wrong, and think about what’s going on with them.
Political Communication Means Audiences and Thresholds
As Aristotle said, man is a social and political animal; that makes our science a social and political creation, too. This is both a strength and a weakness for PeW. It’s a strength, because it means complex psychosocial and political contexts condition everything we can observe, analyze, and interpret. Most science is not as simple as counting babies. Point to PeW. But it’s also a weakness, because maybe some science is that clear, and then PeW is caught in its own web of perspectival bias if it can’t just say that. Point to ToT. Both perspectives, in my view, need the flexibility to co-exist. So possibly they already have it, and what I have done here is unfairly caricature both perspectives (sorry).
But that’s at the level of what seems logically correct to me (philosophy of science), and not what is useful in political communication (empirically). Could PeW work better against confirmation bias and hyperpolarization than ToT? Maybe self-evident truths are a nice place to live, but no one wants to visit there. Then again, (almost) no one likes to change their mind, anyway.
That’s ok, Glenn C. Loury explains in “Self-Censorship in Public Discourse: A Theory of ‘Political Correctness’ and Related Phenomena,” because persuasion isn’t necessarily the point. People self-censor in political discourse to signal their group membership. Except it’s not ok; Loury thinks it degrades science and society alike by driving publication bias (pp. 451–453) and making it unwise to just say what you mean about moral matters when it breaks from the party line (pp. 437–447, 454–455).
But at least we can understand such self-censorship as rational in the following sense. People often conform hard when dissenting would cost them, keeping quiet about polarizing stuff and so contributing to hyperpolarization when only hard-liners are willing to pay the price of nonconformity. What political scientist Elisabeth Noelle-Neumann called the spiral of silence, Stanford sociology professor Mark Granovetter modeled as a threshold phenomenon: behavior with an individual tipping point, of the kind Thomas Schelling famously showed can drive residential segregation outcomes even when segregation is not what people want. (Loury thanks Schelling in a note at the beginning of “Self-Censorship,” so we might envision lines extending out from Schelling to both Loury and Granovetter.)
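Threshold dynamics of this kind are easy to sketch. In the toy version below (my simplification, not Granovetter’s exact model), each person speaks up only once at least their personal threshold of others already have; removing a single low-threshold person can collapse the whole cascade into silence:

```python
# Granovetter-style threshold cascade: each person speaks up only once
# at least `threshold` others are already speaking. Iterate to a fixed
# point and return how many end up speaking.

def cascade(thresholds):
    speaking = 0
    changed = True
    while changed:
        changed = False
        new_speaking = sum(1 for t in thresholds if t <= speaking)
        if new_speaking != speaking:
            speaking = new_speaking
            changed = True
    return speaking

# Classic example: thresholds 0, 1, 2, ..., 9. The zero-threshold person
# starts a chain reaction, and everyone ends up speaking.
full = cascade(list(range(10)))             # 10 speak

# Remove the one person with threshold 1, and the chain breaks at once:
# only the zero-threshold person ever speaks.
broken = cascade([0] + list(range(2, 11)))  # 1 speaks
```

The near-identical threshold distributions produce wildly different collective outcomes, which is Granovetter’s point: observed conformity tells you little about private preferences.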
So maybe the choice between “Truth Talks” and “Perspective Walks” is empirically a threshold phenomenon: people tend to think and say the truth is self-evident once enough other people do. This would mean that speakers can use the rhetoric of ToT when they’re voicing a consensus view (including to an audience that agrees with them), but had better use PeW when they’re dissenting — or they might incur social and professional disapproval, isolation, and other adverse consequences. In this view, scientists are, in their very philosophy of science, caught in the web of social and political forces that shape science.
That’s a meta-level point to PeW, but it doesn’t touch the underlying reality — just our performances of these different stories about what science is or does. Maybe that’s what’s really going on here, after all: ToT says ground truth exists; PeW says yeah, but we inevitably get our humanity on it trying to uncover and tell its story. One is about existence, the other about narrative?
Case Study: Abortion Reversal
Let’s look more at this case study that contrasts nicely with difficult diagnoses in rheumatology. There is just no ambiguity or uncertainty in the outcome measure here. It’s baby or no baby — the ultimate binary.
I’ve argued previously that we should just get over (accept, not deny or try to solve away) the uncertainty we can’t get rid of in “diagnoses without names”; define the outcome of interest as stuff that actually matters to people’s lives instead (e.g., not “does the patient really have lupus or not according to strict classification criteria fraught with selection bias?” but “can the patient function?”); prioritize preventive measures and both research and clinical measurements that similarly matter to people’s lives; and stop systematically excluding uncertain cases from research, because this creates a biased picture that excludes tons of patients for no good reason (aka, is bad science). Here there doesn’t seem to be uncertainty, so does that break the pattern? And does this tell us anything about whether we need to pick sides in the ToT versus PeW debate, given that they are both really about improving science quality (although context and consequence matter)?
The pattern was:
Cognitive science: The uncertain nature of many diagnoses and treatment options likely triggers cognitive dissonance in well-meaning but time-pressured physicians who may hit their limits and blame patients for it. Economics: Short-term incentives for individual healthcare providers to save time and money may incur long-term costs (a collective action problem between healthcare providers on one level, and insurers and society on another). Philosophy of science: The truth is not self-evident, stories can’t tell themselves, and everything remotely human requires interpretation that can’t escape the interpreter’s perspective, miring us in a sheen of bias and error from which we can never fully escape.
Cognitive science again: It’s possible that abortion providers don’t offer patients reversal as an option because they’re busy and it’s not cost-effective. But it seems likely that they could tack it on as another set of service and good fees (incentive!). They just don’t want to because of the extra cognitive dissonance it would incur (“I help women” versus “I helped this woman make a horrible mistake she regrets and wants to reverse, or her baby will die — and it may die anyway at this point”). The cognitive science aspect seems to hold here, even though the outcome (baby/no baby) seemed so binary and certain, especially in comparison with classically difficult lupus diagnosis. The uncertainty comes in again around the abortion decision (maybe a mistake) and abortion/reversal attempt outcome (it could work, or not). Denying these uncertainties seems to again be the establishment medical response.
Economics comes in again in similar ways, too: Abortion providers, like most healthcare providers, might be hard-pressed to fit in last-minute, time-sensitive appointments to counsel and prescribe progesterone to distressed women to whom they had recently administered mifepristone. Short-term avoidance of time, hassle, and maybe money for retraining/changing abortion provider protocols (and changing people’s minds in the face of rampant misinformation that this “impossible” thing really does look possible) beats out long-term costs that would far outweigh it in any rational cost-benefit analysis. It does so because different actors pay those costs (collective action problems). Here the conflict of interests is between abortion providers on one hand, and the women and children their practice affects on the other. Abortion providers are hurting women by denying them accurate abortion reversal information and services, reinforcing the cognitive dissonance that drives denying that reversal may be possible. You’re not supposed to say it, but they’re also obviously hurting the babies who don’t get born because abortion reversal isn’t talked about and offered.
Here’s where the pattern didn’t seem to hold so well, and so I changed my mind: The philosophy of science story is still exegetical here, to be sure. You could argue that there’s a more correct interpretation of the data that suggests reversal is at least possible, more likely probable (say 60-80% odds). Does talking about the uncertainties in these estimates then count as PeW? But what if there is only one plausibly correct narrative one could spin from the available data, and it’s the dissenting one that abortion reversal looks possible (as I think is the case here) — we’re just trying to talk about the available estimates and uncertainties correctly? Just because some other cases (like lupus diagnosis) are more innately exegetical than this one, and the consensus on this case is wrong, doesn’t mean that this narrative is a matter of perspective. It means there is a singular ground truth to which the preponderance of evidence here points.
There’s no empirical leg to stand on denying the possibility of abortion reversal. The truth does talk in this case. But the rest of the structure from the medical mystery saga that graced the first decades of my life carries over pretty neatly here: Professionals experience uncertainty as threat, denying it and responding to perverse incentives to force patients to deny it, hurting some of the people they’re supposed to be helping in the process. The story can’t tell itself, but the evidence just isn’t all that exegetical when it comes down to outcomes we care about here. Point to ToT.
Maybe we can still have a ToT versus PeW debate in the abortion reversal case study. Remember I changed my mind in the course of thinking about lupus diagnosis (much ambiguity/uncertainty) versus abortion reversal (minimal ambiguity/uncertainty) from PeW to ToT for this case, but said that both perspectives seem to need to allow that the other exists and holds sometimes. It’s about context, and new research brings new context…
Future Abortion Reversal Research
Usually, the most potentially impactful experiments are ones we can’t run for practical and ethical reasons alike. This case study is different and special. Because abortion reversal research offers a chance to run an experiment that positively impacts a lot of people in a life-or-death way. And because it could even (theoretically) be done on citizen science terms, with women who want to try to reverse medical abortions helping themselves, free of influence by pro-choice or pro-life actors. (Although maybe you want pro-life help here, for the expectancy effects.)
The reason experimentation like this is possible here is that successful abortion reversal rates are consistently majority (it seems to work most of the time) but well less than 100%, and there appears to be a paucity of research on how to raise them. So one could ethically randomize women who want to attempt abortion reversal to progesterone and progesterone plus complementary therapy study arms, without putting either group at a known disadvantage.
Arguably, in fact, both sides may benefit from better communication about what we know about maximizing progesterone absorption. It’s not clear that clinical practice incorporates research suggesting that progesterone absorbs best rectally. A substantial minority of women have pregnancy diarrhea, though. Similarly, a lot of women complain about yeast infections from vaginal progesterone. The oral route can also be problematic for pregnant women, because not eating can make them nauseated, but oral substances absorb best on empty stomachs. No one likes shots, but intramuscular administration is another option, albeit a relatively staff-intensive one. So there’s reason to suspect that offering women information and choice about all this may improve abortion reversal rates by helping women improve their progesterone absorption on a case-by-case basis.
In addition, it doesn’t look like anyone has tested the interaction of an established herbal medicine that can raise progesterone with natural progesterone for abortion reversal. Chasteberry (aka Vitex agnus-castus) is a well-tolerated herb often recommended to women to increase their progesterone naturally. In several countries, it’s common practice for gynecologists to recommend it to women with PMS, irregular cycles (especially short luteal, or post-ovulation, phases, a common perimenopausal problem), and difficulty conceiving or carrying pregnancies, among other problems. Maybe, in combination with progesterone, chasteberry would boost abortion reversal success rates. Maybe not.
A randomized trial could answer that empirical question. Like all randomized trials on anything remotely related to this, it should block on gestational age, which influences medical abortion outcomes.
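For concreteness, here is a minimal sketch of what block randomization stratified by gestational age could look like. The stratum cutoff, arm names, and block size are all hypothetical choices for illustration, not a trial protocol:

```python
# Sketch of block randomization stratified by gestational age.
# Stratum cutoff (7 weeks), arm names, and block size are hypothetical.
import random

def block_randomize(participants, arms=("progesterone", "progesterone+vitex"),
                    block_size=4, seed=0):
    """participants: list of (participant_id, gestational_age_weeks).
    Returns {participant_id: arm}, balanced within each stratum."""
    rng = random.Random(seed)
    assignments = {}
    # Group participants by gestational-age stratum.
    strata = {}
    for pid, ga_weeks in participants:
        stratum = "early" if ga_weeks <= 7 else "late"
        strata.setdefault(stratum, []).append(pid)
    # Within each stratum, assign in shuffled blocks so the arms stay
    # balanced even if enrollment stops mid-stream.
    for stratum, pids in strata.items():
        for i in range(0, len(pids), block_size):
            block = [arms[j % len(arms)] for j in range(block_size)]
            rng.shuffle(block)
            for pid, arm in zip(pids[i:i + block_size], block):
                assignments[pid] = arm
    return assignments
```

Blocking this way guarantees that neither arm ends up loaded with later gestational ages, which would otherwise confound the live-birth comparison.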
It would be hard to compete with natural progesterone for demonstrated safety in pregnancy, but there’s no indication chasteberry is teratogenic (harms fetuses). It’s anybody’s guess how long the complementary treatment would be needed to possibly boost the ongoing pregnancy rate. Two weeks? Two months? Minimizing the exposure minimizes the risk of unknown side effects like possible birth defects (again, there’s not a known risk here). Maximizing it minimizes the risk of the pregnancy ending — assuming there’s a benefit to be had here at all (again, there’s not a known benefit here). Probably it’s worth exercising caution about unknown risks of taking something in pregnancy that messes with hormones, and capping chasteberry at a short exposure like two weeks.
It’s possible that the most effective intervention for increasing continuing pregnancy rates is at another point in the process. It may involve making information about reversal potential and progesterone itself available to women from abortion providers themselves. Because time matters, and women don’t know where else to turn. But that structural change doesn’t seem likely to happen anytime soon. Abortion providers seem to be too invested in denying the uncertainty and trauma of some abortion decisions, and their possible consequences.
In the meantime, women who might want to attempt reversal, might sometimes not contact pro-life helplines to do that, fearing social disapproval, intimate partner violence, or something else. So it might make sense to make the information and resources available to them through some other means, like science. One difficulty would then be getting the research to come up first in search, which is in theory a search engine optimization problem but in practice a market competition problem involving powerful, hyperpolarized social networks. That sounds hard.
Procurement, though, would be ridiculously easy. The U.S. is a crazy outlier here, as often, where you can buy progesterone cream (and chasteberry fwiw) on Amazon.com. So there’s no logistical reason that a citizen science platform couldn’t help American women independently run their own abortion reversal experiment to investigate the possibility of raising continuing pregnancy success rates among those who want that outcome. Then, women who want to reverse initiated medical abortions could try to help themselves and one another at the same time. (Yes, this is one of the platforms I keep talking about that doesn’t exist, but should.)
RCTs involve such simple, powerful methods and this case study focuses on such an unambiguous/certain outcome that this would be a great test case for further exploring related arguments. A clear live birth advantage or lack thereof in either group would seem to fit the ToT story. But, being me, I might stumble into enough difficult interpretive nuances that it would wind up fitting PeW. And I suppose one would learn something about political communication from doing this kind of research, like it or not.
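If such a trial ran, the headline comparison really would be that simple. A minimal sketch of a two-proportion z-test on a binary live-birth outcome, with made-up counts (the function name and all numbers are illustrative, not real data; stdlib math only):

```python
# Sketch: comparing live-birth rates between two trial arms with a
# two-proportion z-test. Counts below are made up for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for p_a vs. p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical arms: 70/100 live births vs. 50/100 live births.
z, p = two_proportion_z(70, 100, 50, 100)
```

With these invented counts, the difference clears conventional significance; with identical rates in both arms, z is zero and the p-value is one. The statistics are trivial precisely because the outcome is binary and unambiguous, which is what makes this case study such a clean test of the ToT story.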