Abortion Myths, Part 2
Come for the collider-stratification bias, stay for the “science crisis” and collapse of informed consent
My last post criticized consensus abortion science for “correcting for” confounds without regard for causality, risking the introduction of more bias than it removes. In it, I also began noting other problems in this science, such as dismissing a substantial minority of women’s self-reports of related coercion, trauma, and mental health problems instead of triangulating those reports against other observational data, including the fairly consistent, large abortion-suicide associations found in excellent population register studies. There are numerous other problems in consensus science and in the way experts communicate it to pregnant women, including nullism, misuse of statistical significance testing, and generalizing from special subpopulations. On one hand, these are common methodological mistakes across medicine and science (the so-called “science crisis”). On the other hand, they invalidate most informed consent procedures about abortion. (To be fair, informed consent is something of a fairy tale anyway.) Here are a few examples.
Example 1, nullism - especially misinterpreting no proven causal link as proof of no risk
Planned Parenthood and Women on Waves are two prominent abortion providers that are also commonly cited as authorities on abortion risks. Both misinterpret the absence of certain evidence about the magnitude, kind, and causal explanation of possible associated harms as “no risk.” Planned Parenthood says “Unless there’s a rare and serious complication that’s not treated, there’s no risk to your ability to have children in the future or to your overall health. Having an abortion doesn’t increase your risk for breast cancer, and it doesn’t cause depression or mental health issues. Abortions don’t cause infertility either.” But none of these possible risks has been proven not to exist; there’s evidence that they may. In the case of suicide and other mental health problems, the evidence suggests a substantial possible link. We don’t know whether or not it’s causal.
Similarly, as part of debunking alleged abortion myths, Women on Waves says “There is no evidence of increased risk of long term ‘post-abortion’ stress, depression or anxiety, or any other psychological illness.”
Again, the evidence suggests a substantial possible risk of adverse mental health effects, including a roughly 2x increased suicide risk. And under the site’s heading “What are the risks?”, no acute psychological, fertility, or long-term health risks are listed at all.
Pro-choice NGOs typically make similar claims. For example, the Guttmacher Institute says “abortion does not increase women’s risk of mental health problems.”
Other medical authorities make similar claims, too. For example, the MGH Center for Women’s Mental Health - Reproductive Psychiatry Resource & Information Center of Massachusetts General Hospital of Harvard Medical School has this web page entitled “Pregnancy Termination Does Not Affect Women’s Long-term Mental Health or Well-Being.”
These sources are all making the logical and statistical error of nullism: arguing that, because the evidence doesn’t clearly prove a causal link of a particular magnitude between abortion and mental health or other harm, no such link exists. But failure to prove A doesn’t prove not-A. The right answer is that there’s a lot we don’t know.
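The point can be made concrete with a quick simulation (all numbers hypothetical): give a study a true 2x relative risk on a rare outcome and a realistic sample size, and it will usually fail to find a “statistically significant” association, even though the risk is real by construction.

```python
import math
import random

random.seed(1)

def frac_nonsignificant(n_per_arm, p_control, rr, n_sims=2000):
    """Fraction of simulated two-arm studies whose two-proportion
    z-test fails to reach p < 0.05, even though the true relative
    risk `rr` is real by construction. Illustrates why "no
    significant association" is not "no risk"."""
    p_exposed = p_control * rr
    nonsig = 0
    for _ in range(n_sims):
        # Bernoulli draws for exposed (a) and unexposed (b) arms
        a = sum(random.random() < p_exposed for _ in range(n_per_arm))
        b = sum(random.random() < p_control for _ in range(n_per_arm))
        p_pool = (a + b) / (2 * n_per_arm)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        # Count the study as "nonsignificant" if |z| < 1.96
        if se == 0 or abs(a - b) / (n_per_arm * se) < 1.96:
            nonsig += 1
    return nonsig / n_sims

# Rare outcome (0.5% baseline), a true doubling of risk, 500 per arm:
print(frac_nonsignificant(500, 0.005, 2.0))
```

With these made-up parameters the test misses the (real) doubled risk in the large majority of simulated studies, which is exactly the situation in which nullism turns “we couldn’t detect it” into “it doesn’t exist.”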
This mistake seems to go by different names in different conversations, even within statistics. What Sander Greenland calls nullism in a logical and statistical sense, Stephen Senn calls Hermione’s problem in a risk communication context. Just as Hermione in Shakespeare’s The Winter’s Tale cannot positively prove her fidelity (absent help from the Oracle), Senn explains, so too medicine cannot positively prove that an intervention (such as the MMR vaccine) is safe. But in the MMR context, public attention focuses on doubt sown by fraudulent research (Wakefield’s retracted autism paper), whereas in the abortion context, scientific and professional pro-choice discourse focuses on the idea that evidence of substantial possible harms is itself confounded at best (as in population register studies typically showing around 2x increased suicide risk) and disreputable at worst (as in pro-choice researchers’ dismissals of pro-life researchers’ studies).
Example 2, statistical significance testing misuse - misinterpreting uncertain ranges as certain binaries
Like the others, this is a common methods mistake, and abortion science is no exception. Take, for example, Warren et al.’s 2010 Perspectives on Sexual and Reproductive Health study, in which researchers used data from the National Longitudinal Study of Adolescent Health to see whether abortion in adolescence was associated with subsequent depression and low self-esteem. Table 2 shows very wide 95% compatibility intervals that include a 2x increased possible depression risk, but the authors report no risk on the basis of a point estimate below 1. This is a misuse of statistical significance testing.
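To see how a point estimate below 1 can coexist with a compatible 2x risk, here’s a toy calculation (the counts are made up for illustration, not Warren et al.’s data): in a small study, the odds ratio can sit below 1 while its 95% interval stretches past 2.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald 95% CI for the odds ratio of a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_hat) - z * se)
    hi = math.exp(math.log(or_hat) + z * se)
    return or_hat, lo, hi

# Hypothetical small study: 6/200 depressed among exposed,
# 8/200 among unexposed.
or_hat, lo, hi = odds_ratio_ci(6, 194, 8, 192)
print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# prints: OR = 0.74, 95% CI (0.25, 2.18)
```

The point estimate is below 1, yet the data are compatible with more than a doubling of risk; reporting this as “no risk” is the error described above.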
Another instance of statistical significance testing misuse in consensus abortion science comes from the Turnaway study. Turnaway is the standard citation for the assertion that abortion doesn’t adversely affect women’s mental health. But the evidence reported there is consistent with a substantial possible depressive effect of abortion. The authors interpret the evidence as showing “similar levels of depression” at 1 week after seeking an abortion, but the reported intervals are too wide to support that claim: “(turnaway-births, 0.13; 95% CI, -0.46 to 0.72; turnaway-no-births, 0.44; 95% CI, -0.50 to 1.39).” Both comparisons are compatible with anything from a modest decrease to a substantial increase in depression, up to a 1.39-point difference in the turnaway-no-birth comparison. “No statistically significant difference” is not the same thing as “similar levels.”
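The distinction at issue (failing to find a significant difference versus demonstrating similarity) can be checked mechanically: an equivalence claim requires the whole interval to fit inside a pre-specified margin of practical irrelevance. A minimal sketch, using the turnaway-no-birth interval quoted above and an arbitrary, purely illustrative margin of 1.0 points:

```python
def claims_supported(diff, ci_lo, ci_hi, margin):
    """Compare two readings of the same interval estimate.
    "No significant difference" only asks whether the CI crosses 0;
    "similar levels" (equivalence) asks whether the whole CI fits
    inside a pre-specified margin of practical irrelevance."""
    not_sig_different = ci_lo <= 0 <= ci_hi
    demonstrably_similar = -margin < ci_lo and ci_hi < margin
    return not_sig_different, demonstrably_similar

# The turnaway-no-birth depression difference quoted in the text,
# with an illustrative (not pre-registered) margin of 1.0 points:
print(claims_supported(0.44, -0.50, 1.39, margin=1.0))
# prints: (True, False)
```

The result (True, False) is the whole problem in miniature: the difference is not statistically significant, but “similar levels” is not demonstrated, because the interval still allows a 1.39-point gap.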
Example 3, generalizing from the particular - misinterpreting convenient data as widely relevant
Horvath and Schreiber 2017 argue in Curr. Psychiatry Rep. that Turnaway shows abortion causes no mental health problems, while “The early medical literature on mental health outcomes following abortion is fraught with methodological flaws that can improperly influence clinical practice” (e.g., not correcting for confounds). Turnaway studied women who sought abortion near the gestational cut-off, some successfully and others not.
This selected for a subset of women who account for a single-digit minority of abortions. Most Americans, and especially most American women, are not okay with abortion after this cutoff, and most abortions are performed before 9 weeks. So it’s a fair bet that women who are comfortable having an abortion around that boundary are not representative, which means these findings probably don’t generalize to other groups of women. In fact, there’s some evidence that women with moral misgivings about abortion may be more at risk for adverse effects (in line with research on moral injury as a PTSD phenotype). That would mean Turnaway produced results biased against finding mental health harms, which is exactly what it’s routinely touted as disproving.
So Turnaway is a standard citation for the claim that abortion causes no adverse mental health effects, cited by NAS among others, but it shows nothing of the sort. There are lots of problems with it, including substantial selection bias from attrition. What its misuse arguably shows is that we would really like to have better data on abortion’s causal effects on women’s mental health. And, in the absence of that data, some researchers are motivated to behave as though that evidence exists when it doesn’t. Why? Turnaway researchers are abortion proponents and providers with vested interests (psychological, professional, financial) in the conclusion that abortion helps women, so they produce science purporting to show that, which is then misinterpreted as evidence by people who also want to believe that narrative.
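The generalization worry can also be sketched as a toy selection model (entirely hypothetical numbers, including the assumed risk function): if adverse-outcome risk rises with moral misgivings about abortion, a study that mostly enrolls women with few misgivings will understate the population-average risk.

```python
import random

random.seed(0)

# Toy model, hypothetical numbers: a woman's adverse-outcome risk
# after abortion rises with her level of moral misgivings m in [0, 1].
def outcome_risk(m):
    return 0.05 + 0.20 * m   # 5% baseline, up to +20 points

# Population: misgivings uniformly distributed.
population = [random.random() for _ in range(100_000)]

# A study that (like a sample selected for comfort with late abortion)
# only enrolls women with few misgivings (m < 0.2):
study = [m for m in population if m < 0.2]

pop_risk = sum(outcome_risk(m) for m in population) / len(population)
study_risk = sum(outcome_risk(m) for m in study) / len(study)
print(round(pop_risk, 3), round(study_risk, 3))
```

Under these made-up assumptions the study’s risk estimate (about 7%) is roughly half the population-average risk (about 15%), so a null-ish finding in the selected sample says little about women in general.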
Other findings pregnant women should know (but no one tells them)
Backstory: I already closed all the tabs in my valiant effort to reboot my computer, install updates, and install the software needed to do the homework for “Statistical Rethinking,” Richard McElreath’s amazing course on Bayesian statistics and causal inference. This series is my desperate attempt to climb out of my research rabbit hole so I can set up my new computer and do the homework, then do more and better science, and then do more and better science communication. This is just one of many topics where women deserve the truth, and they’re not getting it. And I should really stop using browser windows with tab explosions as a filing system.
So these are broad points with sparse references pulled from huge literatures. I’m trying to write for intelligent, well-meaning people who probably genuinely disagree with the proposition that abortion may cause common and preventable harm to women. Maybe someone else will recompile a nice list of links for these points, or maybe I’ll do it later if people want that.
Point 1: Abortion increases risks of something going wrong if you have a baby too soon after it, which a lot of women do
There’s some evidence that it may be safer to have a baby 16-18+ months after an abortion rather than sooner, with shorter intervals linked to increased risks of outcomes (like preterm birth) that can affect future offspring’s mortality risk and lifelong neurodevelopmental health, among other things. In these findings, it’s also notable that a large proportion of women who have abortions get pregnant and have a baby soon after. To me that suggests possible abortion regret, and that regret may be under-reported if you read that behavior (abortion shortly before a full-term pregnancy) as indicating true preferences in a way that self-reports may not (people lie, including to ourselves).
For example, in Kc et al.’s 2017 PLoS ONE study, it’s hard to tell from small numbers, but it looks like surgical abortion may increase the risk of bad outcomes for future offspring, including premature birth. Medical abortion may be safer in terms of these risks (no surprise, because surgery can damage delicate tissue), but more data are needed. This is a common pattern of findings.
Point 2: Multiple abortions are associated with substantial health risk increases
There’s also evidence that multiple abortions may carry substantially increasing physical and mental health risks to women. The increasing mortality risks are remarkable, and arguably belong in abortion counseling and informed consent procedures along with information on the suicide link. This is another common pattern of findings.
Many women choose multiple abortions without any recognition of the fact that this may massively increase health risks. Mazuy et al 2015 in Population & Societies find that, in France, first-time abortion has been steadily decreasing while repeat abortions have been increasing. They suggest “This is probably due to a growing diversity in women’s childbearing, conjugal, affective and sexual trajectories, and for some, more specifically, the use of contraceptive methods inappropriate to their life situations, [6] despite the wide availability of contraception. [7] The choice of whether or not to end a pregnancy has become a right rather than a last resort.” This highlights questions about whether substantial possible adverse effects of repeat abortions on mental health and mortality are cause, consequence, or both — and how to better inform women that we don’t know. (Pro tip: Say you don’t know.)
Further Readings
[H/t SG for the excellent readings.]
“Scientists rise up against statistical significance,” Valentin Amrhein, Sander Greenland, Blake McShane, Nature, 2019.
“Mathematical vs. Scientific Significance,” Edwin G. Boring, The Psychological Bulletin, Vol. 16, No. 10, October, 1919.
“The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” Sander Greenland, Wake Forest Law Review, Vol. 39, 2004, No. 2.
“Instead of maintaining a hypothesis until forced to give it up, a good scientist should regard any hypothesis (including a null one) as conjectural, seek and welcome refuting evidence, and willingly abandon the hypothesis when faced with an alternative hypothesis that better explains or fits the evidence” (p. 296).
…
“Based on classic cognitive studies [citing Tversky and Kahneman, 1982, “Judgment Under Uncertainty: Heuristics and Biases”; and Massimo Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds, 1994], I suspect that most scientists (like most people) tend to misperceive their own viewpoint as more typical or common than it is, and are overconfident about the validity of that viewpoint” (p. 307).
“Null misinterpretation in statistical testing and its impact on health risk assessment,” Sander Greenland, Preventive Medicine, 2011
“There have been several statistical expositions emphasizing how small P-values are routinely overinterpreted as providing strong evidence against a null hypothesis (Berger and Sellke, 1987; Sellke et al, 2001).”
…
The flip side of this “false-positive emphasis of some popular accounts (e.g., see Ioannidis, 2005, vs. Goodman and Greenland, 2007)” is that “testing also contributes to false-negative as well as false-positive conclusions” (p. 225).