Show Me the Mommies, Part 1
People are smart; we just need information presented so we can use it
This is the first in a set of two long posts about risk and the way we talk about it. This part is heavy on details and references, while the next recaps, calculates, and summarizes. The basic point is that life is risky, experts often present information about these risks in an unclear or incomplete way, and people are pretty good at making wise choices — when this information is presented so we can use it.
One big challenge in advancing science and science communication toward this end is that scientists usually perform statistical analyses without first thinking through causality, and then present results as effect estimates for aggregated populations. This is bad science and bad ethics. We can improve by thinking about causes, at-risk groups to whom we (social animals) owe a heightened duty of care, and how people make decisions under real-world conditions.
There is a massive risk assessment and communication literature to which I don’t begin to do justice here, but I would be remiss if I didn’t say I’m enjoying learning and thinking more about cognitive psychologist Gerd Gigerenzer’s work (h/t Richard McElreath). Gigerenzer, director of the Harding Center for Risk Literacy here in Berlin, builds on Herbert Simon’s idea that people make better decisions satisficing — deciding “good enough” on imperfect information — rather than optimizing, or trying to figure it all out and make the best possible choice. This strain of decision theory sings the counterpoint to the Kahneman, Tversky, Piattelli-Palmarini melody: Our cognitive-emotional software is full of bugs that make biases and mistakes common if not unavoidable, we’re just smart enough to see how stupid we are “if we are very careful and try very hard” — and part of the problem is that we use heuristics (information shortcuts) to sort things out. Sure, but heuristics work, Simon, Gigerenzer, and others say. We just need better risk literacy, science, and communication so people know when and how to use them. “The fault, dear Brutus, is not in our stars, But in ourselves…”
All of this matters all the time in complex modern societies, but the lens through which I’ve been reading and thinking about it lately comes from this women’s health study that made the rounds last month: “Combined and progestagen-only hormonal contraceptives and breast cancer risk: A UK nested case–control study and meta-analysis” by Danielle Fitzpatrick et al, published March 21 in PLOS Medicine. Fitzpatrick et al reported increased breast cancer risk associated with hormonal birth control, even when it didn’t contain estrogen. These findings are consistent with other studies suggesting progestins (synthetic progesterone) may contribute to substantially increased cancer risks in children exposed in utero, as well as to women’s breast cancer risk in the context of birth control and hormone replacement therapy (HRT).
Communicating these findings requires seeing and showing people what net gain and pain accrues from hormonal contraception — how many births? How many deaths? Compared to what? Answering these questions means doing science and science communication differently from how they’re normally done. Which raises the question: Why don’t we already do this?
Normal Science: Not Made for Human Consumption
Stratifying on covariates is the norm. Scientists, usually working with public funding, then publish papers estimating average treatment effects. This doesn’t tell the people who funded the research (the public) or those most affected by it (at-risk subgroups) what they need to know.
People are people. Not averages. And they need to be able to tailor risk assessment by basic demographics like gender and age when they do things like decide medical treatments or consider differential diagnoses. That’s why diagnostic decision support tools like Isabel let you do that. But this type of tool is an outlier from common practice; science hasn’t caught up. (Fair disclosure: I did some dissertation research using Isabel a decade ago, after using it in the med school library to diagnose my disabled mom with lupus some years before.)
To be fair, there are some good reasons for this lag. In many contexts — like policing and security screenings, credit score ratings and housing assistance applications — there’s the whole racial profiling snag. If you feed data based on racially biased outcomes into an algorithm, then you’re going to get racially biased predictions out of it; the old “garbage in, garbage out” machine learning problem. We don’t want police or other important institutions doing this, because (a) it violates liberal democratic principles like non-discrimination, and (b) it undermines security at the same time. So some places have tried to stop it by banning algorithms that use protected categories like race to make predictions that could affect freedom of movement and other civil liberties (e.g., Oakland and predictive policing). But they’re in the minority. It’s probably possible for activists at this point in history to stop complete bullshit programs like AI “lie detector” iBorderCtrl that risk automating discrimination, but impossible to meaningfully slow the forward march of biometric surveillance technology that runs the same risks. See you at the AI apocalypse.
Meanwhile, medicine doesn’t have such a good reason for often failing to let people zoom in on subgroup data to do more tailored risk assessment, like ordinary people making everyday medical decisions may often need to do. Patients may need demographic categories like race to be part of the equation, because they often affect subgroup risks with life or death consequences. And they would ideally get to see their own subgroup risks, not averages from models that stratify on covariates. Why, then, do medical researchers usually fail to fulfill this duty of care toward the people who fund and are most affected by their research?
Cynical answer: It’s just not part of normal science to do this extra work, so most scientists don’t bother doing it. Scientific articles aren’t written for ordinary people. Though many scientists strive to write as well as possible, they’re usually writing for fellow specialists doing similar research, leaving the science communication to someone else. I’m sure most of them are good people who help lost visitors find the bus stop and don’t kick dogs. This is exactly why it’s important to say…
Science Is Sick, But Scientists Don’t Have to Be
This is a terrible norm. I’m not saying this to criticize any specific authors who follow it (most researchers). “Don’t hate the player, hate the game.” The incentives are all wrong for scientists to do quality research in the public interest, including letting people know what their findings mean. Stratifying on covariates is part of this larger disease. Scientific institutions, like other social institutions, are not doing so well right now. It’s almost like we’re in collapse or something.
Say the catechism with me now: “We need less research, better research, and research done for the right reasons.” The science crisis is real — even if that’s a hyperbolic way of saying science is done by scientists who are human beings with problems, and who are all too often responding to perverse incentives. So natural selection of bad science is a problem. And we haven’t made meaningful headway solving pervasive methodological, quality, and integrity problems in science over the decades that experts have been writing about them. My contribution to solving these systemic problems is zero.
I’m saying this to contextualize how important it is for scientists who want to do better science in the public interest to think critically about using subgroup information differently — so that the people who are most affected by it know what science means for them. Methodologically, researchers do better science this way because it entails thinking about causality — ultimately, thinking like a graph instead of thinking like a regression. This avoids problems like stratification on covariates possibly introducing more bias than it accounts for — collider-stratification bias, as I’ve written about previously (h/t Sander Greenland).
At the same time, ethically, researchers also do better public service this way, because subgroup-updated net risks may help ordinary people better understand the possible life or death consequences of their decisions. This understanding should be the minimum floor for informed consent. It is absolutely fucking insane that it’s not already.
So better science on both levels (methods and ethics) entails zooming in on subgroups to show at-risk people net life or death risk estimates that better apply to them, instead of stratifying on covariates to estimate average risks in absolute and relative terms. People should get to know better what they’re choosing, when they choose medical treatments.
But wait! Do we get any other cool bonuses for just doing better science in the public interest? Monty, show our contestants what’s behind door number three!
Got Deadlock? Maybe Baby Bayes Can Break It
Making net risk estimates might also be a way for methodologists to intervene orthogonally in politicized debates (abortion, guns, climate…). In hyperpolarized discourses, scientists often either have or are perceived as having policy agendas that may delegitimize their work. This isn’t an argument that scientists should stay out of politics (I’ve argued previously that we can’t and shouldn’t). But we can still bring better methods to fields that desperately need them (most fields), without having a dog in the fight.
For example, one could estimate net risks to women’s health from different abortion regimes (conservative and liberal) in terms of maternal mortality from pregnancy/birth on one hand and substantially increased suicide risk associated with abortion on the other; often, researchers with more or less explicit policy agendas ignore each other’s points to focus exclusively on abortion’s women’s health benefits or risks, which does a disservice to women. The same sort of calculation could be done for exclusive breastfeeding versus formula-feeding and net risks of infant death in poorer and less developed settings with respect to jaundice and diarrheal disease; usually, researchers who believe in exclusive breastfeeding assume without calculations or evidence that formula-feeding would cause more infant deaths from diarrheal disease than exclusive breastfeeding causes via jaundice. What do these cases have in common?
Fuzzy intuition: There is something about the focus on one sort of risk (e.g., women’s health harm from abortion access restriction), and a preferred story for not entertaining the possibility of another sort of risk (e.g., women’s health harm from abortion access liberalization), that makes for a common type of bad science in hyperpolarized discourses. This sounds like confirmation bias, so it’s probably confirmation bias (it’s always confirmation bias, says my confirmation bias). Pivoting to compare different outcomes in different universes — allowing that every course of real-world action in these contexts yields both possible gains and possible pains — offers us the opportunity to learn from competing assumptions using data. In the medical realm, that looks like comparing different individual treatment options for different subgroups; but, at a larger level, like the examples above start towards envisioning, one could also estimate net costs and benefits of different policy regimes in much the same way. My loose model for this thinking is Steve Fienberg and the NAS committee’s Bayes’ application to alternate polygraph testing regimes at National Labs. So I think of these sorts of comparative net risk calculations as Baby Bayes, not to be confused with Full-Luxury Bayesian Inference.
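The comparative structure described above can be sketched in a few lines of code. Everything numeric here is a hypothetical placeholder (no real estimates from any study), and the names `net_harm`, `regime_a`, and `regime_b` are invented for illustration; the point is the shape of a Baby Bayes calculation, not the numbers.

```python
# A toy "Baby Bayes" net-risk comparison: tally BOTH kinds of harm under
# BOTH regimes, rather than each camp counting only the harm the other
# regime causes. All figures are made-up placeholders, per 100,000 people.
def net_harm(per_100k_risks):
    """Net expected harms per 100,000 people: sum the harm components."""
    return sum(per_100k_risks.values())

regime_a = {"harm_type_1": 12.0, "harm_type_2": 30.0}  # hypothetical
regime_b = {"harm_type_1": 25.0, "harm_type_2": 10.0}  # hypothetical

for name, regime in [("A", regime_a), ("B", regime_b)]:
    print(f"Regime {name}: {net_harm(regime):.0f} net harms per 100,000")
```

In a real analysis, each entry would be a subgroup-specific estimate with uncertainty attached, and the comparison would propagate that uncertainty rather than summing point values; this sketch only shows the accounting structure.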
“Just Get On the Pill”: Paternalism in Risk Communication
Returning to the hormonal contraception and breast cancer risk terrain: These risks come, like all other complex social and political realities, with some sad and unequal real-world baggage. Some women who die preventable deaths from breast cancer or other problems caused or contributed to by hormonal contraception may have not known the risks. They may have not even wanted to be on it in the first place. A lot of women have said that their bodies (and brains) have said no to medicalized birth control; Julie Holland, Sarah Hill, and Betsy Hartmann have made important criticisms of possible adverse effects of these interventions on women’s well-being and lives. A lot of healthcare practitioners and others have dismissed women’s experiences of these harms, and pressured them to do the same. Sometimes, the people exerting that pressure have real dominance, like financial power, to get what they want. It’s not universal, and reproductive coercion can work every which way; but “just get on the pill” is a thing.
This pervasive pressure extends from bedrooms and family meetings, to doctors’ offices and newspapers. One of the ways it plays out is through selective emphasis on absolute, as opposed to relative or (alternatively) net, risks. Again, my main methodological point is that everybody’s stratifying on covariates to estimate average risks, but this mashes subgroup information into a whole that’s often not particularly easily applicable to anyone.
People need to see net life or death risk estimates for their own subgroups, instead. And they need to see them in a comparative context. Because, in addition to not being walking averages, people are also not living in a world where absolute safety is possible. We all make less-bad choices between imperfect options, accepting some risks and minimizing others, all the time. Yet, net risk is not part of normal conversations about risk, at all.
Meanwhile, popular risk communication is largely still failing to just make both absolute and relative risks part of the same conversation, as they should be. One of the larger methodological questions this raises is how we should assess the risks of low-probability events that vary with subgroup membership like age. For example, breast cancer risk increases with age, as Fitzpatrick et al’s Figure 6 (below) shows. This means hormonal contraception is riskier for older users, because it compounds that increasing risk. One way to deal with this is to stratify on age in the main analysis, but to also clearly show the different absolute risks of breast cancer by age — as the authors did nicely here:
Many commentators communicated this picture in terms of absolute risk. For instance, The Guardian’s Health Editor, Andrew Gregory, explained: “In numerical terms, for every 100,000 women aged 16 to 20 who use progestogen-only or oral combined contraception, there are an extra eight cases of breast cancer. For those aged 35 to 39, there will be about 265 extra cases in 100,000.” This is nicely concrete.
But this interpretation omits the fact that any hormonal contraception prescription may be associated with up to around a 41% increase in breast cancer risk (see Fitzpatrick et al’s unadjusted OR upper bound in Figure 1, below). The possible 41% increase frames the possible risk in relative terms, making it sound substantial. Conversely, the absolute risk frame makes the same risk sound negligible; 265/100,000 rounds to zero.
(As a sidenote, Gregory got this figure from Fitzpatrick et al, who calculated it using the figures 1952.9 and 2218.0 in the last row of Supplementary Table 4, dividing by 100,000 to get the proportions. So the two 2% figures in Figure 6 result from rounding those proportions, while the 265 extra cases in 100,000 result from calculations using the unrounded versions.)
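The arithmetic behind the two framings is simple enough to reproduce directly from the Supplementary Table 4 figures cited above (1952.9 and 2218.0 cases per 100,000 women aged 35-39, without and with hormonal contraceptive use):

```python
# Cumulative breast cancer incidence per 100,000 women aged 35-39,
# from the last row of Fitzpatrick et al's Supplementary Table 4.
baseline_per_100k = 1952.9  # without hormonal contraception
exposed_per_100k = 2218.0   # with hormonal contraception

# Absolute-risk framing: extra cases per 100,000 women.
extra_cases = exposed_per_100k - baseline_per_100k
print(f"Extra cases per 100,000: {extra_cases:.0f}")

# Relative-risk framing: percent increase over baseline.
relative_increase = 100 * extra_cases / baseline_per_100k
print(f"Relative increase: {relative_increase:.1f}%")
```

This yields about 265 extra cases per 100,000 (roughly 0.3% of women) or, equivalently, about a 13.6% relative increase over baseline for this age group — the same data, two very different-sounding numbers. (The 41% figure discussed below comes from a different quantity, the unadjusted odds ratio’s upper bound, not from this table.)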
Setting the Terms: Power to the People
Best practice in clinical communication is to talk about possible outcomes in terms of both absolute and relative risks. The reason: neither frame alone gives patients the fuller information that they need to give informed consent, especially when it comes to low-probability, high-impact events like breast cancer. Picking just one frame tends to stack the deck toward a preferred interpretation (paternalism). Why is that not ok? Why shouldn’t experts decide which frame makes most sense, and “nudge” people in the right direction?
Gigerenzer critiques the evidence for this sort of libertarian paternalism, arguing that teaching people to become risk-savvy is a better idea than tricking them into behaving according to your idea of rationality. This is the same general idea that you see in a lot of literature on countering misinformation, too — teaching people to question the source works better than debunking a particular myth. In other words, “Give a man a fish and he’ll eat lunch. Teach a man to fish, and he’ll make us all sandwiches.” Or something.
My intuition here runs the same way, but goes more deeply anti-authoritarian, for a change. We have to worry, for instance, about rampant industry corruption in science. This could be reformulated as empirical questions like: Do peer-reviewed articles tend to disproportionately emphasize one absolute versus relative risk frame, and to do so in favor of reported or easily verifiable conflicts of interest? People can be pretty good at correcting for biases; but does the frame game make that harder unless — or even when — they’re reminded to weigh these evaluations as biased?
There are larger term-setting issues here, too. The evidence-based medicine movement has long been characterized by patient attempts to highlight that their outcomes of primary interest are not necessarily the ones that researchers or clinicians prioritize. For example, mothers might care more about preventing net injuries and deaths than about keeping hospitals’ C-section rates down. They might also care more about ensuring their newborns don’t suffer common and preventable harm from accidental starvation, than about breastfeeding rates — especially if they knew that breastfeeding’s claimed benefits lack evidence and may even reflect misinterpreted harm.
Back in hormonal contraception and cancer-land, there’s an argument that the terms of the discourse favor pharmaceutical company interests. When it comes to contraceptive research, the modern medicalized paradigm has a snooty attitude toward the free, universally accessible, non-carcinogenic rhythm and withdrawal methods that don’t stand to make pharmaceutical companies any money. Researchers could be running studies on teaching men perfect-use withdrawal with its 96% efficacy rate, which is comparable to most other contraceptives. No one runs these studies, because no one funds these studies, and some women die as a result of using carcinogenic contraceptives instead.
Show Me the Mommies: Highlighting Postpartum Breast Cancer Death Risk
Getting back to Fitzpatrick et al, the authors did this totally normal thing by adjusting for whether women had given birth recently, and how recently. But mothers are at substantially increased risk of breast cancer. Modeling away this information is a different strategy from zooming in on this at-risk subgroup. There’s a case for zooming in, instead: Postpartum breast cancer (PPBC) is around 2-5x more likely to metastasize and thus kill than other types. How do we know?
In a pooled analysis of 15 prospective studies, Nichols et al 2018 found childbirth substantially increased women's breast cancer risk, peaking around five years postpartum, with the association reversing in sign only around 24 years after birth.
In a retrospective cohort study of 619 women aged ≤45 years diagnosed with breast cancer during 1981-2011 (the Colorado Young Women’s Breast Cancer Cohort), Callihan et al 2013 found that cases diagnosed <5 years postpartum had a substantially increased risk of distant recurrence (31.1 versus 14.8 percent) and an even more substantially decreased five-year overall survival probability (65.8 percent versus 98 percent) compared to nulliparous cases. Even after adjustment for biologic subtype, stage, and year of diagnosis, PPBC was more likely to metastasize and kill.
In a cohort study of 701 women aged ≤45 diagnosed with breast cancer (also using cases from the Colorado cohort, but now diagnosed between 1981-2014), Goddard et al 2019 replicated and extended these findings to show that PPBC diagnosed within 10 years of birth was also associated with increased lymphovascular invasion, lymph node involvement, and metastasis risk: around 2x the distant metastasis risk of nulliparous cases, rising to 3.5-5x higher risk for cases diagnosed at stage I or II, and remaining significant after adjustment for other predictive features.
There’s also evidence suggesting that it may be particularly damaging for children who are still growing up to lose a parent. One might argue that these deaths are qualitatively different.
Bracketing that, let’s just say that cancer deaths (not cases) are the outcome we care about most for risk assessment purposes. This changes the analysis we want to do on Fitzpatrick et al’s data to talk about these findings’ practical significance, so women can make better-informed choices. To zoom in on appropriate subgroups to estimate net life and death consequences of birth control choices when it comes to breast cancer, we need to think a little more first about what breast cancers are deadliest, and why.
That’s So Meta: Metastases and Tumor Types Both Matter for Breast Cancer Survival
The reason PPBC is so lethal compared to other breast cancers is the same reason other cancers generally kill: Metastasis. PPBC metastasizes at much higher rates than other breast cancers, and metastasis “is the primary cause of cancer morbidity and mortality.”
Within metastatic breast cancers, hormone receptor “status is the significant prognostic factor which determines the survival of bone metastatic breast cancers.” Triple-negative breast cancer seems to be the deadliest regardless of stage of diagnosis — probably because it’s aggressive and has fewer treatment options, although newer treatments (immunotherapy and PARP inhibitors) are being tested. The “negative” refers to estrogen and progesterone receptors (ER and PR) and the level of the growth-promoting HER2 protein. Triple-positive breast cancers, conversely, have more treatment options and thus better survival.
ER+ breast cancers seem to be less lethal long-term than ER- ones, according to Caggiano et al 2011’s analysis of 2001-2008 California Cancer Registry data. This holds true even among postpartum cases, with ER+ breast cancer patients’ 5-year metastasis-free survival rates around 75% — versus around 50% for ER- breast cancer patients, whether they had children or not, according to Goddard et al 2019’s study of the Colorado Young Women’s Breast Cancer Cohort of 1981-2014. Jiang et al 2022 similarly found in a larger U.S. cancer database (SEER, 2010-2018) that ER+/PR+ breast cancer patients with bone metastasis survived best (of patients with bone metastases), followed by single PR+ bone metastatic cancer patients.
This raises more questions: Why the metastasis spike in postpartum breast cancers? Why does ER+ seem to predict better survival (although it still carries increased PPBC metastasis and long-term death risk)? To what extent is it a useful heuristic to talk about “breast cancer risk” instead of looking at subtypes like this when we talk about risks of hormonal birth control? Is this type of narrower subgroup focus helping or hurting us in coming to grips with information in an actionable way, anyway?
It’s about moderation — some subgroup focus helps us make the most relevant information more actionable, while too much overwhelms with complexity. Here, the limits of the literature make the issue theoretical. Because ideally, we would look at PPBC cases by subtype (e.g., triple-positive), analyzed by hormonal contraceptive use before and after pregnancy, to better estimate deaths. There would likely be power and sparse data bias problems in the analysis due to limited data. Maybe that’s why I can’t find that sort of analysis. Experts frequently lament having limited data on PPBC patients. It’s luckily not that common (more on this later).
So it appears we should, but currently can’t, more directly assess the causal contribution of hormonal contraception to PPBC death risk. But we can still do better using subgroup data to help people improve their own risk assessments using the evidence we do have. We just need to do a quick tour of what we seem to know about the first two questions here — why the lethal PPBC risk spike, and what’s up with these hormones…
Breast Is Stressed: The Breastfeeding Myth and Lethal Cancers
Welcome to another chapter in our ongoing saga of debunking the breastfeeding myth: The oft-proclaimed advice that breastfeeding lowers breast cancer risk is, at best, substantially misleading. Like much of modern science, it fails to consider causality. It also simply gets the facts wrong. A central mistake is comparing apples, or much more common postmenopausal breast cancers — and oranges, or premenopausal breast cancers with much higher metastasis and thus death rates.
In reality, pregnancy, breastfeeding, and/or involution — the process of breast tissue remodeling that usually happens postpartum if mothers don’t breastfeed, or post-lactation if they do — generate a huge spike in particularly lethal breast cancers (PPBC). Many factors seem to contribute: Genes matter. Hormone exposure, cell division, inflammation, and immune suppression matter, especially during pregnancy (estrogen, progesterone, and growth factors likely all play a role). Tissue injury and related wound healing processes during lactation and involution matter, in combination with the usual suspects — genes, inflammation, and immune suppression. Along with these factors, cell migration during involution looks especially dangerous, as cancer cells may walk along the rebuilding tissue latticework from breast to lymphatic tissue (“seeding”), dooming some mothers in days to die in years of metastatic cancer.
The postpartum effect (more breast cancer) doesn’t reverse to a possibly protective effect (less breast cancer) for over 20 years — and then only for women who had their first baby before age 35. Maybe. We don’t know if there really is a protective effect of breastfeeding against later breast cancer at all. It could be instead, for example, that healthier girls and women develop healthier breast tissue that is both less prone to developing later (postmenopausal) breast cancer, and better able to lactate, both as a result of cells differentiating more. This may be the case in subgroups of women who had low progesterone during adolescence, as from the common endocrine disorder polycystic ovarian syndrome (PCOS). We don’t know. But reverse-causality could explain the possible link, and the evidence does not suggest that breastfeeding mitigates the particularly lethal risk of PPBC (Nichols et al, p. 7). So stop telling mothers that breastfeeding will save their lives and do their laundry.
There’s even some suggestive evidence that the way we do breastfeeding today may be risky rather than protective when it comes to PPBC. For example, experts often encourage women with mastitis to treat it, in whole or in part, by breastfeeding more. Mastitis is a common and excruciatingly painful infection of the breast with possible bacteriological, fungal, and autoimmune inflammatory etiological components. Even when mastitis has progressed to an abscess, the standard recommendation is to drain it and keep breastfeeding. But, strangely, many mothers stop breastfeeding when they get mastitis, instead — so-called “early cessation.” Healthcare providers often see their role here as pushing for more breastfeeding. (I had to flash a hospitalist to get antibiotics along with the lecture for mine.)
A harm prevention-focused alternative approach might instead involve offering education and support to mothers with mastitis and other breastfeeding problems who may wish to limit further opportunities for tissue injury by stopping. Mothers need to know that weaning looks potentially quite dangerous as a window for cancer development. There’s some evidence that they may minimize these risks by weaning slowly not abruptly, and tamping down inflammation — with ibuprofen the evidence-based, potentially cancer-preventive choice as a widely available COX-2 inhibitor. It may also be a good idea to not add extra hormones during this window, although nursing women are routinely offered hormonal contraception… One reason is that these hormones appear to be carcinogens. What’s up with that?
Estrogen: The Good, the Bad, and the Worse
Estrogen is usually flagged as the dangerous lady hormone, with progesterone cast as its harmless sister. But the truth is less certain and more complex.
In some ways, estrogen’s bad reputation is well-earned: Estrogen exposure in premenopausal women usually happens in the form of the widely-used combined birth control we call “the pill” — which contains the synthetic estrogen and progesterone derivatives ethinyl estradiol and progestin. It substantially increases the risk of ischemic stroke and blood clots, and may increase autoimmune disease activity. With modern, low-dose formulations, we’re talking about an increased ischemic stroke risk of around 56-186%. With blood clots, the relative risk of thrombosis increases 3-5x with combined oral contraception. And it’s not clear estrogen is safe for women with active or familial autoimmunity: Estrogen receptor genetic mutations may contribute to cytokine dysregulation in lupus, estrogen seems to play a role in lupus development, and we don’t know what roles added estrogens (e.g., from hormonal contraception) may play. This goes back to highlighting net payoffs: With alternatives like progestin-only hormonal contraception, the copper IUD, and withdrawal that don’t increase these risks, why would you accept them — as the majority of women using modern, medicalized birth control do?
One possible explanation is that tolerability varies widely. A lot of women try several things to find what works for them without common side effects like crippling depression. And, as usual, subgroups matter: Most women don’t have migraine with aura, a stroke risk factor. Many don’t have stroke risk factors like being over age 35 and smoking. Autoimmunity is far more common in women than in men, but it’s still a minority concern. So it’s not clear from this evidence so far that adding extra estrogen to most women’s lives is a bad option.
Robert Sapolsky goes much farther: On the whole, if you have a choice between having a lot of estrogen in your bloodstream, or not, he says you should go for having a lot (around minute 34). “It enhances cognition. It stimulates neurogenesis in the hippocampus. It increases glucose and oxygen delivery. It protects you from dementia. It decreases inflammatory oxidative damage” — with an asterisk for possibly triggering autoimmune inflammation — “to blood vessels, which is why it’s good at protecting from cardiovascular disease” — with asterisks for possible ischemic stroke and blood clot risks. But this is all about brain and cardiovascular health, ignoring substantial risks in those contexts. What about breast cancer?
There’s some evidence that estrogen heightens breast cancer risk, with lots of ways this can work: Estrogen is endogenous, but body fat, diet, chemical exposure, and estrogen-containing birth control can increase it. Research links all these pathways with possible breast cancer risk.
Snacking Against Cancer? Estrogen, Soy Consumption, and Some Big Effects
Enough doom and gloom; let’s talk about snacks. The meta-science disclaimer is that everything we eat is associated with cancer, and Schoenfeld and Ioannidis showed in a 2013 systematic cookbook review that these associations are often implausibly large, based on weak evidence, and shrink in meta-analyses. Breast cancer is no different, although there’s suggestive evidence on lots of things like citrus, cruciferous vegetables, green tea, fish oil, and other standard lifestyle recommendations like exercising and not smoking or drinking alcohol. Basically, the literature says to be healthy with respect to breast cancer prevention as you would normally seek to be healthy if you sort-of know the health literature. But then, there’s soy.
Animal studies sometimes yield scary results, like that soy may stimulate breast cancers, especially estrogen-dependent tumors. But people are different animals. And human studies fairly consistently suggest that eating lots of soy, whose isoflavones bind and activate estrogen receptors, is either neutral or decreases breast cancer risk, as well as potentially reducing risks of recurrence and death in breast cancer patients. Some research suggests timing matters, and that eating lots of soy in adolescence is most impactful.
Most experts think this is because soy isoflavones are so estrogen-like that they crowd endogenous estrogen out of its receptor space. There are additional possibilities, like particular intestinal bacteria that only some people have metabolizing the soy isoflavone daidzein into equol, a substance that may block estrogen’s negative effects; small-N research doesn’t seem to bear that out, but does suggest different gut microbiomes respond to soy quite differently, so maybe the pathway is still “soy hits something in the gut microbiome —> this factor metabolizes soy differently —> the resulting other something creates special estrogen-blocking and cancer preventing/beating goodness — but only for some subgroup(s).” Equol still looks promising as a chemoprotective agent. It seems to make women excrete more estradiol in urine, so it’s apparently changing estrogen metabolism. And if we can’t make it, that’s no problem! Because cows can make it for us by eating phytoestrogen-rich things that cows like to eat anyway, like clover, and then we can drink it in their milk. Beats a fecal transplant. Or it would, if you could buy equol-standardized milk.
More practically speaking, Harvard’s soy page has a nice breakdown of which soy foods contain how many isoflavones, and a great summary of research on breast cancer and soy. This includes many highlights from the Shanghai Women’s Health Study showing some massive possible protective effects of eating large amounts of soy against breast cancer risks including ER+, PR+ postmenopausal and ER-, PR- premenopausal cancers and risk of breast cancer death. Caveats: They’re often talking about sub-subgroup effects, and the results could be due to chance. So YMMV, and we don’t really know if soy matters at all, or could save a lot of lives. Still, soy snacks are tasty, and this seems like an easy, low-cost lifestyle experiment to run.
What if you hate soy, or soy hates you? Don’t feel left out! Other plants also contain genistein, the soy isoflavone that seems to be doing the good stuff; hops and red clover are two. Hops is the calming herb that usually helps preserve and flavor beer, and it makes an amazing funky tea, if you’re into that sort of thing.
So the overall picture adds up to one where estrogen does indeed look probably risky in the context of breast cancer. Got it! Find estrogen and hit it over the head with a frying pan of soy!
Was Estrogen Framed? The Paradox and the Other Suspect
There’s just one problem with this story: Estrogen also treats breast cancer. This is “the estrogen paradox.” Richard Santen summarized it thusly: Short-term use seems beneficial and long-term use risky, except that high-dose estrogen therapy treats postmenopausal breast cancer — and so does the anti-estrogen agent tamoxifen. Santen fingered the same possible mechanism for the paradox as Sapolsky to explain why HRT seemed quite risky for postmenopausal women in cardiovascular terms, when other data suggests estrogen is hugely protective: Whether women’s bodies still have premenopausal estrogen levels, or not, seems to make a big difference.
In the case of breast cancer, Santen suggested adding estrogen to an estrogen-rich milieu reduces apoptosis (the cell death you want cancer cells to undergo), while adding it to a postmenopausal milieu shaped by long-term estrogen deprivation triggers apoptosis. Sapolsky said the opposite: that estrogen deprivation in postmenopausal women changed estrogen receptors in a way that made later HRT risky. There’s no conflict here: They’re talking about different risks with different mechanisms, making the same operative distinction between pre- and postmenopausal hormonal milieus. This is a frequent explanation for conflicting hormone-cancer findings; e.g., Stoll 1999 suggested the same distinction explains why DHEA might increase breast cancer risk in postmenopausal but not premenopausal women. It’s plausible, but it’s also possible that something else is going on here and we just don’t understand it yet.
So maybe you would want more estrogen or estrogenic exposure to prevent or treat breast cancer, after all — if you’re postmenopausal — but then you still might not want it due to other risks (e.g., cardiovascular disease). And for breast cancer, it probably still depends on the subtype and stage, with estrogen seeming more dangerous in the early stages, and possibly more helpful later… Not exactly advice you can apply at home, since cancer doesn’t send you a memo.
So much for exonerating estrogen. But what about the other suspect? That sweet, innocent lady hormone, progesterone? Could it actually be responsible, in whole or in part, for the increased breast cancer risk most people associate with estrogen?
Progesterone and Breast Cancer Risk
There are two, diametrically opposed views on this. Some researchers think estrogen was framed, progesterone is the scary breast cancer risk-elevating culprit, and this misunderstanding is lethal. Voicing this position in 2008, Cecylia Giersig of the German Federal Institute for Drugs and Medical Devices wrote in “Progestin and breast cancer. The missing pieces of a puzzle” in the Federal Health Gazette Health Research Health Protection (Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz):
The previous assumption that progestin does not promote breast cancer development needs to be re-examined since a growing body of evidence indicates the opposite… a considerably higher number of breast cancer cases have been reported from Germany on POC [progestin-only oral contraceptives] than on the widespread used COC [combination oral contraceptives] (111 versus 12)… the big resemblance among the breast cancers reported for POC and their similarity with breast malignancies diagnosed in pregnancy suggest the existence of a pattern rather than pure coincidence.
This article was the eighth hit in an April 2 PubMed search for “birth control progestin breast cancer.” It suggests that, fifteen years ago, we had good reason to suspect that progestin-only contraception causes breast cancers that look like what used to be called pregnancy-associated breast cancer — and is now called postpartum breast cancer — including “in many cases rapid tumour growth and large tumour sizes,” with daily dose and exposure duration before breast cancer diagnosis inversely correlated. This flags progesterone, which is also heightened during pregnancy and is immunosuppressive, as a possible contributor to these particularly aggressive and lethal breast cancers. Although, again, there are probably lots of potentially contributing factors to different subtypes, and we really don’t know.
As one might expect from this suggestive research, there’s cutting-edge science on the progesterone blocker (and abortion pill) mifepristone and breast cancer. As usual, subgroups matter — the early promise seemed to be for patients with higher levels of Progesterone Receptor isoform A (PR-A) than PR-B. More recent research suggests promise in triple-negative breast cancer in mice, and in twenty women with luminal breast cancer with high PR-A/PR-B ratios.
So we do have increasing evidence that progesterone — like estrogen — may be a key lever in some breast cancers: Increased progesterone correlates with increased cancer, and blocked progesterone with successful cancer treatment. In the case of both hormones, there’s evidence that more hormone exposure means more cancer risk that’s sensitive to that hormone, and less hormone exposure (through blocking the receptors) means less cancer or better treatment outcomes. This may come as a surprise to many women…
Natural = Safe? The Alternate Progesterone Universe
In conflict with this evidence, experts often tell women that we know that progesterone is totally safe. A lot of women who have struggled to conceive, carry pregnancies to term, and deal with hot flashes or insomnia from polycystic ovarian syndrome (PCOS) or (peri)menopause have heard this from gynecologists who offer progesterone or progestins to help. The common safety reassurance is wrong, perhaps horribly so.
Giersig’s appears to be an extreme cautionary voice, but she makes a plausible case that estrogen was framed and progesterone is the breast cancer baddie. She suggests estrogen got an unfairly bad rap in the late 90s in association with breast cancer risk associated with HRT — coinciding with the confounding addition of progestin. She points out this is also a confound in studies linking hormonal contraception with increased breast cancer risk. Experts added progestins to HRT to decrease endometrial cancer in postmenopausal women, but then stopped the big Women’s Health Initiative (HRT) study early when they found the combined hormone treatment increased breast cancer, cardiovascular disease, strokes, and thrombosis. According to Giersig and others, it was probably progestin increasing breast cancer. But other evidence suggests that estrogen alone may be risky, while some progestins really are riskier than others when it comes to breast cancer risk. So combined treatment with estrogen and progesterone or dydrogesterone (in HRT-land) really do look like the probably-safer options compared to estrogen monotherapy. But beyond that, we still can’t say for sure what’s going on here.
Meanwhile, there’s also a camp of experts who say synthetic progestins are carcinogenic, but natural progesterone is protective and effective against ills from PMS to fertility problems. Some make the same sort of claim about bioidentical hormones more generally. These claims are unproven. Citing Santen, Files et al 2011 write that “Despite the contention by proponents that [compounded bioidentical hormone therapy] has been found to be safer, more efficacious, or less likely to cause breast or uterine cancer than FDA-approved HT, no reports published in peer-reviewed journals support this claim.”
Still, absence of evidence isn’t evidence of absence, and the natural = safe argument has long resonated with many. For example, Campagnoli et al 2005 summarized the consensus then as shifting from the idea that estrogen plus progesterone increased breast cancer risk, to the suspicion that natural progesterone is risk-neutral or protective against breast cancer, and it’s synthetic progestins that pose serious risks. This does seem to be how a lot of gynecologists currently practice.
But if that pendulum swing was then underway, then the swing back seems to have already begun. Trabert et al 2020 present an updated comprehensive review on progesterone and breast cancer in which they say “Inconsistent evidence suggests that progesterone can increase, decrease, or have no effect on mitotic activity and proliferation in breast epithelial cells.” In other words, we still don’t know, on balance, what effect natural progesterone has on breast cancer risk. But doctors routinely offer hormone treatments, especially progesterone, to women seeking to become or stay pregnant (or, for the hormone-addled among us, simply get some sleep). Is that safe? What does history suggest?
Death Cab for Cutie: A Short History of Prescription Hormones in Pregnancy
Maybe it’s completely safe, or maybe natural progesterone increases the risk of particularly aggressive and highly lethal breast cancers (think: PPBC following on the heels of sky-high pregnancy progesterone levels) — even though many gynecologists are telling healthy young women to smear themselves with this stuff from head to toe in order to get or stay pregnant. This dark potential resonates with the tragic DES chapter in medical history. A few generations ago, the synthetic estrogen diethylstilbestrol (DES) was widely used to prevent miscarriages. It didn’t work and had many adverse effects, including increasing breast cancer risk around 2x in women exposed to it in utero.
Similarly, research suggests that more recent use of the progestin 17α-hydroxyprogesterone caproate in obstetrics massively increased cancer risks in offspring. Risk increased with number of injections, and the possible risks appear quite substantial, including increased risks of colorectal (adjusted HR 95% CI 1.72-17.59), prostate (1.24-21), and pediatric brain (7.29-164.33) cancers. Early-onset colorectal cancer is a special concern.
But “Progesterone is not the same as 17α-hydroxyprogesterone caproate,” Drs. Romero and Stanczyk say. There’s suggestive data on male genital malformations and in utero exposure to various types of progesterone apparently including natural progesterone and progestins, but we’re not sure about cancer. Apparently, the old idea was that estrogen and progesterone together would synergistically help prevent miscarriage; but all that did was increase maternal cancers.
So does natural progesterone work to prevent miscarriage? Maybe, but findings from two recent randomized controlled trials suggest that it’s uncertain. Women need to know that doing nothing may be a safer bet.
Coomarasamy et al’s recent PROMISE trial found a 95% CI of 0.94–1.15 for live births across all participants, meaning the effect could have been null or small in either direction. The authors want to say that PROMISE and another big progesterone/birth trial from the UK, PRISM, found a small treatment effect (3%), and that the treatment worked way better for women with more previous miscarriages and threatened miscarriage. But these were post-hoc subgroup analyses that still left 95% CIs crossing the null, with the N going down and miscarriage risk going up in the subgroups of women with more and more past miscarriages. Based on this limited evidence, the authors say “treatment with vaginal micronized progesterone 400 mg twice daily was associated with increasing live birth rates according to the number of previous miscarriages.”
Stop. Post-hoc testing is bad. And these post-hoc analyses don’t even support that conclusion. They do not prove that the treatment (natural progesterone for miscarriage prevention) works. They just suggest that it could work, or not. Misrepresenting the evidence to vulnerable women who desperately want children is bad.
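To make concrete how much “it could work, or not” lives inside a rate-ratio CI like 0.94–1.15, here’s a minimal sketch. The 63% baseline live birth rate is my hypothetical round number for illustration, not a figure from either trial.

```python
# Hypothetical illustration: what a live-birth rate ratio with a
# 95% CI of 0.94-1.15 could mean in absolute terms.
# The 63% baseline live birth rate is an assumed placeholder,
# NOT a number reported by the PROMISE or PRISM trials.
baseline_rate = 0.63          # assumed live birth rate without treatment
ci_low, ci_high = 0.94, 1.15  # 95% CI for the rate ratio (from the text)
n = 1000                      # per 1,000 treated women

births_low = baseline_rate * ci_low * n    # worst case compatible with the CI
births_null = baseline_rate * 1.0 * n      # no treatment effect
births_high = baseline_rate * ci_high * n  # best case compatible with the CI

print(f"Per {n} women (assumed baseline {baseline_rate:.0%}):")
print(f"  CI lower bound: {births_low:.0f} live births")
print(f"  No effect:      {births_null:.0f} live births")
print(f"  CI upper bound: {births_high:.0f} live births")
```

Swap in your own baseline; the point is that one and the same CI is compatible with dozens fewer or dozens more live births per 1,000 women, which is exactly why “it was associated with increasing live birth rates” overstates what the data show.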
Coomarasamy et al don’t come out and say, but maybe we should suspect, that the treatment could also cause cancer in mothers and/or offspring, or not. The hope is that we would know from a smoking gun by now if natural progesterone during pregnancy were another DES or some-progestin-type chapter in medical history. But that’s a hope, not a certainty. This is why Coomarasamy et al also conclude “we recommend long-term follow up studies of babies exposed to first-trimester progesterone.”
Progesterone isn’t a miracle pregnancy drug, and we don’t know about its long-term safety for exposed women or children. But it’s used generously in common practice without informing patients that this is the state of what we know, and don’t know. It’s more pleasant for everyone to believe that doctors can offer patients a safe and effective treatment, than to admit that we may not have such a treatment.
Amid this uncertainty, there is relatively certain danger: It seems unhelpful and potentially dangerous to give pregnant women estrogen, synthetic estrogen, and at least some forms of synthetic progesterone (progestins). Progesterone may or may not help prevent miscarriages, premature births, and other adverse obstetric and neonatal outcomes. And in return, it may contribute, especially in some synthetic forms, to increased cancer risks in exposed children, as well as to women’s breast cancer risks — just as progestins seem to do in the context of birth control and hormone replacement therapy (HRT).
This raises more questions, like, what are the risks of preconception hormone exposure for children? This is a very important question, given that globally, hundreds of millions of women use hormonal contraception, and they often come off it right before trying to get pregnant. So we do know the answer to that question… Right?
Preconception Hormone Exposure and Offspring Cancer Risks: Unknown
We don’t know whether preconception hormone exposure may heighten offspring cancer risks in the same sorts of ways that fetal exposure in utero to some exogenous female sex hormones appears to. This apparent gap in our knowledge has potential implications for a very large number of children. Gynecologists routinely prescribe known carcinogens to women without telling them that we don’t know whether you should maybe not take these known carcinogens during the preconception period. How big of a problem could this be?
I previously wrote a bit about a related issue, overlooked preconception risks of common antidepressants. The issue here is similar. Oocytes (the egg cells that mature within ovarian follicles) take six months to develop. We know from other research on preconception risks from factors like famine and folic acid deficiency that something that can matter for lifelong health may matter months before conception. The literature on preconception exposure to common antidepressants and possible offspring harm is concerning. All the studies I looked at dealing with this — Brown et al, Hvid et al, Sørensen et al, Suarez et al, and Sujan et al — found that autism and other risks may have been heightened for children whose mothers took antidepressants in the preconception period. It’s safe to say that serotonergic antidepressants (SSRIs and SNRIs) are not proven safe in the preconception period. It would be great if doctors usually told patients this, but we’re not there when it comes to informed consent.
What’s different here is just how poor the evidence base appears to be. Although technically, it’s not an evidence base problem, but rather a problem using it. With the usual caveats (data have problems), anyone with access to a national registry could check on hormonal birth control prescriptions in the preconception period, and their relation to offspring cancer risks. But I keep searching PubMed for things like “preconception oral contraception offspring cancer,” and can’t find any studies like this. Could be because they’re null and sitting in a file drawer (the file drawer problem, aka publication bias). Could be because possible preconception risks are under-recognized and under-studied. Could be something else…
The potential problem here is not just with hormonal contraception. Similar criticisms apply to common infertility treatments, in which experts often tell women struggling to conceive or carry pregnancies to term to use hormones like estrogen or DHEA (which is metabolized into androgens and estrogens) in combination with progesterone or progestins until they get a positive pregnancy test. Leave aside these treatments’ potential risks for the women themselves, possibly accumulating increased cancer risks over years of trying to have a baby, and their unproven efficacy for supporting full-term pregnancies. A positive pregnancy test comes about two weeks after fertilization. Two weeks of potential embryonic exposure to carcinogens.
Women consenting to these treatments deserve an estimate of the possible increased live birth versus offspring cancer death risks that these treatments may incur. They don’t get to come in at that ground floor of informed consent. According to Lim et al’s 2013 Cochrane review, we don’t even know if these estrogen plus progesterone therapies help or hurt miscarriage rates and pregnancy outcomes — in exchange for raising maternal cancer risk.
This is the kind of thing I think should be big women’s health news. But I guess it’s hard to make a headline from the uncertainty surrounding the risks of current standard care. You don’t make any friends telling women undergoing fertility treatments in desperate hopes of having families that it might kill them and not work. Strangely, you hurt their feelings instead.
Still, synthetic estrogen and at least some forms of synthetic progesterone are carcinogenic. They’ve been widely recognized as such for many years. The International Agency for Research on Cancer in 2007 classified combined estrogen-progestin hormonal birth control and HRT as carcinogenic to humans (group 1). In this context, it’s weird that we’re still reading studies announcing hormonal birth control increases breast cancer risk as breaking science news.
Millions of women use hormonal contraception, which is carcinogenic, and then stop using it to become pregnant. We should know if a waiting period using barrier methods or withdrawal as birth control, to minimize preconception offspring exposure to these carcinogens, is safer for these women’s subsequent children. And we don’t know.
What do we know? Returning to the question of increased breast cancer risk from hormonal contraception: What are we talking about here? How much risk, for how long, and from what types of hormonal birth control?
Hormonal Birth Control and Breast Cancer Risk
Fitzpatrick et al’s findings suggest that progestin-only birth control may be slightly safer in the short term and slightly riskier in the medium term, because the possible risk associated with the combined pill may attenuate faster (see Fitzpatrick et al’s Figure 2, below). But we’re not sure; the confidence intervals largely overlap. These results debunk the myth that progestin-only birth control is safer with respect to breast cancer risk than the combined pill: We don’t know that, and it may or may not be.
Like a lot of debunkers, Fitzpatrick et al accidentally perpetuate part of the old myth, wrongly declaring “There is no excess risk more than 10 years after stopping” (caption, Figure 6). While that possibility is consistent with the attenuation their findings show, it’s only one possible universe contained in the data. Their data also show that the risk may instead persist, with the unadjusted model estimating an up to 51% increased risk with the last progestin-only contraceptive pill prescription 5+ years ago.
This mistake seems to reflect the common cognitive bias of dichotomania in methods: The right answer is “we don’t know” whether progesterone-only contraception carries substantial long-term increased breast cancer risk, but it feels better to say that we do, and it doesn’t. We should resist this impulse. But it’s hard, probably even more so for clinicians, whose patients just want to know whether something is safe or not. This gets back to the pervasive paternalism of “just get on the pill.” It’s a norm, so confirming it’s fine is the preferred story. Everyone says their lines and culture reproduces, but women sometimes die from the way we do things.
Overall, there’s insufficient evidence to support the claim that progestin-only birth control heightens breast cancer risk less than its estrogen-containing hormonal contraception alternatives. Some evidence suggests that it may actually be more dangerous in this regard, instead (see Giersig). We really want to know if breast cancers that result after women take progestin-only birth control — and any hormonal birth control — are killing them at higher rates than other premenopausal breast cancers, like postpartum breast cancers are. And how hormonal contraception contributes to different subtypes and metastasis rates of all breast cancers, especially PPBC. There’s a lot we don’t know, so we have to let a lot of possibly important subgroup data go. But what we do know doesn’t warrant dismissing possible risks.
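A relative risk like that “up to 51%” only means something against a baseline, and translating it into absolute terms is the kind of work risk communication should routinely do. Here’s a sketch; the baseline incidence is my assumed round number for illustration, not a figure from Fitzpatrick et al.

```python
# Hypothetical illustration of why relative risks need a baseline.
# The baseline incidence below is an assumed placeholder for a
# premenopausal population, NOT a number from Fitzpatrick et al.
baseline_per_100k = 60       # assumed annual breast cancer cases per 100,000
rr_upper = 1.51              # upper-bound relative risk (from the text)

cases_at_rr = baseline_per_100k * rr_upper
excess = baseline_per_100k * (rr_upper - 1.0)

print(f"Assumed baseline: {baseline_per_100k} cases per 100,000 per year")
print(f"At RR {rr_upper}: {cases_at_rr:.1f} cases per 100,000 per year")
print(f"Possible excess: {excess:.1f} cases per 100,000 per year")
# The same RR on a rarer outcome would mean far fewer excess cases,
# which is one reason relative risks alone make poor risk communication.
```

The absolute excess scales with whatever the true baseline is for a given subgroup, which is why subgroup-tailored baselines matter so much here.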
It also looks like phasic hormonal contraception (i.e., combined birth control pills with a dummy pill week) may be safer than non-phasic (see Fitzpatrick et al’s Figure 3, below). This is what you would expect to see if hormone exposure causally contributes to cancer risk: Less hormone exposure, less cancer. This may be an argument for exploring a different use pattern of the progestin-only pill, which is now only taken continuously, as well as advising more caution in foregoing the combination pill break week. Experts often say there’s no need for a period, and some even encourage alternative regimens of combination pill use including continuous use. But women need to know that the break week in the typical use of the combined pill has a substantial possible cancer risk-mitigating benefit.
In addition, it looks like hormonal IUDs, which are often billed as lower-risk than birth control pills, may carry an up to 54% unadjusted increased risk of breast cancer. So much for the theory that having hormones coming from a device embedded in an internal organ is definitely safer than taking them by mouth. Maybe, maybe not.
Myth-Based Medicine and Our Shared Vulnerability
Once you’ve seen these findings, the ostensible relative safety of hormonal IUDs over birth control pills may sound like a ridiculous modern medical myth. After all, IUDs are medical devices embedded in an internal organ. OF COURSE there’s substantial possible risk here. How did anyone ever believe otherwise?
No one has the time or inclination to think through and look up everything. We are all prisoners of the telephone game of trust. And the experts — doctors, friends, and others — whom we trust are also human beings who have pervasive cognitive biases, and respond to perverse structural incentives, in a universe full of uncertainty and complexity. We need one another, and we let one another down.
Ours is also a universe without absolute safety. Everyone dies. All choices have risks. So we need to go beyond usual conversations about absolute and relative risks, to compare net risks from different choices. In the contraception context, this clinical risk communication job is doubly existential, because it means comparing net birth and death risks. On one hand, doing this even remotely right means doing a lot of work for deceptively precise numbers that will still contain a lot of uncertainty and reduce a lot of complexity. And there will always be more subgroup calculations to tailor.
On the other hand, risk communication has to happen somehow. Estimating hypothetical projected births and deaths resulting from different forms of contraception for a particularly at-risk subgroup seems like a good place to start giving women the most vital information about the possible consequences of their everyday medical choices. Information that it’s surprising they don’t already have. It’s also more methodologically sound than the standard practice of instead stratifying on covariates without first thinking about causality… Zooming in on subgroups can be seen as a very simple way to hash out some empirics while thinking about causes. Causal diagramming is important, too, but this set of posts brackets it. (My DAG being illegibly complex.)
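As a shape for what such a net-risk comparison could look like, here’s a toy sketch. Every number in it is a placeholder I made up for illustration; only the side-by-side, two-outcome framing is the point.

```python
# Toy sketch of comparing net risks across birth control choices.
# ALL numbers are made-up placeholders for illustration only;
# real estimates are the subject of the next post.
methods = {
    # name: (unintended pregnancies per 1,000 woman-years,
    #        excess breast cancer cases per 1,000 woman-years)
    "combined pill": (70.0, 0.2),    # placeholder values
    "hormonal IUD":  (2.0, 0.3),     # placeholder values
    "condoms":       (130.0, 0.0),   # placeholder values
}

def net_table(methods):
    """Return rows sorted by excess cancer risk so the trade-off can be
    read side by side instead of one risk at a time."""
    return sorted(methods.items(), key=lambda kv: kv[1][1])

for name, (pregnancies, cancers) in net_table(methods):
    print(f"{name:14s} pregnancies/1,000 woman-yrs: {pregnancies:6.1f}  "
          f"excess cancers/1,000 woman-yrs: {cancers:4.1f}")
```

The table format matters more than any single cell: it forces the comparison of net outcomes across choices that the usual one-risk-at-a-time conversation never makes, and subgroup-specific baselines can be swapped in per patient.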
The next post offers imperfect net birth and death risk estimates for different common birth control methods, focusing on the at-risk subgroup of mothers. This illustrates an alternative to the standard approach of stratifying on covariates — using subgroup information differently in the interests of better science and a better society. If patients could tailor their own estimated risks by subgroup categories before consenting to medical treatment, it would also enable a different, more meaningful approach to informed consent. We often have the tools to support this, in terms of both evidence base and computing power. We just don’t prioritize translational medical research as a society, which is unfair to the people who fund our science and stand to benefit from it the most.