Confronting spin science with statistical literacy changes what we make of cutting-edge research results on the seasonality of preterm birth risk, metformin versus insulin for gestational diabetes, growth hormone for IVF, and breastfeeding interventions. Welcome to my accidental semi-regular digest of the medical literature and its methodological mistakes in contraception, pregnancy, and neonatology.
***
Psychosocial breastfeeding interventions may substantially raise immediate risk of postpartum depression, latest Cochrane Review finds but buries
The Cochrane Library published my comment on the recent breastfeeding-depression review (Lenells et al. 2025), which misinterpreted its core statistical evidence, hiding risks and illustrating the very confirmation bias I warned the authors about years ago, in response to the planned review protocol’s publication. According to Cochrane guidelines, Lenells et al. are required to respond to the comment’s substance. Watch this space.
(Or not: publishers often don’t follow their own guidelines, responding to perverse incentives to refrain from correcting the scientific record when they’ve published something wrong. Democracy may “die in darkness,” but apparently science drowns in post-publication peer review.)
***
“April showers bring May flowers,” but could conceiving March to May bring preemies your way?
The blackbirds are singing in the dead of the Berlin night. The fields of crocuses are blooming in Humboldthain. And Nature is grabbing even us the screenest of screen monkeys by the short and curlies, suggesting that we get us to the funnery and go — outside.
Not so fast, Rabbit McBunny. A recent retrospective cohort study of singleton Dublin births between Jan. 2013 and Dec. 2022 found higher preterm birth risk for babies conceived in the spring compared with those conceived in other seasons (“Seasonal variation in the incidence of preterm births,” C. MacBride et al., Eur J Obstet Gynecol Reprod Biol, Feb. 2025;305:298-304). So maybe better to wait for summer to let spring fever take over?
Maybe. However, there are three common methodological mistakes in the abstract alone.
Mistaken emphasis on winter conception as relatively safe
First, the authors emphasize that spring conception is risky while winter conception is safest, concluding “A low prevalence of PTB was demonstrated when conceptions occurred in the winter months. However, there was a greater incidence of preterm births between December and February.” Yes to the second claim, no to the first. What do the findings really say?
“Women who conceived in winter between December-February had significantly lower rates of PTBs when compared to other seasons (5.4 % vs 6.5 % (spring) vs 5.6 % (summer) and 5.4 % (autumn), p < 0.001).” The PTB risk here appears to be identical for winter and autumn conceptions, which are both very close to that associated with summer. Spring appears to be the outlier.
The authors continue: “When considering only spontaneous preterm labor, this trend persists, with most women experiencing spontaneous PTBs conceiving during spring (6.7 % vs 5.5 % (winter) vs 5.7 % (summer) vs 5.5 % (autumn), p = 0.001).” Again, the spring risk jumps out as the largest in terms of effect size, versus identical winter and autumn risks and a slightly higher summer risk: the same pattern reported before, but now excluding PTBs not classified as spontaneous (whatever that means).
There is no evidentiary basis here for emphasizing winter as the safest season to conceive vis-à-vis PTB risk. Merry Christmas. In addition to this peculiar favoritism, the authors commit a classic statistical significance testing mistake.
Statistical significance testing misuse
They write:
Conversely, women who gave birth in December-February had significantly higher rates of premature births when compared to other seasons (6.2 % vs 5.8 %, 5.5 % and 5.5 %, p < 0.01). PTBs of spontaneous onset were highest between December and February, however no statistical significance was found (6.2 % vs 6.1 % (spring), 5.7 % (summer) and 5.4 % (autumn), p = 0.13).
In other words, when restricting the analysis from all PTBs to only those of spontaneous onset, the effect lost “statistical significance.” But the effect size barely changed; it’s about the same magnitude either way, depending on the point of comparison. In fact, it even grew a bit for winter births relative to autumn births!
However, we shouldn’t make much of either the reported loss of “statistical significance” or the tiny growth in effect size under the latter comparison. The effect size in either analysis is really pretty small; we’re talking about less than a percentage point. We can’t be sure this represents a real effect at all. And, if it does, it’s unclear what its practical significance is. Should people do their family planning around a <1% risk difference in PTB (or anything else)? That seems unlikely to make sense for most people, since other factors (like age, work/school schedules, and, well, spring fever) are usually also in play. Thus, this article looks like a great example of statistical significance testing misuse to hype claims that actually lack practical importance.
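To see why a decade of births nearly guarantees “p < 0.001” for any real-but-tiny seasonal wobble, here’s a minimal sketch of a two-proportion z-test. The per-season denominators are hypothetical stand-ins (the abstract reports only percentages), and the paper may have used a different test; the point survives either way.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference in two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical denominators (~20,000 conceptions per season); the abstract
# reports only the PTB percentages, so these counts are stand-ins.
z, p = two_proportion_z(0.065, 20_000, 0.054, 20_000)
print(f"absolute risk difference: {0.065 - 0.054:.1%}")  # ~1.1 percentage points
print(f"z = {z:.2f}, p = {p:.1e}")  # tiny p despite the tiny difference
```

With denominators that size, the p-value is microscopic even though the effect is about one percentage point; the p-value tells us nothing about whether the wobble matters.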
Finally, the authors make a third common methodological mistake…
Missing causal logic
Causal logic is missing here. It should precede statistical analyses.
Here’s what the authors say: “This study has shown that there is a seasonal variation in the incidence of preterm births in this Irish-based cohort… This is suggestive that there are potential risk factors associated with seasonal patterns that may be modifiable. Further research to identify these specific risks is warranted.”
Here’s what this means: They should have decided before running analyses if they were looking for potential causes or predictors, and then said that. The alternative was running analyses looking for statistically significant factors associated with outcomes, which says “nothing at all, since this ‘goal’ has no clinical utility whatsoever (beyond what it might suggest about causation or prediction)… [“Risk factor” is] basically just a catch-all phrase to cover up muddy thinking” (Darren Dahly, “Sorry, what was the question again?” Jan. 3, 2024, Life is pain, especially your data).
One possible interpretation, for instance, is that spring conception is riskiest because it follows relative winter inactivity and dietary differences (e.g., less fresh fruit) in the preconception period. Recall that babies affected in utero by the Dutch Hunger Winter had the worst outcomes when they were conceived around its peak: Preconception and early pregnancy nutrition may matter even more than nutrition during the rest of pregnancy.
But would we then expect to see the same relative low PTB risk for fall conception corresponding to summer birth extend beyond the European context? Given that heat exposure also predicts preterm birth, and suspected causal mechanisms have to do with greater oxidative and metabolic stress (including higher blood sugar in hot months), fall conceptions may instead increase PTB risk in less temperate places because they put third-trimester pregnant women and fetuses through too much heat stress, especially as climate change raises summer temperatures. (This was a Dublin study, and Ireland doesn’t get that hot.)
(Here it seems possible to both speak of prediction and address causality. Yet, I find it difficult and unnatural to weed out the language of risk, even while trying to do so. Is this a case of me needing to do better kicking a bad habit? Or methodologists needing to accept that the world uses this language in this way? I think the important substance is talking explicitly about causality, rather than forbidding certain terms. But I could be wrong.)
Non-random errors
As so often, these common methodological mistakes reflect both dominant sociopolitical bias and missed connections between the upper-echelon methods discourse and the rest of science. The bias here reflects sociocultural norms of “paranoid parenting” described by sociologist Frank Furedi and his parenting culture studies colleagues. The (familiar) idea is that modern Western societies increasingly expect parents, especially mothers, to mitigate all sorts of risks for their kids by being the best-educated, most active (and helicoptering) parents possible. As opposed to just using common sense and community to have a family and enjoy it.
The methods discourse this keys into, beyond statistical significance testing reform and the causal revolution, is the distinction between causal and predictive research questions on the one hand, and muddy “risk factor” research on the other.
***
GLP-1 agonists may reduce preterm birth risk, but other effects are small and long-term risks unknown
There’s a case for preconception GLP-1 agonist randomized trials: type 2 diabetes increases adverse pregnancy outcome risks for mom and baby, and evidence suggests periconceptional glucose control matters on a continuum (not just for diabetics). But then new evidence comes out about their possible benefits, ignoring the ocean of unknowns including possible risks, and I see the effect sizes and wonder if it’s really worth it. What if kids exposed to this early on turn out to be infertile as adults? What if they have heightened risks of schizophrenia or leukemia or (insert badness here)? More specifically moored in the literature (though still unknown), what if they are more aggressive, or have less sex? To be fair, by the same token, they could also grow up to have better orgasms, tighter tummies, and clearer skin. There are actually reasons to suspect the exposure could cause better epigenetic programming than the control: better metabolic health could have positive intergenerational consequences just as worse metabolic health could.
We just don’t know whatall’s going on or how it nets out. Meanwhile, the obstetric outcome effect sizes we’re talking about seem practically quite small, and there are other ways to push the same causal levers — known safe blood sugar control/gestational diabetes prevention interventions like diet, exercise, myo-inositol, and vitamin D — so why don’t we use them instead? (Ok, “known safe” is a big word. Inositol could also lower child IQ a bit by lowering maternal testosterone, for instance. Sorry, kids, mommy was treating her PCOS.)
The new evidence: In a retrospective cohort study using data from the nationally representative US Collaborative Network in TriNetX, matching 4267 women on age, race, ethnicity, and history of comorbid conditions, Imbroane et al. found using logistic regression that women with a GLP-1 agonist prescription within 2 years preceding the pregnancy
were less likely to develop gestational diabetes mellitus (18.2% vs 15.2%, respectively; odds ratio, 0.81; 95% confidence interval, 0.72-0.91) and hypertensive disorders of pregnancy (22.8% vs 19.9%, respectively; odds ratio, 0.84; 95% confidence interval, 0.76-0.94), experience preterm delivery (4.4% vs 3.0%, respectively; odds ratio, 0.68; 95% confidence interval, 0.54-0.85), and undergo cesarean delivery (19.7% vs 17.6%, respectively; odds ratio, 0.89; 95% confidence interval, 0.87-0.97) — (“Preconception glucagon-like peptide-1 receptor agonist use associated with decreased risk of adverse obstetrical outcomes,” Am J Obstet Gynecol, Jan. 2025, S0002-9378(25)00041-9).
First off, the authors said in their background section “there is limited research on the long-term effects of these medications on future pregnancies.” That’s true, but it sets up the expectation that this article offers a solution to that problem, and then I’m disappointed when the results don’t pertain to any long-term analysis, even though these things haven’t been on the market long enough to yield long-term data in the first place. (They should’ve been, but researchers upholding corporate silence norms set them back decades.)
Second, at a glance, these look like about the same effect sizes as in MacBride et al.’s seasonal conception/PTB risk paper (see above). The ballpark is an absolute risk difference of 1-3 percentage points. And, inching toward fuller interpretation of the 95% compatibility intervals (aka confidence intervals), it’s not clear there’s much of any effect here at all. The upper bound of three out of four is over .90, with one all the way at .97. This is basically 1 (aka nothing/no effect).
Third, let’s fully interpret the CIs: the possible effect estimated here of preconception GLP-1 agonist exposure on gestational diabetes risk is a 9-28% reduction, small on the lower bound and moderate on the upper. The effect on risk of hypertensive disorders of pregnancy is 6-24%: a negligibly small effect on the lower bound or a moderate one on the upper. For preterm delivery, the estimated risk reduction could be as small as 15% or as large as 46%. This is the only effect that looks both arguably clinically important throughout the whole estimated range, and possibly substantial on the upper bound. It’s also the biggest deal of all these outcomes: preterm birth risks death and disability. For C-section, GLP-1 agonist exposure decreased risk 3-13%; again, it should be said that the lower bound here is negligibly small.
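For readers who want the arithmetic behind that interpretation, here’s a minimal sketch using the odds ratios and CIs quoted above, reading 1 − OR as an approximate percent reduction. That reading is a simplification: with outcomes this common, odds ratios overstate risk ratios, so the true risk reductions are somewhat smaller than these numbers.

```python
# Reading 1 - OR as an approximate percent risk reduction (a simplification:
# with outcomes this common, odds ratios overstate risk ratios).
outcomes = {  # name: (odds ratio, CI lower, CI upper), from Imbroane et al.
    "gestational diabetes":   (0.81, 0.72, 0.91),
    "hypertensive disorders": (0.84, 0.76, 0.94),
    "preterm delivery":       (0.68, 0.54, 0.85),
    "cesarean delivery":      (0.89, 0.87, 0.97),
}
for name, (or_pt, lo, hi) in outcomes.items():
    print(f"{name:>22}: {1 - hi:.0%} to {1 - lo:.0%} reduction "
          f"(point estimate {1 - or_pt:.0%})")
```

The lower bounds, not the point estimates, are what tell you how little the data rule out.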
Given that half or more of these effects may be negligibly small, it reflects statistical significance testing misuse (thresholding) to say all these associations are statistically significant, and thus to conclude (as the authors do) that GLP-1 agonists “may be a powerful tool to improve perinatal outcomes in high-risk populations.”
If we really care about improving these outcomes, we should throw everything we have at them. Instead of touting possible GLP-1 agonist preconception benefits without having long-term data to assess possible risks, we should design randomized trials that help women improve their metabolic health using well-established dietary interventions like the MODIMED (modified Mediterranean) diet as well as exercise, glucose control medications and/or supplements (there are some indications that inositol may be safer than metformin on the testosterone-lowering score), and GLP-1 agonists. If all these things have an effect size of a few percentage points on outcomes we want to improve, then why not assume the effects may add up for people who are pretty far down on the continuum of bad blood sugar control/metabolic illness? Why not target really unhealthy populations with really bad healthcare access, with a combination of the best available interventions including really good healthcare?
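For intuition on what “adding up” could look like: if interventions act through partly independent mechanisms, relative risks compound multiplicatively. A toy sketch; every number here is invented, and overlapping mechanisms (everything pulling on the same blood sugar lever) would make the real combined effect smaller.

```python
# Toy model: independent interventions compound multiplicatively.
# Every number here is invented for illustration.
baseline_risk = 0.10  # hypothetical baseline preterm birth risk
relative_risks = {"diet": 0.95, "exercise": 0.95, "glucose control": 0.90}

combined = 1.0
for rr in relative_risks.values():
    combined *= rr  # assumes no mechanistic overlap, the optimistic case

print(f"combined RR: {combined:.2f}")  # 0.81
print(f"risk: {baseline_risk:.1%} -> {baseline_risk * combined:.1%}")  # 10.0% -> 8.1%
```

Whether the real world multiplies like this, or the interventions all tap the same mechanism and stall out, is exactly the empirical question a combination trial could answer.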
It’s fine if you don’t think the state should be spending money on this; that’s a value judgment. But it’s not fine for scientists to turn away from the highly racially and socioeconomically stratified nature of maternal and fetal health outcomes and act like we can pay pharmaceutical companies to solve the problem, one needle at a time. In addition to being weirdly blind to the bigger sociopolitical picture, these effect sizes are just too small for that to work. And if society cares about moms and babies, then we should really be asking ourselves if we want to invoke the precautionary principle when there’s a lot of money to be made (e.g., selling GLP-1 agonists to reproductive-age women or at least to their insurance companies), but a lot of kids could get hurt (e.g., experiencing as-yet-unknown harms of preconception/prenatal drug exposure).
So why not run a preconception MODIMED diet randomized trial before we run a preconception GLP-1 agonist one? Because you think people won’t do the diet/lifestyle intervention? Which people? Hard to imagine the line of argumentation against this that doesn’t invoke negative racial stereotypes, when we think about which people in the U.S. context are most affected by diabesity and related adverse obstetric health outcomes.
Another option would be to compare MODIMED versus GLP-1. But I think we do want to see what happens to the effect sizes when we just throw everything we’ve got at the problem. Do they stall out because the mechanism(s) they affect only go so far — e.g., you only want to lower blood sugar so much? Or is there a continuum of metabolic health that we can push people farther along with a combination treatment? And, if so, why is that treatment not standard of care?
The easy answer is that it wouldn’t fulfill doctors’ goals (getting a lot of patients in and out quickly), doctors’ offices’ goals (coding stuff to bill insurance: clear binary diagnoses, easy prescription treatments)… And patients’ goals to have some easy solutions to hard problems. A preconception GLP-1 agonist randomized trial would serve those goals. Designing research to serve alternative goals requires first asking what goals this type of research should serve.
If we really want to lower preterm birth risk to protect babies, then maybe the research should include interventions with different posited causal mechanisms to maximize the odds that the effect sizes do add up, instead of one intervention largely already tapping out a shared metabolic health mechanism (e.g., lowering blood sugar along a continuum). Those interventions might include education about the seasonality finding (see above), if we think it’s real and has a different causal mechanism (like seasonal variation in maternal nutritional status) — as well as about the possible heightened preterm birth risk associated with abortion (e.g., Kc et al. find 4-36% higher risk — which is next to nothing on the lower bound, moderate on the upper, not an outlier in the literature, and much larger than the other effects on display here).
If we really want to lower preterm birth risk to improve infant survival and children’s health, we might want to ask broadly what achieves that and try it all in a package deal, like Australia did recently with the Safer Baby Bundle in an effort to lower stillbirth risk. (Speaking of which, the Stillbirth Advocacy Working Group of the USA published a Jan. 2025 Am J Obstet Gynecol letter to the editor arguing it’s past time to implement such a bundle in the U.S.)
It just happens to be politically inconvenient to address possible risks of abortion to future pregnancies, as part of that bigger picture. That’s one small facet of abortion providers’ and proponents’ much broader failure to accurately inform women about possible associated risks.
The point is that perverse incentives — ones that don’t make anyone evil, just normal, like everyone being busy all the time, companies trying to make money, and (normal) people avoiding uncomfortable topics including uncertainty about possible risks — shouldn’t structure what research gets done. But researchers have to actively think about how these incentives structure the existing literature, to do better research building on it. Research that doesn’t take these perverse incentives and biases as part of science in a telephone game and pass them on. (Not that we can entirely help it.)
Media coverage of preterm birth risk, and of what might make sense to mitigate it, faces similar problems. Whereas one might expect to see coverage addressing the possibility that abortion bans could lower preterm birth risk, for instance, one sees the opposite claim instead (e.g., “The U.S. has a high rate of preterm births, and abortion bans could make that worse,” NPR, March 2023). The medical literature and underlying causal logics point the other way; but it would undermine the preferred narrative of powerful sociopolitical networks to say so.
The upshot? Maybe what pro-choice advocates see as regress (more restrictive abortion regimes) is actually in some ways also progress in health.
***
More evidence on preterm birth risk and RSV vaccination
There continues to be a preponderance of evidence that RSV vaccination in pregnancy increases preterm birth risk (“Respiratory Syncytial Virus Vaccination Is Associated With Increased Odds of Preterm Birth,” Ilari Kuitunen and Marjut Haapanen, Acta Paediatr, Jan. 2025). It makes no sense to take this risk, given that there exists an alternative — newborn monoclonal antibody treatment — that looks possibly more effective as well as safer. Nevertheless, ACOG still recommends the RSV vaccination for pregnant women.
Coincidentally (or not), this looks like another fully preventable contributing cause of preterm birth risk. And not just any old cause, but one that, like the possible preterm birth risk increase associated with abortion, is distinctly iatrogenic (caused by medical treatment). But if doctors have perverse incentives not to see or communicate about those risks (because, e.g., they may profit from the interventions causing them, or have social and professional, ideological and identity affiliations with people who have those sorts of vested interests), then whose job is it to tell patients?
It’s not a rhetorical question. I keep wondering if we need a medical information service that doesn’t profit from medical care or take policy positions. Not to overcome the neutrality problem (that’s impossible). But to come closer to addressing it effectively than the current lack of such a service does. It can’t be a governmental service, either; I suppose it can’t pay people at all. So we’re back to the rogue methodologist ninja helpline.
***
Insulin trumps metformin for gestational diabetes — as long as you don’t fully interpret the compatibility intervals or ask the patients
Results from a recent randomized trial in the Netherlands are being trumpeted as showing insulin remains the best option for pregnant women managing gestational diabetes (“Oral Glucose-Lowering Agents vs Insulin for Gestational Diabetes: A Randomized Clinical Trial,” Rademaker et al., JAMA, Feb. 11, 2025;333(6):470-478). There are a few problems with this interpretation. First, it misinterprets its evidence in the usual way, presenting uncertainty about which treatment is better for lowering the risk of LGA (large for gestational age) infants as certainty. Second, it doesn’t account for patient preferences.
Here are the stats:
With oral agents, 23.9% of infants (n = 97) were large for gestational age vs 19.9% (n = 79) with insulin (absolute risk difference, 4.0%; 95% CI, -1.7% to 9.8%; P = .09 for noninferiority), with the confidence interval of the risk difference exceeding the absolute noninferiority margin of 8%. Maternal hypoglycemia was reported in 20.9% with oral glucose-lowering agents and 10.9% with insulin (absolute risk difference, 10.0%; 95% CI, 3.7%-21.2%).
In other words, oral agents may have reduced LGA risk by up to 1.7% or increased it by up to 9.8%. We don’t know from this data which way the sign on this effect goes, or whether there is one at all. Either way, it’s a fairly small possible effect. But the upper bound starts to look like it could be clinically important.
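Here’s the noninferiority logic reconstructed in a few lines. The group sizes are back-calculated from the reported counts and percentages (so approximate), and this uses a plain Wald interval rather than whatever method the trial prespecified.

```python
import math

# Group sizes back-calculated from the reported counts and percentages (approximate).
lga_oral, n_oral = 97, round(97 / 0.239)     # ~406 in the oral-agents arm
lga_ins, n_ins = 79, round(79 / 0.199)       # ~397 in the insulin arm

p_oral, p_ins = lga_oral / n_oral, lga_ins / n_ins
diff = p_oral - p_ins
se = math.sqrt(p_oral * (1 - p_oral) / n_oral + p_ins * (1 - p_ins) / n_ins)
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # Wald 95% CI

margin = 0.08  # prespecified absolute noninferiority margin
print(f"risk difference: {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
print(f"noninferiority shown? {hi <= margin}")  # upper bound exceeds 8%
```

This reproduces the published −1.7% to 9.8% almost exactly. The upper bound crossing the 8% margin means noninferiority was not shown. It does not mean oral agents were shown to be worse; the same interval is compatible with oral agents being slightly better.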
The Comment headline on this should read something like “More Research Really Is Needed This Time.” Not “For Gestational Diabetes Pharmacotherapy, Insulin Reigns Supreme” (as it actually does read).
Similarly, for maternal hypoglycemia, the evidence suggests a possible 3.7-21.2% risk increase from oral agents compared to insulin. On one hand, the fact of an effect and its sign look comparatively certain here. On the other hand, the roughly 4% lower bound is a pretty small effect. What if patients would rather risk it than inject insulin and conduct closer blood sugar monitoring, the apparent alternative to popping some metformin and calling it a day? What if the only practical effect of the phenomenon is that women feel light-headed one day and take less medicine (adjusting their dose) the next?
That human element is oddly missing in this story, too. Quality of life matters, what patients would choose given something resembling full and accurate information for informed consent matters, and misinterpreting the evidence to spin this trial’s results as showing a clear “winner” — when they don’t — undercuts the valuation of those central, human concerns.
Medicine is not top-down. Researchers do not get to decide what patients want, and then spin the (ambiguous) evidence to sell it to them, just to get a publication line on their CVs. Unless they do.
***
*Uncertainty aversion in randomized trial results reporting on growth hormone in IVF
“Empirical use of growth hormone in IVF is useless: the largest randomized controlled trial,” report Mourad et al. (Hum Reprod, Jan. 2025, 40(1):77-84). But the reported evidence doesn’t actually establish that claim. The researchers* misinterpret ambiguous results as certain. [*An earlier version of this sentence said they “engage in the usual statistical significance testing misuse,” with a matching subheading. That was wrong. Sorry. - VW] Ambiguous results are less likely to get published; perhaps responding to that perverse incentive, perhaps just ignorant of now-prevailing statistical norms, the authors present theirs as certain.
In a phase III RCT with 288 women undergoing IVF in Montreal between 2014 and 2020, women randomly assigned to receive growth hormone (GH) from Day 1 of ovarian stimulation until oocyte retrieval may have benefited or may have been harmed by the treatment. Both are possible. GH treatment was associated with anywhere from a 22% decrease to a 9% increase in clinical pregnancy rate after fresh embryo transfer. If you’re desperate for a baby, both of those effect sizes are practically important.
The available evidence doesn’t establish an effect for certain either way. Can you imagine the crushing disappointment for the researchers who ran a large phase III trial and then couldn’t answer their research question? Great, now imagine how the women and their families (or lack thereof) feel. Dashed research hypotheses or no, please science better and tell these people the truth.
Similarly, after frozen embryo transfer, clinical pregnancy rates may have decreased up to 28% or increased up to 10%. Both effects are big enough that we would arguably care about them clinically. We just don’t know which effect sign is right.
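To get a feel for why 288 women can leave the question open, here’s a rough power sketch. The 35% baseline clinical pregnancy rate is my assumption for illustration, not the paper’s figure.

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p_control, p_treated, n_per_arm):
    """Approximate power of a two-sided 0.05-level two-proportion test."""
    se = math.sqrt(p_control * (1 - p_control) / n_per_arm
                   + p_treated * (1 - p_treated) / n_per_arm)
    return norm_cdf(abs(p_treated - p_control) / se - 1.96)

# ~144 per arm in a 288-woman trial; the 35% baseline rate is an assumption.
for bump in (0.05, 0.10, 0.15):
    power = power_two_proportions(0.35, 0.35 + bump, 144)
    print(f"power to detect a +{bump:.0%} absolute difference: ~{power:.0%}")
```

Under these assumptions, even a 10-point absolute swing in pregnancy rates gets detected less than half the time. “Largest trial” and “adequately powered” are not the same thing.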
I love this example, even though it is just another case of uncertainty aversion — a dime a dozen in recent medical and scientific literatures. Because there is perpetual hype around embryo screening techniques that are far more technologically complex and involve much more uncertainty in outcomes than just whether or not a baby gets made — which is a great binary variable to measure unambiguously if I ever saw one (as opposed to most outcome variables that get dichotomized). This example illustrates how hard it is to actually know what works (or backfires) even with a well-designed randomized trial with a (sort-of) large number of participants and a defensibly binary outcome variable.
Knowing stuff is hard. Getting inaccurate interpretation into the scientific record is (apparently) easy. Ain’t nobody got time to try to get it out most of the time (and usually such efforts fail anyway). Sometimes I am not sure why I bother doing this. I just find it so fascinating, am curious what the latest data really suggest, and wish that people had a way to find out the truth more easily for themselves. It also really matters for me and my family and friends what the medical and scientific literature actually shows. So someone has to read it.
***
Lactation consultant interventions may have little or no effect on breastfeeding, but statistical significance testing misuse leads researchers to spin them as effective
In a recent systematic review and meta-analysis, D’Hollander et al. conclude that evidence from 40 randomized trials for lactation consultant interventions (LCIs) shows they “are an effective strategy for improving exclusive breastfeeding and any breastfeeding” (“Breastfeeding Support Provided by Lactation Consultants: A Systematic Review and Meta-Analysis,” JAMA Pediatr, March 2025, h/t José Díaz Rosselló). But that is not what their evidence actually suggests.
As usual in this literature, the authors fail to account for social desirability bias; women may overreport breastfeeding relative to what they’re actually doing. Especially considering that possibility, some of the effect sizes measured are basically nothing. For instance, they find LCIs decrease the risk of stopping exclusive breastfeeding by 1-6% (95% CI .94-.99) and reduce the risk of stopping any breastfeeding (i.e., exclusive or mixed with formula-feeding) by 4-13% (95% CI .87-.96). The lower bounds of both these compatibility intervals are basically 1, meaning no effect. Similarly, they find LCIs extend breastfeeding duration by anywhere from about a day to 7.1 weeks (95% CI .13-7.12 weeks). Again, the lower bound here is basically nothing.
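How much could overreporting alone manufacture? A toy simulation, with all parameters invented: both arms breastfeed at identical true rates, but intervention-arm women (who built rapport with a consultant) overreport a bit more often.

```python
import random

random.seed(0)

def observed_rate(n, true_rate, overreport_prob):
    """Fraction reporting breastfeeding, when some non-breastfeeders overreport."""
    reported = 0
    for _ in range(n):
        truly_bf = random.random() < true_rate
        reported += truly_bf or (random.random() < overreport_prob)
    return reported / n

# Identical true rates; only the overreporting probability differs (invented numbers).
n = 5000
control = observed_rate(n, true_rate=0.50, overreport_prob=0.02)
treated = observed_rate(n, true_rate=0.50, overreport_prob=0.06)

rr_stopping = (1 - treated) / (1 - control)  # risk ratio of "stopped breastfeeding"
print(f"observed RR of stopping: {rr_stopping:.2f}  (true effect: none)")
```

With invented but modest overreporting, you land inside the review’s reported .94-.99 interval from measurement bias alone. That doesn’t prove the reported effect is bias; it shows effects this close to 1 can’t carry an “effective strategy” conclusion.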
The argument for efficacy is even worse when we consider their results on exclusive breastfeeding duration: LCIs may decrease it by up to 2.7 weeks or increase it by up to 5.6 weeks (95% CI -2.73 to 5.60 weeks). So the effect could go either way, if there is one at all. Similarly, LCIs may decrease maternal breastfeeding self-efficacy or increase it (95% CI -1.23 to 6.90). There may be an effect either way or no effect, and it’s not clear what this particular finding means in comprehensible units in real life (much less why we might care).
Finally, it’s weird that the researchers evaluated the impact of LCIs on infant growth only in terms of infant overweight/obesity risk and not in terms of underweight/failure to thrive/related complications risks. There is growing recognition of common and preventable harm to newborns from current exclusive breastfeeding guidance and (mis)education — a recognition that is, sadly, largely absent from breastfeeding medicine and scholarly discourse. Maybe the authors inherited this ideological blindness from the studies in the literature they were analyzing? Then they should say so.
Anyway, the evidence here is again uncertain and does not support the authors’ conclusion that LCIs are proven effective, on any sensible meaning of that word. Their findings suggest LCIs may decrease infant overweight/obesity risk by up to 6%, or increase it by up to 146%. So there may be no effect, or there may be a quite substantial effect in a direction that suggests iatrogenesis from these interventions. Not efficacy.
This parallels the confirmation bias and (supporting) statistical significance testing misuse in Lenells et al’s recent Cochrane review on psychosocial breastfeeding support interventions and postpartum depression. It looks like both sets of authors agreed with the consensus (“breast is best”; so supporting breastfeeding has got to be good for moms and babies). Those priors in turn blinded them to what their own reported evidence actually suggests. This is exactly what we expect from spin science.
***
Common methods mistakes coda
If there’s one thing this roundup shows, it’s that effect sizes matter more than p-values, and researchers, journalists, and the rest of us would do well to keep asking: what does the evidence really establish? What does it leave out? And why do we care?
It’s always the same:
researchers want to make a name
making black-or-white statements, when the evidence is murky gray.
I’ll find another dozen of these tomorrow after posting today.
The record will (all wrong) stay.
All I wanted was to know what was newsy.
But all I can see is what people do screwy.
***
I remain a bit behind on the literature since having a baby in January. This post only makes a small dent in my backlog of contraception, pregnancy, and neonatology medical digests. I look forward to catching up more another time. In the meantime, feel free to send me medical, methods, or other interesting news I may have missed.