Book Review: We Never Make Mistakes
Alexander Solzhenitsyn and Contemporary Western Mass Surveillance
Around this time, five years ago, it set in that our good friend Arjen Kamphuis was probably not coming home. After volunteering with WikiLeaks, co-authoring the crypto how-to book Information Security for Investigative Journalists, training numerous activists, journalists, and researchers worldwide in infosec, and founding a company with NSA mass surveillance whistleblowers Bill Binney and Kirk Wiebe that he may have considered a means of getting intelligence community whistleblowers out of the U.S. (a pursuit perhaps best left to performance artists), Arjen told friends he was going to take a break in Svalbard. He went instead to Bodø, Norway, where his kayak — but not his body — was recovered from a fjord.
We retraced his steps and spoke with police, who eventually decided that he had probably died in the icy water after improperly constructing his vessel. Having repeatedly examined his hand where a circular saw had permanently shortened one claw-like finger during what he described as an enthusiastic hard drive destruction session, I judged this to be a plausible story — though not a foregone conclusion. (One doesn’t ask how the police investigation’s Bayesian updating was done in light of his friend Julian’s ongoing predicament.) Arjen’s opsec was certainly better than his handiwork — on which count, he scored at least one better than me. He had new molars and warm hands. He loved meat and kindness. He took these pictures.
Like a lot of my tribe, it seems, Arjen was an infrastructure nerd. Infrastructure is game. If you can’t get people food, water, medicine, movement, and information, you’re not in charge. And if, by chance, you don’t care about power but about people, your people aren’t living well; and some of them aren’t living at all.
But infrastructure isn’t just rails and Tails and fiberoptic trails. Infrastructure is culture — people helping other people — too. Or not helping other people, or actively hurting other people, or accidentally hurting themselves and others, as the case may be. Legal and ethical regimes, formal and informal, are part of essential societal infrastructure. Myths, ideologies, and even the most rabid Positivism are all socially constructed systems of making sense of the world by putting facts and theories together; infrastructures of sense.
I’m reading the best book I’ve encountered on such infrastructure, Paul Feyerabend’s Against Method; but it will take me a while to finish it, and that’s not the subject of this post. I’m also thinking a lot lately about publication bias in the misinformation literature, which is partly a story about how we don’t see huge swaths of what the most powerful institutional actors — states, militaries, and corporations — learn about dark patterns, disinfo ops, corrosive psychology (Zersetzung), and the like. Call it dark knowledge.
Both journeys-in-progress contribute flavoring to this soup: Feyerabend, from what I’ve read so far and first encountered through the lens of Greenland, says we have to do what works in context. Rote checklists don’t make quality science; going through critical thinking processes including counterinduction do that. Having rebuked large swaths of scientific practice as secular religion with no more empirically sound basis than any other mythology, Feyerabend might (I worry) have criticized the simple mathematical argument against mass screenings for low-prevalence problems (or any other intervention) as rationalist. Dark knowledge, too, could fit neatly in a complementary rationalist paradigm; but it’s not published in the scientific literature, having often been gained through illegal and/or unethical research and for evil purposes (e.g., causing people to do things that are not in their interests, be it buying cigarettes or discrediting their own political causes — and calling it free will).
Art has always been one answer to both the rationalist pressure-cooker of how many moderns (and perhaps most institutionally wedded ones) misunderstand science, and the problem of political repression of civil liberties including free speech. Feyerabend offers a different approach to method, so that this dialectic is no longer necessary — creative, critical thinking being reintegrated as part of science itself. In this reintegration, he preserves a possible place for myth and nonsense in that creative process. He notes this feature is, in fact, often shared across canonical texts of religion and science (both mythologies) — with both the Bible and Sir Francis Bacon’s Novum Organum progressing from renunciation (of priors) to revelation (of truth; Feyerabend, AM, footnote 11, p. 25). So we can look at fiction like Solzhenitsyn’s as a potentially knowledge-making myth like any other. In this approach, the playing field is leveled for evidence from all realms — if we know how to use it.
Finding a counterinductive mooring from outside the rationalist discourses of science and science communication is arguably important to do when we think about misinformation, in part because we may not get better sources than fiction about dark knowledge. Even when they’re available, literal texts of atrocities like Charles Reznikoff’s prose poems of the Holocaust and of late 19th- and early 20th-century U.S. crime victims’ accounts can overwhelm with their brutality, when the point of learning about inhumanity is arguably to feel more. We need these testimonies, of course, to hope to avoid repeating the past. But we also need help making sense of the facts with theories, be they scientific, religious, or anything else. Solzhenitsyn’s fictionalized accounts of Soviet inhumanity offer one such mooring.
Outline
This post proceeds as follows. First, I sketch a brief history of contemporary U.S. mass telecommunications surveillance; this history suggests that, if such infrastructure exists, it will likely be abused. Then, I suggest that a purely statistical critique of mass surveillance (such as the one put forward by Fienberg that I have been extending for years) is necessary but not sufficient to stop the imminent threat of a next-level Western mass surveillance infrastructure. Next, I look to Solzhenitsyn to add a counterinductive mooring for a potentially effective critique. Finally, I briefly note this critique extends to science itself, as some contemporaries have been busy doing. The implications of this critical-reflective science for rationalist critiques of mass surveillance are both deferential and sobering.
The point is to have this conversation, even though the topic is sensitive and there are good reasons for avoiding sensitive topics. But the current debate around the possibility of the Western world (re)instituting a mass surveillance infrastructure (under the auspices of the UK Online Safety Bill, EU Chat Control, and U.S. EARN IT Act) is first insufficiently scientific. The math of the matter is clear: These programs backfire according to probability theory (more on this later). And it is thus also insufficiently humanitarian. The mythology of Soviet social pathology functions here as a distilled historiography of totalitarianism — a counterpoint to those who would denigrate humanitarian concerns — in the Western imagination. Here is the Other we don’t want to be, but are in danger of becoming — in government and in science alike.
Debunking the Myth of No U.S. Mass Surveillance, Domestic Spying, or Abusive Defense Bloat Thereof
The dominant Western narrative about mass surveillance is simultaneously that we don’t do it (that would be evil); but that if it were good for security, we would (that would be sensible). Both parts of this argument are based on false premises. The first is factually inaccurate, as briefly established below. The second assumes that we would know the effect of the intervention (mass surveillance) on the outcome of interest (security) before implementing it — a basic scientific evidentiary bar to which a very small minority of public policies are actually held, most of them outside security contexts. It would be a very good idea to hold mass interventions to scientific evidentiary standards before implementing them; that is unfortunately not how the world works (different topic).
Here is a brief, incomplete history of contemporary U.S. mass telecommunications surveillance. It is U.S.-centric because the world is dominated by American military power, and because I’m an American who used to study this sort of thing in America. It’s my understanding that contemporary Western mass surveillance followed related and similar trajectories, conditioned by U.S. dominance, in the other Five Eyes countries, Germany (known as the sixth eye), and France — where encryption is already illegal.
1940: The Army asked the president of the telegraph company RCA whether a nice lieutenant could be assigned to the company. The lieutenant proceeded to photograph all international cable traffic. No warrants, no courts, no problem.
1945+: Continuity gave way to expansion. World War II ended, but U.S. mass telecom surveillance continued under the auspices of Projects SHAMROCK and MINARET. Shamrock continued targeting all international telegraphic data (aka bulk collection, or dragnet/mass surveillance). The NSA kept intercepting messages and passing them on to other agencies (e.g., FBI, CIA, DOD) as appropriate…
1950s: New Shamrock expanded telegram tapping to target foreign embassies.
1960s: Social change movements caused cultural changes that may have made it more difficult for telecommunications company employees to keep quietly helping the government conduct warrantless mass surveillance and domestic spying with no judicial or legislative oversight, and for no readily apparent purpose other than the surely always legitimate exercise of power.
1967: Project Minaret spied on selected U.S. citizens’ electronic communications (targeted surveillance, especially of activists, in an effort to prove they were Commies acting under foreign influence). The film “The President’s Analyst” satirized state-telecom corruption and privacy violations.
1968: MLK, Jr., a target of Minaret and more extensive FBI surveillance, was assassinated.
1970s: Oh no, somebody talked to the feds! J/k, the feds were the ones breaking the law already. So somebody talked to the media, which triggered Senate investigations and more investigative media reporting on intelligence community abuses including domestic spying. Eventually this culminated in…
1975: The Church Committee investigating domestic surveillance of Americans including civil rights activists. On one hand, it wasn’t until the 1980s that public trust in the U.S. government had taken such a bad turn that most Americans thought their government was involved in the 1963 JFK assassination; at the time, most believed the official narrative instead (or at least they reported that they did). So we have to be careful to not retrospectively read too much cynicism or mistrust into the public reception of Church and related hearings. On the other hand, there was already some public evidentiary basis for mistrusting state narratives about police conduct toward dissidents, and widespread suspicion of state-telecom lawbreaking (see, e.g., “The President’s Analyst”). So in 1975, it may or may not have come as a surprise to a lot of Americans that some form of due process-free mass surveillance was in force (Shamrock), and security agencies were targeting domestic activists without warrants or court oversight, too (Minaret).
What was almost certainly shocking to the wider public at the time, however, was the infiltration of political groups to sow mistrust (COINTELPRO), mainstream media cooperating to disseminate CIA propaganda to the American public (MOCKINGBIRD), the CIA plotting to assassinate foreign leaders (Family Jewels), and the CIA’s drugging and torturing of unsuspecting Americans in mind control experiments (MKULTRA). Together, these violations spelled totalitarianism. A state that can read every message, infiltrate every association, control the news, and even change your mind without your knowledge or consent — this is the opposite of a state vested by the people with limited powers for a particular purpose, the liberal conception of legitimate government theorized by John Locke and enshrined in the U.S.’s founding documents. These are, to repurpose Senator McCarthy’s language, un-American activities.
Worse, it’s hard to see how to constrain a state that does this. If you don’t like it, maybe they can (mysteriously) change your mind.
As Senator Frank Church famously warned Americans on NBC:
In the need to develop a capacity to know what potential enemies are doing, the United States government has perfected a technological capability that enables us to monitor the messages that go through the air…We have a very extensive capability of intercepting messages wherever they may be in the airwaves. Now, that is necessary and important to the United States as we look abroad at enemies or potential enemies. We must know, at the same time, that capability at any time could be turned around on the American people, and no American would have any privacy left such is the capability to monitor everything—telephone conversations, telegrams, it doesn't matter. There would be no place to hide.
If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know. Such is the capability of this technology.
Now why is this investigation important? I'll tell you why: because I don't want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision so that we never cross over that abyss. That is the abyss from which there is no return.
This speech really showed the intelligence community who was boss, and they never did any of it again.
Unless nothing changed at all, because the state had already crossed the Rubicon. Infrastructure is game. The infrastructure of contemporary U.S. mass surveillance was in place by the late 20th century, and technological capabilities to conduct mass telecom surveillance (and target dissidents accordingly) did nothing but grow. So why should we expect that the practice did anything but grow? Where’s the enforcement teeth of legal and ethical norms constraining totalitarian surveillance state powers? Where’s the proof the constraint worked?
An Alternate Narrative Takes Shape
There is, of course, an alternate narrative to the usual story that “we’re not evil; when there were abuses, we had hearings.” A witness during the German Bundestag’s NSAUA — the only parliamentary investigative committee globally to investigate post-9/11 illegal U.S. global mass and targeted surveillance — said that it was common knowledge in the intelligence community that the U.S. and UK had run domestic mass surveillance programs continuously in recent history. This alternate account, which is also widely believed in the tech community, assumes that — as long as it’s physically possible to cheaply and somewhat easily collect massive amounts of data, like by using fiber-optic cable splitters to copy all telecoms in a particular network — intelligence agencies have always done it, and are always going to do it. You can say “power corrupts.” You can say “Everyone else is doing it, so we have to.”
Agnosticism works. It doesn’t matter why security agencies will probably conduct politically actionable telecom surveillance whenever they can — evil, or the perceived game theoretically rational pursuit of at least information symmetry, and preferably asymmetry. It matters that there is widespread agreement among intelligence and tech industry professionals that this is reality. It’s just not polite to say so in public.
When it was said in public for a change, during the Church Committee hearing at least, the legislature took notice. By the time of the NSAUA, in contrast, it had already supposedly been made clear to the Germans that access to invaluable U.S. NRO satellite intelligence was predicated on client-state relations, and specifically that German sovereignty ended where Edward Snowden’s body began. But in America in the late 20th century, the conceit remained one of democratic accountability of limited state powers delegated from the public for specific purposes.
1978: Congress passed the FISA (the Foreign Intelligence Surveillance Act), telling the NSA it had to bother with warrants and judicial review.
1980: The intelligence community made up its own sigint (signals intelligence) rules in the form of United States Signals Intelligence Directive 18 (USSID 18), then interpreted them as giving the executive branch unitary authority for warrantless surveillance — the exact abuse that FISA had rebuked. Long live FISA; FISA has been dead for 43 years.
1980s-1990s: Radio silence for telecom whistleblowers. Maybe there’s history I’m missing here (fill me in). Maybe neoliberalism was so ideologically successful that there were no telecom whistleblowers for a few decades. Or maybe people tried to blow the whistle on ongoing abuses, and got blown off — like the WaPo, NYT, and Politico initially blew off WikiLeaks whistleblower Chelsea Manning. Maybe they even disappeared trying. In any case, it’s not a big leap from the well-documented history of domestic U.S. intelligence agency information operations to recognize the possibility of informal censorship or more at work in this apparent long stretch of radio silence for telecom whistleblowers in the face of possible illegal mass surveillance.
2001+: Coordinated domestic terror strikes on September 11 killed thousands. Congress responded with the USA Patriot Act authorizing more surveillance without dismantling Constitutional protections; security agencies proceeded to largely ignore all existing investigative restrictions on surveillance.
The reference here is Athan Theoharis’s “Expanding U.S. Surveillance Powers: The Costs of Secrecy,” Journal of Policy History, Vol. 28, No. 3, 2016, pp. 515-534. Theoharis was a Marquette University American history professor, Freedom of Information Act research pioneer, and Church Committee research assistant. This article hits a lot of well-sourced highlights of the general police abuse playbook in the post-9/11 U.S. context: police don’t always keep records good enough to make effective oversight possible (p. 517, pp. 524-7, p. 531); many subjects of investigations were set up in the first place (pp. 517-8) — or weren’t doing anything wrong other than dissenting (p. 523, p. 530); private companies often illegally help governments get data in times of perceived threat (p. 518); systematic review and independent audit-style inquiries tend to question the evidentiary value of mass screenings for low-prevalence problems, of which mass surveillance is one subset (pp. 519-520); the executive branch often asks for and gets discretionary power to break the law in the interests of national security, despite an extensive historical record suggesting this will generate political abuses (pp. 521-530). The fact that critical treatments such as Theoharis’s focus on post-Church FBI and NSA abuses, while largely leaving aside related CIA abuses (cf. Church), is consistent with possible selection biases including publication bias against dark knowledge.
2002: NSA veterans Bill Binney, Kirk Wiebe, and Edward Loomis blew the whistle on mass communications and Internet surveillance pork program Trailblazer (now Turbulence, a possible reference to precursor program “Cosmic Fart” — I mean, StellarWind — and its public pushback). Binney et al. preferred their own surveillance program, ThinThread: it targeted fewer people — and better protected Americans.
2005: Widespread recognition crystallized among politically aware, Western technologists (e.g., the CCC) that “we lost the [crypto] war.”
2013: CIA and NSA veteran Edward Snowden blew the whistle on bigger, badder post-9/11 iterations of U.S. mass telecom surveillance.
2013-2023: Major infrastructural reforms with major practical implications ensued. For example, it’s no longer trivially easy for wayward security forces to redirect activists’ phones when they’re trying to call 911 or a lawyer, because cell phones no longer rely on in-the-clear telecommunications networks for pretty much everything. The NSA ended its bulk Internet metadata collection program. A lot of people switched to end-to-end encrypted messaging apps like Signal and WhatsApp for private communications. And much more.
In practice, post-Snowden reforms mean it’s gotten harder for law enforcement to disable dissidents’ communications on a whim. This happened mostly because corporations decided the state had gone too far, worried about public disapproval of their own lengthy collaboration with obviously illegal surveillance (for which they were swiftly granted immunity that wasn’t technically legally valid, but that couldn’t be effectively contested), and institutionalized information security improvements like mass encryption of digital communications in transit in order to make it a lot harder for bulk data collection to work. If you have to try to crack codes on everybody’s grocery list, surveillance costs more for less.
At the same time, companies (and probably governments) developed more and better technologies for the same old police purposes. Some of them, like the spyware Pegasus, were used to surveil a lot of activists, journalists, and researchers working on things states didn’t like. Encryption doesn’t help if your endpoints (phones, computers) are compromised, and states may always be able to compromise a lot of dissidents’ devices with spyware like this. So Snowden turned up the volume on the public conversation about mass surveillance by directly substantiating common intelligence and tech community knowledge in the press; but this information availability arguably didn’t change the infrastructural game that is characterized by power asymmetries that it is very hard for dissenters to meaningfully resist.
Early 2020s: Apparently someone at Apple forgot the implications of probability theory that doom mass surveillance for low-prevalence problems under conditions of inferential uncertainty to backfire. Maybe it was an intelligence community operative. Maybe it was someone who fell asleep in stats class.
In any event, they developed a tool to mass screen digital media for kiddie porn. Then everyone decided we shouldn’t call it kiddie porn anymore, because that term is politically incorrect. Fine. It’s bad, and everyone falls all over themselves to observe the social norm of hating it more than the next person ever could. This makes it the perfect way to (re)introduce more and better mass surveillance infrastructure. Once this infrastructure is in place, history suggests it’s likely to be abused for other ends, particularly to target political dissidents (see above).
On one hand, of course industry and governmental forces pushed for the surveillance pendulum to swing back immediately, and have kept pushing ever since. On the other hand, maybe this wasn’t a pushback in a pendulum swing at all. Maybe the history of contemporary Western mass surveillance is overwhelmingly one of continuity of state power and progressive infrastructural advances bolstering it. Europol has already said they want to use the proposed new mass surveillance infrastructure to roll out more law enforcement access to data (h/t Elina Eickstädt of the CCC). As Giacomo Zandonini, Apostolis Fotiadis and Luděk Stavinoha wrote recently for Balkan Insight:
Europol officials floated the idea of using the proposed EU Centre to scan for more than just CSAM [child sexual abuse material], telling the Commission, “There are other crime areas that would benefit from detection”. According to the minutes, a Commission official “signalled [sic] understanding for the additional wishes” but “flagged the need to be realistic in terms of what could be expected, given the many sensitivities around the proposal.”
In developing this tool, it’s possible that Apple has already crossed the Rubicon. Any day, the NSA could hand a letter to an Apple executive saying: You made the tech, the government requires you to use it, and you’re not allowed to tell anyone (like with national security letters). There are various mechanisms in place to help tech companies and technologists warn people if something like this happens (like warrant canaries). But we don’t know what we don’t know about their failures. Selection bias against dark knowledge strikes again.
So free societies need to bolster telecom whistleblower support to address this possible threat. There is no good public way of discussing this in practical terms, since any known whistleblower support infrastructure would swiftly become a surveillance target. Maybe there is also no systematic way of addressing the need. It may be that resisting illegitimate authority is like doing science according to Feyerabend: You try to use the best methods in a given context, and it can’t be standardized. But technologists widely agree that sustaining end-to-end encryption, currently under threat worldwide, is one key technological component of a solution to this multi-faceted problem.
Arjen was one of apparently few people working on this crucial social and political problem in his own ways. Infrastructure is game, and he was building it. And he was holding up part of what was already in place, which was not much. No one was jostling to take his place.
After his disappearance, Arjen was often described as typically wearing all black. This was neither relevant nor meaningful. Relevance: He wore all-light clothing outside in the sun to keep cool, and was an avid outdoorsman who disappeared outdoors. Meaning: He wore all-black clothing inside, in the course of regular life, when not on outdoor adventures — as many hackers do. The choice was so unremarkable in his social milieu that its repeated description had the appearance (to me) of an ignorant anthropologist’s description of a primitive tribe’s ceremonial garb that was not really ceremonial garb in what was not really a primitive tribe. And anyway, when he went missing, he would have been wearing an off-white (cream, khaki, taupe…)-colored button-down shirt, similarly light-colored pants, and similarly light-colored hat to stay cool even in the Nordic summer sun. Not that anyone expected to find him in those clothes by the time he was in the news for having gone missing.
My notes at the time contradict this statement (retrospective bias): I expected Arjen to walk out of the woods at any moment, shaking his head at us for being silly. And then, when there were no vegetarians around, ask for a steak.
Mass Surveillance Can’t Escape Math (But We Can’t Escape Qualitative Reality)
The simple statistical argument against mass surveillance is that the implications of probability theory may doom mass screenings for low-prevalence problems under conditions of persistent inferential uncertainty to fail due to the inescapable accuracy-error trade-off (see some of my previous writing on this here, here, and here). It’s great that applying Bayes’ theorem demonstrates this. Bayes’ theorem is a wonder of the world. I hope I can do it sufficient justice to help more people understand it someday.
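That argument can be made concrete with a back-of-the-envelope calculation. The numbers below are hypothetical (a one-in-100,000 base rate and an optimistically accurate screen, chosen only for illustration); the point is that Bayes’ theorem makes the conclusion robust to optimism about accuracy:

```python
# A minimal sketch of the base-rate argument against mass screening.
# All numbers are hypothetical, chosen only to illustrate Bayes' theorem.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(actual target | flagged), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 100,000 monitored accounts is a genuine target, and the
# screen is optimistically good: 99% sensitive, 0.1% false positive rate.
ppv = positive_predictive_value(1e-5, 0.99, 0.001)
print(f"P(target | flagged) = {ppv:.4f}")  # ~0.0098: over 99% of flags hit innocents
```

Because the innocent population is so much larger than the guilty one, even a tiny false positive rate generates a flood of false flags that swamps the true positives; improving the test's accuracy moves this number surprisingly little when prevalence stays low.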
But it’s not clear that explaining the math — even if it’s done really, really well — will work to stop the advancing transnational initiative to end Western end-to-end encryption and make a mass surveillance infrastructure that could be misused to destroy democracy (to the extent that it exists). The quantitative argument against mass surveillance is a rationalist one. I’m in the process of trying to make it as well as I possibly can, because I don’t see it being done enough, for some reason.
At the same time, it’s probabilistic and could be subject to endless debate about scope conditions: What is meant exactly by inferential uncertainty? How can we know that resources spent reducing it to nab some egregious baddies would be better spent following alternative strategies? How rare is rare? For which sub-types of child abuse can we estimate plausible base rates from which to estimate hypothetical accuracy-error spreads? Why can’t we implement a flagged content rating system to at least reliably classify and then investigate flagged cases that multiple reasonable people agree are certain? (The answer is: The burden of proof should be on proponents of new interventions like this to establish benefits outweigh harms under independent review using publicly available data. The political debate might remain endless nonetheless.)
One could also make a qualitative historical argument that, if this infrastructure exists, it will likely be abused — because that’s what has happened previously (see the brief history above). This is an equally rationalist argument. It could also be subject to endless debate about what uses of which police powers are appropriate, when, who decides, how, what safeguards against abuse are possible and advisable, etc.
We may need more than rationalist arguments from statistics and history against mass surveillance. Another plank of defense against totalitarianism may be a mythology of totalitarianism that shows instead of telling why living in a police state makes you feel unsafe, and so subverts safety, destroying the very fabric of the society it aims to protect. Solzhenitsyn’s stories from Soviet Russia provide that mythology in a form in which members of powerful Western (especially U.S.) social and political networks can probably recognize the cautionary tale. This is an enemy everyone abhors; the Other we could never become like — isn’t it?
Oblivion, “the abyss from which there is no return,” may require counterinduction to resist. When someone wants to drink himself to death because it kills the pain of imperfection (although it degrades body and mind — perversely magnifying imperfections), LSD may treat alcoholism more effectively than psychotherapy or other means, because it offers a different experience of reality. Similarly, when the prevailing ethos of professional security culture says that building the ultimate surveillance state mitigates the pain of risk (although it magnifies risks from within and from without — through false negatives and false positives), art may answer totalitarianism more effectively than rational argument. It, too, offers a different experience of reality.
The will to change often comes from feeling different, in order to be able to think outside the preceding system of thought — to think counterinductively. No one is going to stop anyone else from pursuing these sorts of ends (personal or political oblivion) either through force or through logic. People have to want to change. So we need something beyond logic here, or civilization loses. And that larger perspective enriches science as well as society; it’s a crucial part of, not a distraction from, critical-reflective inquiry. So I read some Solzhenitsyn.
The Soul of Solzhenitsyn
We Never Make Mistakes is a little book with two short stories (spoiler alert).
Idealistic Wannabe Soldier Probably Kills Probably Innocent Lost Dude
The first, “An Incident at Krechetovka Station,” centers on misinterpretation of uncertainty as certainty: The young, bespectacled Lieutenant Zotov — his dreams of military glory for the cause repeatedly stymied — questions an older straggler (a soldier who got separated from his comrades), Tveritinov. When Tveritinov can’t say what Stalingrad used to be called (Tsaritsyn), Zotov suspects he’s a spy and has him arrested. He can never find out what happened to the man.
This is a parable about dichotomania in political relief. And in much the same way that there is no sure-thing second-order screening test for being a spy in this story, there is no sure-thing second-order screening test for espionage and other crimes in the real world today. That’s why mass screenings for low-prevalence problems under conditions of inferential uncertainty may be doomed to fail, subverting the security of liberal democratic societies by casting doubt on the goodness of large numbers of innocents.
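The arithmetic behind that doubt can be made concrete. As a minimal sketch — with illustrative numbers chosen here for the sake of the example, not figures from the book or from any real program — Bayes’ rule shows why screening a large population for a very rare problem mostly flags innocents, even with a seemingly accurate test:

```python
# Illustrative assumptions only: a population of 300 million screened for a
# problem with prevalence 1 in 100,000, using a test that is 99% sensitive
# (catches 99% of true cases) and 99% specific (clears 99% of innocents).
population = 300_000_000
prevalence = 1 / 100_000
sensitivity = 0.99   # P(flagged | guilty)
specificity = 0.99   # P(not flagged | innocent)

guilty = population * prevalence
innocent = population - guilty

true_positives = guilty * sensitivity
false_positives = innocent * (1 - specificity)

# Positive predictive value: P(guilty | flagged), via Bayes' rule.
ppv = true_positives / (true_positives + false_positives)

print(f"people flagged: {true_positives + false_positives:,.0f}")
print(f"P(guilty | flagged): {ppv:.6f}")
```

Under these assumed numbers, roughly three million people are flagged, and well over 99.9% of them are innocent — the “sure-thing second-order screening test” does not exist, because no realistic accuracy overcomes a base rate that low.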
Crime and defeat are bad, but it’s the paranoia that gets you under totalitarianism. It’s not fancy technological means of control that make totalitarianism, either. It’s the way in which people are made into an infrastructure of doubting and informing on one another. Everyone is a possible spy — to the loyal and fastidious Zotov, who reads Marx for fun and sleeps alone as if in self-punishment for not being at the front.
The book’s title comes from this story’s penultimate lines. After having him arrested, Zotov inquires twice about Tveritinov’s fate, first at the operations center and then with a security investigator visiting on business. He gets nowhere:
“Why do you ask?” the investigator knitted his brows, significantly.
“Just asking… I was interested… in the outcome.”
“We’ll take care of your Tveritinov. We never make mistakes.”
But afterward, for the rest of his life, Zotov could never forget that man…
Uncertainty led to enhanced screening, which led to certainty — but the reader knows this was a tragically misperceived certainty under conditions of epistemic uncertainty. Zotov never knew if Tveritinov was a spy. But he knew that he had probably killed him.
What if doubt and regret — not confident posturing — are our better angels? Diffidence and acknowledged uncertainty the less wrong exercise of authority?
Who did Norwegian police interview about Arjen’s U.S. “no fly list” designation and un-designation, both without notice or process? Could they have judged it impolitic to ask any powerful security agencies who had actually or likely taken action against Arjen what exactly they did, when, why — and up to when? In all fairness, who would you contact to ask if the CIA had any information about your missing transparency activist? What would that call sound like?
A Brief Conversation with the CIA Suspiciously Missing Persons Line
Brrring brrrring!
“Hello?”
“Hello. Thank you for calling the Central Intelligence Agency Suspiciously Missing Persons Line. How can I help you?”
“Can I ask you about recent U.S. intelligence operations involving a transparency activist who used to be on the ‘no fly list’?”
“No. [Tapping noises…]”
“Just — just no? Look, he was a WikiLeaks volunteer, and it seems you were hoping to have at least the head of the organization killed, and were reportedly targeting others in various ways. Can’t we just — ”
“No.”
“What about if I submitted a — ”
“Are they dead?”
“Well that’s what we’re wondering.”
“Oh, what a pity. [A button is pressed, and Chopin’s “Marche funèbre” plays softly, as if from a tiny violin.] If you knew they were dead, then you could submit a Freedom of Information Act request. But if you don’t know, then that’s protected information. Are you sure you don’t know?”
“No, we were wondering if you knew.”
“Oh. Well in that case, we can’t check our files, in the interests of protecting the universal human right to privacy. So sorry. Anything else?”
“Do you have any suggestions of anyone else I might talk to?”
“Why don’t you give me your name, your mother’s maiden name, any national identity information, home address, and deepest fear, and I’ll have someone contact you when you least suspect it.”
Greedy Drunks Accidentally Kill Virtuous Old Woman After Dismantling Part of Her House
The second short story in We Never Make Mistakes is “Matryona’s [House/Place].” It deals with a sick old woman who helps anyone who asks, for free, but is hardly ever taken care of when she needs help herself. Her relations decide to partly dismantle her house before she dies. They get drunk (tractor driver and all) before trying to move all the wood at once, even though it’s two loads and requires an improvised second sled that’s not road-ready. They did it for material gain: “Back of the whole affair was the greediness of a few people to get hold of a strip of land, or to spare the expense of making a second trip with the tractor” (p. 132).
The shoddy second sled got stuck on the train tracks. Then an engine running backwards (its driver blinded by coal dust) and without its lights on hit them, killing the eponymous Matryona and a few others. This being a Russian short story, no one can possibly have been surprised.
General disregard for old ways, fixation on material goods, and lack of community at the community level run throughout. This contrasts with Matryona’s soft spot for her old-fashioned loom, carelessness with her own dress and her own material well-being more generally, and desire to help others and work whenever possible. Ironically perhaps in a communist society (or not), many people including her own relations respond to Matryona’s selflessness and work ethic with disapproval and punishment; she’s an ethical maverick who doesn’t keep a pig to slaughter, in a place and time where valuing creature comforts enough to kill for them is the norm.
The religious sheen becomes explicit at the end. This is a parable about how communities can’t live without good people, but how we can fail to recognize them while they’re alive. It closes:
We all lived beside her, and never understood that she was that righteous one without whom, according to the proverb, no village can stand.
Nor any city.
Nor our whole land (p. 138).
Here Solzhenitsyn seems to reference the story of Sodom and Gomorrah from the Biblical book of Genesis, in which Abraham pleads with God to spare the cities of sin, but not even ten righteous souls can be found to stay the wrath of the Lord. So God razed the sin cities. Solzhenitsyn suggests generalizing from these data points to Russia without Matryona.
If we don’t wake up and take care of the few good people around here, he warns, the whole place will go up in flames. But “the owl of Minerva flies at dusk” (Hegel); in other words, hindsight is 20/20. Critical reflection takes time. The narrator lodged with Matryona, and didn’t understand who she really was until she was already dead.
I understood all too well in the four months I lived with Arjen what sort of role he played and in what sort of world. Precarity has a way of highlighting moral norms, and norm violators. Arjen was a good man.
Solzhenitsyn Comes for Science
Sometimes, one has the sense that society hates scientists for looking down on other experts and people alike; and rightly so. Everyone makes mistakes, and scientists are no exception. But the beautiful, wonderful, vitally important thing about the “science crisis” — the shocking recognition that science is done by scientists who are human beings who are weak apes responding to particular incentive structures — is that science can be seen as a microcosm of the societies of which it’s part. If there’s a lesson here, it’s about humility in the face of limitation; if there’s a hope, it’s about correcting mistakes and learning from them. Notable contemporaries doing this include Greenland and Gelman in meta-science, Prasad and Demasi in science communication. (Reference is not uncritical endorsement.)
Solzhenitsyn’s core message in We Never Make Mistakes is that we do, in fact, make mistakes. Saying so risks coding as a scold. Saying the opposite lets the reader hear the mistake. “You can’t handle the truth” is a much more powerful argument than anything I could say about statistics or rule of law.
Mistakes in scientific and popular discourses can have the same effect, but it helps to consume them critically as literatures, to hear them. And no fish living in the water can see it properly, anyway. So counterinduction may work as well if not better in mythology compared to rationalist discourses, because there’s more of an expectation that this is a created world where something can go wrong — misperception is a norm — even though rationalist discourses also reflect socially constructed entities.
Out of recognizing the “uncertainty as certainty” mistake in “An Incident at Krechetovka Station” come statistics reform efforts to end statistical significance testing misuse, and more broadly to reorient methodologically across disciplines to accept that we know very little. “The Quest for Scientific Certainty,” writes experimental psychologist and misperception expert Adam Mastroianni in yesterday’s New York Times, “Is Futile.” We live by myth, because most of what we think we know is insufficiently evidence-based. As Mastroianni notes, “Nobody even knows where butts come from.”
This fact — that we struggle, scientifically speaking, to find our own asses with two hands — creates a logical tension between “Incident” and “Matryona’s”: How do you know when someone is virtuous enough that God should stay His hand against him and his tribe? If we do, in fact, make mistakes, then, perhaps, we should hesitate to play God? The burden of proof must then lie not with those who may wish to avoid the overwhelming force of powerful social and political networks with unfettered (perhaps unfetterable) access to technological mechanisms of total control — but with those who wish to rain destruction on others, be they foreigners in war or dissidents in peace. This is sometimes called rule of law, and it is widely considered a necessary condition for liberal democratic societies, aspirational though it may be.
Remembrance and Deference
This time of year, my favorite, I always remember Arjen. He went missing in August, but I couldn’t believe he might really be gone until October. It got cold. I got out the soft, dark blue blanket he had given me for my birthday one year. It still didn’t seem right to mourn.
So the public remembrance doesn’t entirely match the private one, and that’s the way it should be. Arjen loved cats, cold, and privacy. He was my first friend in a new world. He helped me to the Continent, and I stayed. If I have made any mistakes in representing him, they are my own. Everyone makes mistakes.
In deference to those subject-area experts who hold or access powers of overwhelming force, and who disagree with my position on mass surveillance programs including polygraphs, I acknowledge that there may be many factors and much evidence that I haven’t considered. Like Stephen Fienberg and the rest of the National Academy of Sciences polygraph report committee, I can’t access classified or otherwise restricted material on any of this (except what I got released and published in a past life). Of course it has been explained to Fienberg, me, and other researchers that we just don’t get to see the evidence that these sorts of programs work. And it has been explained to security agencies that they don’t seem to understand the properties of the mass screening tests they’re administering, or the implications of probability theory for the net results of these interventions.
At the risk of presumption under ignorance, this appears to me to be a conversation in which two voices that seem to disagree are really talking past each other. Scientists are not arguing that “absence of evidence is evidence of absence” of a benefit from mass screenings for low-prevalence problems — a logical fallacy best addressed by Oxford statistics education rockstar Doug Altman. Rather, we are arguing that harms from mass screenings for low-prevalence problems likely outweigh benefits under certain, common conditions. That this argument may be expanded to include not just statistical and historical evidence, but also economic logic — articulating the presence of perverse incentives for mass surveillance proponents to inflate accuracy, for instance — may unfortunately create particularly bad feelings. It’s in the public interest to tell that story, anyway.
We should also be explaining in rationalist terms why recent history suggests mass surveillance infrastructure, if further developed, would likely be abused. And in non-rationalist terms how we must not become what we abhor.
At the same time, we should remember that infrastructure is game. Some digital rights activists I respect and admire aspire now to reframe encryption in a positive light, to avoid being on the defensive and depressing people into inaction. And it’s true that encryption, anonymity, and all the rest of the “dark web” components that may effectively not exist in five years — they can protect a lot of beautiful things. Personal and political expression under censorship, democratic activism under authoritarianism, religious community under repression, forbidden love.
But the reality is that Apple may have already crossed the Rubicon by developing a tool to scan all digital communications, showing proponents of mass surveillance what they already knew — that client-side mass scanning can be done, outsourcing surveillance infrastructure to the companies that run it, as was usually done in the recent past. History suggests that whistleblowers have a vital role to play in counterbalancing public ignorance of state abuse of this type of power. This is why some people have taken considerable risks to support others who speak truth to power, and why society should think about what sorts of infrastructure might be useful for bolstering that predictably weak institutional support.
It’s difficult to know how to remember someone who may or may not be dead, but is gone. I’m not planning any kayaking trips. In the absence of a known ritual for observing the tentative grief that is such indeterminate absence, I’ll light a candle for Arjen one of these nights, cook a steak to Yo-Yo Ma, and read Shakespeare under the big, blue blanket. Despite dedicating his life to helping others navigate the bleeding edges of technology for the public good, Arjen took the time to appreciate the finest inventions civilization has to offer — from fire and computer networks, to good food and great art. He was (or perhaps still is, one way or another) an eminently civilized and well-educated man — with some curious but common enough blind spots. We all have them, and if we’re lucky, grow in time to better hear and see the larger conversations and stories of which we are a part. I’m still training my ear.
Arjen remains part of these stories.