Misinformation Campaign Promotes Mass Surveillance
Spinning propaganda as survey research, illegally microtargeting ads, and deplatforming dissent -- Chat Control proponents and their allies wage a misinformation campaign to manufacture consent
Sexy Intro
Chat Control — a controversial new AI tool designed to scan digital communications for child sexual abuse material (CSAM) — is coming for your dick pics. And propaganda spun as survey research says you want it to. Who would’ve expected mass surveillance to come with lying to the public to manufacture consent while censoring dissent? It’s almost like authoritarians act like authoritarians — lying to manipulate people, breaking the very laws they’re charged to uphold and in some cases even to enforce, and silencing those who express different views.
Nerdy Intro
My three-year-old and I have an ongoing debate about causality. When it’s time to get out of the bath, he likes to press the button that lets you turn the shower dial to set the water hotter than body temperature. He says “I press the button to get out.” I say “You can press the button, but it doesn’t cause you to get out. Correlation doesn’t equal causation.” And he says “correlation does equal causation!” I say “No, it doesn’t.” He says “Yes, it does!” I guess this is why they call them “threenagers.”
At its most benign, misinformation is about bad causal inferences or bad information more generally. Most people know that vaccines don’t cause autism. That misinformation came from a fraudulent paper published in The Lancet and subsequently retracted, authored by former physician Andrew Wakefield — who is now barred from medical practice.
But not all cases of misinformation are this cut-and-dried. When it comes to misinformation in the scientific discourse itself, there is a real problem with hyperpolarization and confirmation bias contributing to biased research on “both sides,” as in the abortion context. This points to a larger epistemological bind that we are in as mere mortals: We all make mistakes, we all have perspectives, and scientists are no exception. So even well-meaning, well-placed experts like the UK’s Independent SAGE, a scientist advisory group that champions stricter Covid policies, appear to commonly distribute misinformation in their own myth-debunking discourse.
With these caveats in mind — neutrality is impossible, and everybody makes mistakes — let’s look at some misinformation in the Chat Control discourse. It’s interesting because it’s going on right now, with more news breaking every day; and because it includes apparent egregious misconduct including invalid and unethical survey research, law-breaking by law-makers, and social media censorship of dissent. If you don’t think this kind of behavior in the context of proposed mass surveillance is high political theater, I don’t know what you’re saving your popcorn for.
Outline
After giving a little context, this post looks at three components of misinformation in the Chat Control discourse:
Recently released survey data and its representation by Chat Control proponents as reflecting overwhelming public support for the proposal. The survey is invalid, because the instrument was biased. The researchers responsible violated widely accepted professional and ethical standards for conducting survey research, including the relevant code of conduct. And the program proponents then misrepresented the survey results, as well.
This survey and its representation constitute a misinformation campaign. The survey misinformed participants, and the misrepresentation of its results misinforms readers further. This misinformation is part of a concerted political campaign to institute mass surveillance infrastructure — which history suggests will likely be abused if it exists — under the auspices of child protection.
Law-makers appear to have broken the law in micro-targeting social media ads to increase public support for the controversial proposal. They deny wrongdoing.
X has blocked (for unknown reasons) the researcher who proved the micro-targeting. This sort of de-platforming of dissenters can contribute to a biased information environment, including by making others afraid of saying “the wrong thing” — and suffering social and professional consequences. This can feed what German political scientist Elisabeth Noelle-Neumann called a spiral of silence — the process, typically ignited by emotionally and morally weighty issues, wherein loud public opinions on one side stoke fear of social isolation or worse among those who would express different sentiments.
Because my expertise is in research methods (extensive graduate methods training and experience, including conducting National Science Foundation-supported survey research), my focus here is mainly on the first of these three components. But it’s also important to see the big picture here (without claiming I can see the whole picture): a picture of the powerful abusing their power to create and promote a preferred narrative about what the public wants, and then trying to use that manufactured narrative politically to win a policy victory that should remain a contingent matter of democratic processes conducted under agreed rules (the rule of law).
Then, I summarize the growing political resistance to Chat Control in the face of this ongoing misinformation campaign. This is inspiring.
Finally, I tack back briefly to the empirical reality at the heart of the matter, which highlights the importance of clocking this misinformation for what it is: There is insufficient empirical reason to believe that Chat Control would do what its proponents claim it would do (make kids safer). And there are plenty of reasons to suspect that it would instead backfire (endangering kids).
Context
What is Chat Control? Imagine if, every time you used email, messengers, or chats, governments required the companies running the infrastructure to scan your communications for evidence of abuse, and to report hits to the police on the basis of some algorithm that analysts don’t understand well. Proposed laws across Europe (Chat Control), the UK (the Online Safety Bill), and the U.S. (the EARN IT Act) would do just that.
Decades in the making, this transnational initiative made headway when the UK Parliament passed the OSB on September 19, and when the U.S. Senate Judiciary Committee sent the EARN IT Act to Congress for consideration for a third time in May. This month, the European Parliament’s Civil Liberties, Justice and Home Affairs Committee is expected to vote on Chat Control. Implementing these programs would destroy end-to-end encryption, create mass surveillance infrastructure – and endanger children.
I’ve previously written about why Chat Control would backfire and endanger kids according to probability theory, how other AI programs the Parliament’s AI Act proposed banning share the same mathematical structure and are similarly doomed to fail, how the usual liberty versus security framing is bogus, and why the regulatory challenge isn’t just about AI — but explaining the math at the heart of the matter in plain language can be challenging. And I’ll write more about all this again in future posts.
Propaganda Spun As Survey Research
On Saturday afternoon, October 14, ECPAT — an international network of advocacy organizations working against child sexual abuse — published this terrible survey. I call it terrible because InfoQ11 informed participants:
End-to-end encryption is a system of communication where only the communicating users can read the messages. This means that messages are inaccessible to service providers, current detection tools and any third parties, including law enforcement authorities. Therefore, end-to-end encryption increases privacy and security for all users by ensuring that only the intended recipients can read the messages.
At the same time, if no additional protective measure is implemented by online service providers (e.g. user reporting features, development of innovative detecting technology), it makes it impossible to tell if the platform is being used for child sexual abuse and exploitation, such as dissemination of child sexual abuse material and/or grooming. In that sense, end-to-end encryption interferes with the privacy and security of all users, in particular child users, by leaving them unprotected from criminal activity and making it impossible for law enforcement to access evidence to investigate these crimes.
In the passage above, the survey provided participants with biased and factually wrong information about what we know about how end-to-end encryption affects privacy and security, especially for vulnerable groups including children. In doing so, the researchers invalidated their results.
The only thing the results of this survey can establish is how well the above misinformation worked to manufacture consent for mass surveillance. This survey cannot establish whether people support Chat Control based on the best available evidence, because it did not uphold basic scientific and professional standards in survey research.
Does End-to-End Encryption Protect or Endanger Vulnerable Groups Like Children?
This is an important empirical question. To the extent that different people may hold different opinions about the answer based on the information at their disposal, maybe one could argue that it is an open empirical question — although I don’t think that the brief review below suggests that it is. But assuming that this is an open empirical question anyway, one would not want to base a survey on a text informing respondents that it is settled.
Yet, that is what the ECPAT survey does. The statement “end-to-end encryption [E2EE] interferes with the privacy and security of all users, in particular child users, by leaving them unprotected from criminal activity and making it impossible for law enforcement to access evidence to investigate these crimes” contains an argument that E2EE net harms children. What do experts say?
Cryptographers and computer/cybersecurity experts tend to agree that E2EE enhances security by guarding users against other people (mis)using their data. In fact, the UN Human Rights Office tweeted on Oct. 12:
End-to-end encrypted tools and services keep all of us safe from crime, surveillance and other threats. Govts should promote their use rather than imposing client-side scanning and other measures undermining encryption.
Similarly, in August, Apple said it killed a Chat Control-like scanning tool it had had in development, because it would undermine privacy and security alike. It even gave Wired its letter responding to the Heat Initiative’s pressure about this move. (The Heat Initiative is headed by Sarah Gardner, former vice president of external affairs for Thorn — which is registered as a charity in the EU lobby database, but has made millions selling AI tools much like the one that might be mandated under Chat Control should the proposal be adopted. Quick, name your favorite charity that makes millions from selling tech products. Mine is the Gates Foundation, but they make a little more money and call themselves Microsoft doing it.)
According to Wired, Erik Neuenschwander, Apple's director of user privacy and child safety, told Heat “that after collaborating with an array of privacy and security researchers, digital rights groups, and child safety advocates, the company concluded that it could not proceed with development of a CSAM-scanning mechanism, even one built specifically to preserve privacy.”
Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit. It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types… We decided to not proceed with the proposal for a hybrid client-server approach to CSAM detection for iCloud Photos from a few years ago. We concluded it was not practically possible to implement without ultimately imperiling the security and privacy of our users.
Apple is not alone. In January, the Child Rights International Network (CRIN) and defenddigitalme co-published a report entitled “Privacy and Protection: A children’s rights approach to encryption.” The report argued that, while E2EE debates are often framed in terms of privacy versus child protection, “Children’s rights are on all sides of the discourse” (more on this below).
So one could theoretically have a debate about whether there is a debate about whether E2EE net protects or endangers vulnerable groups. But the expert consensus seems to be that it protects them by enhancing the “security” in “information security.” There is no neutral, evidence-based perspective from which to say otherwise. A critic might argue that this consensus is just the dominant narrative in the contemporary cybersecurity discourse. But it’s still the dominant narrative. To put out an alternate narrative unopposed as fact — as this recent survey did — is misinformation, plain and simple.
We Can’t Trade Some Liberty for Some Security Here
Still, Chat Control proponents might want to argue that some children are endangered by some uses of E2EE, and that stopping those abuses is worth the other possible abuses created by weakening E2EE with client-side scanning, which decreases cybersecurity for everyone. That is an empirical argument about the net harms and benefits of alternate policy regimes, and it currently lacks empirical support. The burden of proof should be on proponents of the new proposed regime to show that their change would not cause net harm — including to the very vulnerable children Chat Control aims to protect. Independent reviews should then evaluate the evidence of claimed costs and benefits in assessing the proposed program. Data relevant to this assessment must be public, including information about its production, storage, analysis, and interpretation. These are basic scientific evidentiary standards to which all mass interventions should arguably be held by politicians and others who understand themselves as doing evidence-based policy.
In the bigger picture, we can have a discussion based in logic and evidence about what works to enhance security — and whether we might be able to trade some privacy for some security. As I’ve said before, I would take that trade in this context. But that’s irrelevant, because this trade is not on the table here. There are two problems with the argument that we can trade some liberty for some security when it comes to Chat Control: validation and reification.
First, Chat Control is not a validated tool. It’s not possible to validate it under real-world conditions. And trying to disambiguate uncertain results (i.e., true from false positives) implies secondary screenings that may do substantial harm — including to children. This puts the program in a larger class of programs that share the same mathematical structure, mass screenings for low-prevalence problems under persistent inferential uncertainty. According to the implications of probability theory, these programs often endanger the very people they aim to protect.
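To make the probability theory concrete, here is a minimal sketch in Python. Every number in it is an illustrative assumption (no validated accuracy or prevalence figures exist for Chat Control); the point is the structure of the arithmetic, not the particular values:

```python
# Minimal sketch of the base-rate problem in mass screening.
# All numbers are illustrative assumptions, not official figures
# for Chat Control or any real scanning tool.

population = 100_000_000   # messages screened
prevalence = 0.0001        # assumed share involving abuse (1 in 10,000)
sensitivity = 0.99         # assumed true positive rate of the classifier
specificity = 0.99         # assumed true negative rate

actual_positives = population * prevalence
actual_negatives = population - actual_positives

true_positives = actual_positives * sensitivity
false_positives = actual_negatives * (1 - specificity)

# Positive predictive value: the chance a flagged message is a real case
ppv = true_positives / (true_positives + false_positives)

print(f"Flagged:      {true_positives + false_positives:,.0f}")
print(f"Real cases:   {true_positives:,.0f}")
print(f"False alarms: {false_positives:,.0f}")
print(f"PPV: {ppv:.1%}")  # roughly 1% under these assumptions
```

Under these assumptions, a classifier that is 99% accurate in both directions still generates about 100 false alarms for every true hit, and each flag implies a secondary screening of someone’s private communications.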
And second, security and liberty aren’t being measured in agreed-on ways such that we can evaluate their before and after values with reference to a policy intervention like this. Without that kind of measurement, it’s not possible to assess Chat Control’s efficacy in terms of net costs and benefits even if it were implemented. The reason is that we should expect security and liberty costs and benefits across all four screening categories: true and false negatives and positives. We would need to define these terms, think about these possible effects and how to measure them, and then do that measurement in pilot studies in order to begin to empirically assess the program’s real-world consequences.
These effects would need to include possible harm to a large number of innocents, including minors. They would also need to include possible harm resulting from misuse of sexual images and texts no longer protected by end-to-end encryption, and possible harm to minors whose consent for their private communications’ use in training this AI is unclear. There is no indication that Chat Control proponents have done this work. And even if they had, they would still (if they were doing evidence-based policymaking) have to throw up their hands and acknowledge that we don’t know a lot. We don’t know how many cases of child sexual abuse there really are. So we don’t know how many false negatives and positives there really are. The validation problem pervades the whole enterprise.
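And because the true base rate is unknown, even the sketch above overstates what we can know. Varying the assumed prevalence (all values hypothetical) shows how wildly the meaning of a flag swings:

```python
# Same assumed classifier as above, across a range of assumed base rates.
# The true prevalence of abuse material is unknown, so these values are
# hypothetical; the point is how strongly PPV depends on them.

sensitivity, specificity = 0.99, 0.99

for prevalence in (0.00001, 0.0001, 0.001, 0.01):
    ppv = (prevalence * sensitivity) / (
        prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    )
    print(f"Assumed base rate {prevalence:.5f} -> share of flags that are real: {ppv:.1%}")
```

Across these assumptions alone, the share of flags that are real cases ranges from about 0.1% to 50%. Without a known base rate, a quoted accuracy figure tells us almost nothing about real-world consequences.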
Meanwhile, enhancing security by strengthening instead of weakening E2EE — the opposite of Chat Control’s approach here — may net protect children from sexual exploitation. This is part of what many cryptographers and scientists have argued in opposing the program. These arguments are a central part of the discourse, and omitting them in favor of presenting the opposing viewpoint as established fact sets the terms in a way that clearly favors Chat Control. This resulted in anti-encryption propaganda being put out under the auspices of scientific survey results… With predictable effects.
The Results Are In: Propaganda Works!
Let’s start with this two-paragraph executive summary:
In 2023, ECPAT International and NSPCC conducted a large survey on child safety online, in partnership with Savanta. Over 25,000 adults were surveyed across 15 countries in the EU and the UK on their understanding and views on the balance between personal privacy and the protection of children from sexual abuse and exploitation on the internet.
The results show unwavering support for child safety online and the need for new legislation to protect children from online sexual abuse and exploitation, including regulation of online platforms. A large majority support the use and development of automated tools to detect child sexual abuse and exploitation across a variety of platforms, including end-to-end encrypted environments. Most importantly, most respondents recognise and understand the importance of compromise between online privacy and child safety online.
tl;dr: Someone just paid to expose 25,000+ Europeans to misinformation about encryption in an attempt to manufacture consent for mass surveillance. And it worked.
Who are these people? Is there a way for researchers (or rogue ninja methodologists) to contact them again, to correct their mistakes? Is there a right to cognitive liberty or some other form of integrity of thought and opinion formation, where when you’ve been manipulated for a political purpose, you at least have a right to know?
Probably not. Everyone is being manipulated all the time, and we just have to learn better critical thinking skills, stay in that process, and keep in mind that there are probably mistakes embedded in lots of our assumptions. Still, this seems wrong. A Mafia state blows you up if you tell the truth about corruption; a totalitarian state tells you 2+2 is 5 until you believe it. Along with feeding you survey results showing that everyone else already believes it, too.
This is a great trick. It uses what we know about social pressure to conform to potentially increase public support for Chat Control based on lies. For example, Solomon Asch’s conformity line experiment found that most people conformed at least once to a clearly wrong majority view of how long a line was. Maybe that experiment was a product of its time (conformist 1950s America) or its subjects (college-aged males at Swarthmore). But at least it seems like this sort of thing could help, and it couldn’t possibly hurt the cause… Could it?
Unprofessional Survey Research Conduct
Spinning propaganda as survey research is unprofessional. The relevant survey research company, Savanta, appears to have violated its Market Research Society Code of Conduct. For instance, the code states “MRS members shall”:
Ensure that their professional activities are not used to unfairly influence views and opinions of participants.
…
Ensure that individuals are not harmed or adversely affected by their professional activities.
…
Exercise independent professional judgement in the design, conduct and reporting of their professional activities.
This survey instrument clearly violated #4. Savanta let its survey be used to unfairly influence participants’ views on encryption: whether it protects or threatens children and other human beings, and whether mass surveillance would make children and other people safer or less safe by trading some liberty for some security — assuming, without evaluating logic or evidence, not only that this trade is possible, but that it has been proven to be on the table in this context. So what?
Survey research companies are not supposed to propagandize for the people who pay them. It makes the field look bad. That’s why codes of conduct like this exist. But complaining about the violation is unlikely to have an effect — other than being a time and attention resource capture for the complainant. (If you’d like to complain anyway, go right ahead: step 1, step 2.) The reason is that perverse incentives often keep people including scientists, journal editors, and professional societies from admitting their mistakes and trying to ameliorate harm they may have caused. For example, professional reputational costs may accrue to all associated — and disgruntled associates may in turn litigate their displeasure. So what should happen here?
Savanta should issue an apology and retraction, because they violated professional standards and participated in a misinformation campaign instead of doing science. But undoing the damage that their misinformation may have done would take more than that. Because, once misinformation is out, it’s hard to debunk it. Repeating it to correct it, for instance, only cements the lie. So you have to teach people to think critically, instead. Any such effort would be costly. For instance, contacting 25,000 survey participants with a correction, apology, and lateral reading tutorial would not come free.
Therein lies the case for a violation of #7: Participants may have had their minds changed without their knowledge or consent by being fed misinformation, and they may be harmed again by states instituting mass surveillance infrastructure under the auspices of their manufactured consent.
Relatedly, the case for a violation of #9 is that the survey researchers in this case failed to exercise independent professional judgment in the survey design, instead permitting mass surveillance proponents to disseminate misinformation to 25,000 unsuspecting people about how good totalitarian infrastructure would be for the children. This is not how survey research is done as a matter of widely accepted professional standards.
Given the biased nature of the survey, let’s take a brief critical look at how its published results compare with ECPAT’s longer summary and Oct. 13 press release for the survey results, and with identical references from Chat Control’s embattled political godmother, EU Home Affairs Commissioner Ylva Johansson, in her Oct. 15 European Commission blog. For here we find more misinformation…
Misrepresentation of Survey Findings
ECPAT:
In a groundbreaking revelation close to the Council of the European Union vote on the Child Sexual Abuse Regulation, fresh polling data from ECPAT and NSPCC (National Society for the Prevention of Cruelty to Children) reveals that a staggering 95% of Europeans say it is important that there are laws in place to regulate online service providers to combat online child sexual abuse.
Johansson:
Figures supported by a new poll released just now showing 95 per cent of Europeans say it’s important there are laws that regulate online service providers to fight child sexual abuse. 91% say providers should be required to prevent abuse. 81% support obligations to detect, report and remove child sexual abuse.
By contrast, the survey’s published results (Table 11 - Q2_6) show that only 84% of respondents — whom the survey had exposed to biasing misinformation — agreed that “Online service providers (e.g., social media platforms…) should play a role in preventing, detecting, and responding to [CSAM].” Breaking that down: 57% of respondents strongly agreed, 26% tended to agree, 10% neither agreed nor disagreed, 2% strongly disagreed, and 2% didn’t know.
I could go through the survey results more carefully to see if there is also a way to get that 95% out of this data somewhere, somehow. Maybe some survey questions were redundant, such that different but overlapping questions produced contradictory results. In that case, it was a bad survey instrument, and pretesting should have identified those problems before researchers rolled it out to 25,000+ respondents.
But I’m not inclined to bother, first because most people won’t read this far into the details anyway, and second because the point in all this is not that one single mistake produced an element of misinformation in the Chat Control campaign. It is, rather, that all this appears to be part of a misinformation campaign. Other elements go beyond this survey and the apparent misrepresentation of its results. In brief, here are some other elements.
Micro-targeting
Microtargeted ads are increasingly regulated because they work really well and without our knowledge, infringing cognitive liberty and potentially subverting the democratic process. The EU Commission allegedly “used political microtargeting to sway key groups of their controversial CSAM proposal.” As a result, the European Data Protection Supervisor has reached out under the “so-called pre-investigation procedure,” because this may have been illegal, as tech journalist Alexander Fanta reported. Eric Priezkalns at Commsrisk has more on exactly which laws may have been broken, and how.
Priezkalns reports that “A comprehensive investigation by technology expert [technologist, jurist, and PhD student] Danny Mekić has conclusively shown that the European Commission used microtargeted adverts to encourage support for its proposals in countries which had voiced doubts about them during a European Council meeting.”
The adverts on X were produced in multiple languages and used emotionally-charged images of children juxtaposed with adults who may be construed to be predators. They also used a tactic often exploited by scammers too: an insistence that ‘time is running out’ and the sound of a clock ticking, thus requiring the viewer to act immediately. The implied action would be lobbying the national governments that had sought reforms of the proposals during a September 14 meeting of the European Council. The adverts are still visible on Twitter; one example can be found here.
The linked ad, by the way, informs viewers that “87% of Europeans support the automatic detection by internet companies of images and videos of child sexual abuse and cases of grooming in messages.” This figure conflicts with the 84% of respondents — whom the ECPAT survey had exposed to biasing misinformation — who actually agreed that “Online service providers (e.g., social media platforms…) should play a role in preventing, detecting, and responding to [CSAM].” It also conflicts with the other figures ECPAT and Johansson cited.
There may be rational, benign explanations for why these public support figures don’t seem to be consistent either with one another or with the survey data on which they’re based. We can acknowledge that possibility and ask to hear those explanations — while still noting these inconsistencies’ discursive context: a demonstrated misinformation effort that gives the appearance of a broader misinformation campaign.
But these are all active instances of misinformation from Chat Control proponents. And misinformation also has another side: The voices that are missing from the discourse…
Selection Bias
Selection bias — that feature of all things remotely human wherein people stubbornly refuse to just act random — may remain a pervasive threat to causal inferences even when researchers use the best research methods for the job, like randomization and Directed Acyclic Graphs (DAGs). It takes many forms in many contexts. In discourses ranging from the scientific literature to politics, one of its best-known facets is passive, and it may be harder to see than the active forms, even though both can generate bias.
Passive selection bias can look like self-selection to not publish your scientific results, e.g., because they were “only” null findings — aka publication bias or the file drawer problem. It can also look like self-selection to not tell your story or share your opinion, e.g., because it’s very personal and painful, and/or because it could do you social/professional harm — which is to say that people’s reactions to it could do you that harm. This sort of bias can contribute to a spiral of silence — a particular danger when it comes to emotionally and morally weighty issues like child abuse.
As Signal president Meredith Whittaker has noted on Twitter, as a rule, you don’t know who you’re talking to in child protection debates. The fact that some survivors of child sexual abuse publicly support Chat Control says nothing about the proportion who oppose it but maintain their privacy — a boundary that would conceptually align with opposing the program. Research to ascertain how many such survivors support versus oppose programs like this would arguably not generate information useful enough to be ethical to conduct. It may also be that the program’s critics would not want to use survivors for political purposes even if they could. But all these factors produce an information environment where it may appear that survivors support Chat Control, because organizations like ECPAT and Thorn amplify those voices. That doesn’t mean these organizations represent that group. It means they’re willing to look like they do without sufficient evidentiary basis for such claims.
In addition to these sorts of selection biases, there’s also a more direct one that bears mentioning in the context of the apparent misinformation campaign for Chat Control…
Censorship via Deplatforming
Priezkalns reports that the tech expert who broke the microtargeting story, Danny Mekić, pointed out that “This microtargeting on political and religious beliefs violates X’s advertising policy, the Digital Services Act — which the Commission itself has to oversee — and the General Data Protection Regulation.” After breaking the news about the microtargeting, and the apparent conflict of interest in enforcing relevant legislation, Mekić was blocked on X, Fanta reports. On his Mastodon account, @DannyMekic@mastodon.social, Danny notes “My X account @DannyMekic was censored after publishing a critical article about the European Commission. X/Twitter does not respond to my emails, so I don't know why and at whose request.”
X CEO Elon Musk and his companies, including X and Tesla, have faced multiple allegations of corporate spying, whistleblower retaliation, and spreading misinformation — including repeated misinfo criticism from the European Commission itself. Musk, whose anti-immigrant tweet recently garnered a public rebuke from the German Foreign Office, may also be a natural ally of Johansson — who has described the crisis of mass migration from Africa to Europe across the Mediterranean Sea as “unsustainable.”
At the same time, this is all just background information. We don’t know if Mekić’s deplatforming was an exercise of Musk’s personal opinions or political alliances through his largely unaccountable political power as a controller of what amounts to a utility (a major social media company) — or something else. However, in the context of so much other lying and rule-breaking, it looks like part of the same larger misinformation campaign whereby powerful people are trying to set the terms of the discourse in favor of their preferred narrative: That we’re getting mass surveillance (again) now, it’s “for the children,” and we like it.
Caveat: In my opinion, there doesn’t have to have been a provable, agreed-on game plan for doing all this — spinning propaganda as survey research, microtargeting ads, and deplatforming dissent — in order for this pattern to constitute a misinformation campaign. The point in calling it that is to alert people to the danger of the pattern. But maybe we should call this a misinformation trend or phenomenon instead, in the absence of evidence that everyone who contributed to it sat down and had tea to coordinate their (perhaps unintentional) mistakes. Maybe there is a better way to communicate about this sort of thing — a set of terms to be agreed on to talk about how misinformation characterizes the Chat Control discourse, and it is misinformation from the powerful.
Back on censorship: Silencing dissent against the preferred narrative of consensual mass surveillance reeks of authoritarianism. That might make some people keep silent or not resist out of fear of retribution. And yet, despite the misinformation campaign, organized and effective political resistance to Chat Control keeps growing…
Political Resistance Grows
In fact, the tide may be turning:
Most recently, on Friday, October 13, the Finnish Parliament rejected Chat Control in a binding decision on account of its proposed mass surveillance.
Last month, on September 26, the Swiss Parliament overwhelmingly (144-24) adopted a motion protecting Swiss Internet users from the proposed mass scanning of private messages.
And last year, on November 16, the Austrian Parliament became the first in the EU to adopt a binding resolution opposing the proposed program if it entails mass surveillance and weakened encryption.
Back in 2020, plenty of privacy activists celebrated Brexit because they thought, with the UK out of the EU, we could have nice things now — like meaningful privacy protection. It’s not clear that the interconnected modern world works that way. With the passage of the UK’s Chat Control analogue, the Online Safety Bill, on September 19, we need to worry about what legal and ethical frameworks exist and can be bolstered or created to stop governments from forcing tech companies to design, implement, or report on mass surveillance. It may make sense to tie this into the larger project of regulating mass screenings for low-prevalence problems, programs which share the same mathematical structure and often endanger society due to the implications of probability theory.
But for the time being, it’s worth fighting Chat Control in the next few weeks. It’s a just cause — protecting the foundations of liberal democracy, and preventing stupid policy from endangering vulnerable children along the way. It’s a live issue — there’s enough political resistance to win — but enough powerful, effective propaganda to warrant fighting. And it’s time to fight — EU governments reportedly may adopt Chat Control as early as this week. (Let’s hope someone on the Council knows a thing or two about the structural preconditions of state sovereignty vis-à-vis mass surveillance.)
Setting the Terms of the Discourse
Johansson used her European Commission blog platform to emphasize the idea that her Chat Control proposal is about protecting children, and that we should focus on that. As mass surveillance proponents usually do, she framed the political issue as a matter of finding the balance between security and liberty — which here means protecting children and protecting privacy. We should reject this rhetorical move to oppose security and liberty — an opposition which lacks sufficient evidentiary basis. We should focus, instead, on exactly the values Johansson’s post highlighted: resolve, transparency, and humility.
Resolve: Johansson’s political reputation is based in part on her self-definition as an evidence-based policymaker. For instance, at her November 19 Commissioner-designate hearing, she said “I’m committed to a solid and evidence-based… policy-making in the areas covered by my portfolio. My aim is to apply the principles of better regulations to the preparation of future proposals in my portfolio.” That’s a great commitment to be resolved to keep.
In this context, it means that Chat Control proponents bear the burden of proof to establish that their new intervention will do more good than harm. This has not been established. It requires that independent reviewers evaluate the evidence of claimed costs and benefits. This requires…
Transparency: Relevant data must be public, including information about its production, storage, analysis, and interpretation.
After both appreciating and critiquing elements of the prepared statement that German law enforcement expert Markus Hartmann (Generalstaatsanwaltschaft Köln) gave at his German Parliament (Bundestag) hearing on Chat Control (news coverage, video, English translation), I wrote to his office asking if I could help translate the statistical information at issue here for a wide audience. One missing piece of information one would ideally have for that work is the posited base rates of the different offenses the scanning tool targets. I received no reply. It was not clear to me (through no fault of Hartmann or his office) whom else to ask scientific questions like this concerning the program. This non-transparency, extending even to who would have the information needed to translate the program’s quoted accuracy figures into frequency format tables — an evidence-based first step in anything having to do with risk literacy — violates the principle of independent review on which evidence-based policymaking is predicated.
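For illustration, here is the kind of frequency format table I mean, computed in Python. Since neither the base rates nor the tool’s accuracy figures have been disclosed, every number below is a placeholder assumption:

```python
# Sketch: turning quoted accuracy figures into a frequency format table.
# The base rate is exactly the missing piece noted above, so the values
# used here are placeholder assumptions, not figures from any hearing.

def frequency_table(n, base_rate, sensitivity, specificity):
    """Render a 2x2 table of whole-number outcomes for n screened items."""
    pos = round(n * base_rate)
    neg = n - pos
    tp = round(pos * sensitivity)        # real cases, flagged
    fn = pos - tp                        # real cases, missed
    fp = round(neg * (1 - specificity))  # innocents, flagged
    tn = neg - fp                        # innocents, not flagged
    print(f"Out of {n:,} screened items:")
    print(f"{'':>10}{'Flagged':>12}{'Not flagged':>14}")
    print(f"{'Abuse':>10}{tp:>12,}{fn:>14,}")
    print(f"{'No abuse':>10}{fp:>12,}{tn:>14,}")

# Hypothetical call: 10 million items, 1-in-10,000 base rate, 99% accuracy
frequency_table(10_000_000, 0.0001, 0.99, 0.99)
```

This is the presentation the risk literacy literature recommends: whole numbers out of a concrete total, rather than conditional probabilities. Filling it in with real figures is impossible until someone discloses the posited base rates.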
Humility: Policymakers should apply scientific evidentiary rules to evaluate the likely impact of proposed interventions before implementation. Applying these rules to policymaking may prevent massive societal damages from well-intentioned programs that are unknowingly trying to break universal mathematical laws that we cannot escape. We human beings — and the physical universe we live in — have limits that no technology can surmount. Humility is about not over-reaching, admitting your mistakes, and recognizing your limits.
No one, perhaps least of all public figures like politicians, likes to be criticized or admit when they are wrong. But when your proposed program would endanger the children it aims to protect, it’s time to stop campaigning — especially with misinformation — and start reflecting. “For the children.”