British intelligence recycles old argument for thwarting strong encryption: Think of the children!
Comment Two notorious characters from the British security services have published a paper that once again suggests breaking strong end-to-end encryption would be a good thing for society.
Nearly four years ago, Ian Levy, technical director of the UK National Cyber Security Centre, and Crispin Robinson, technical director for cryptanalysis at the British spy agency GCHQ, published a paper arguing for “virtual crocodile clips” on encrypted communications that could be used to keep us all safe from harm. On Thursday they gave it another shot, with a new paper pushing a very similar argument while acknowledging its failings.
“This paper is not a rigorous security analysis, but seeks to show that there exist today ways of countering much of the online child sexual abuse harms, but also to show the scope and scale of the work that remains to be done in this area,” they write.
“We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns.”
The somewhat dynamic duo argues that to protect against child sexual abuse and the material it produces, it’s in everyone’s interests for law enforcement to have some kind of access to private communications. The same argument has been made many times before, usually citing one of the Four Horsemen of the Infocalypse: terrorists, drug dealers, child sexual abuse material (CSAM), and organized crime.
Their proposal is to restart attempts at automated filtering, specifically with service providers – who are ostensibly offering encrypted communications – being asked to insert themselves in the process to check that CSAM isn’t being sent around online. This could be performed by AI trained to detect such material. Law enforcement could then be tipped off and work with these companies to crack down on the CSAM scourge.
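In practice, that means client-side scanning: the check has to happen while the content is still readable, before the messenger encrypts it. Below is a minimal Python sketch of that flow, assuming an on-device classifier and a provider-run review queue; every name in it is a hypothetical stub of ours, not code from the paper or from any real messenger.

```python
# Minimal sketch of client-side scanning in a messaging pipeline.
# Assumption (ours): the scan runs on the sender's device while the content
# is still plaintext, i.e. before end-to-end encryption. All stubs are
# hypothetical and deliberately trivial.

def classifier_flags_content(attachment: bytes) -> bool:
    """Stand-in for an on-device detection model or hash lookup."""
    return False  # a real deployment would return True on a suspected match

def report_for_review(attachment: bytes) -> None:
    """Stand-in for queueing a flagged item for provider/NGO moderators."""
    print("flagged: queued for human review")

def e2e_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR placeholder, not real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def transmit(ciphertext: bytes) -> None:
    print(f"sending {len(ciphertext)} encrypted bytes")

def send_attachment(attachment: bytes, key: bytes) -> None:
    # The crux of the privacy objection: the scan sees the plaintext,
    # because once the message is encrypted there is nothing left to scan.
    if classifier_flags_content(attachment):
        report_for_review(attachment)
        return
    transmit(e2e_encrypt(attachment, key))

send_attachment(b"holiday photo bytes", b"shared-secret")
```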
Apple infamously tried to make the same argument to its users last year before backing down on client-side monitoring. It turns out promising privacy and then admitting you’re going to be scanning users’ chatter and data isn’t a popular selling point.
Apple can’t solve it, neither can we
In their latest paper Levy and Robinson argue that this isn’t a major issue, since non-governmental organizations could be used to moderate the automatic scanning of personal information for banned material. This would avoid the potential abuse of such a scheme, they argue, and only the guilty would have something to fear.
It’s not a new argument, and has been used again and again in the conflict between encryption advocates who like private conversations and governments that don’t. Technology experts mostly agree such a system can’t be insulated from abuse: the scanning could be backdoored, it could report innocent yet private content as false positives, or it could be gradually expanded to block stuff politicians wish to suppress. Governments would prefer to think otherwise, but the paper does at least acknowledge that people seeking privacy aren’t suspects.
“We acknowledge that for some users in some circumstances, anonymity is, in and of itself, a safety feature,” Levy and Robinson opine. “We do not seek to suggest that anonymity on commodity services is inherently bad, but it has an effect on the child sexual abuse problem.”
Which is a soft way of saying that conversations can be used to plan crimes, so they should be monitored. No one’s denying the incredible harm that stems from the scum who make CSAM, but allowing monitoring of all private communications – albeit by a third party – seems a very high price to pay.
Apple backed down on its plans to automatically scan users’ data for such material in part because it has built its marketing model around selling privacy as a service to customers – although this offer does not apply in China. Therein lies the point: if Apple is willing to let Middle Kingdom mandarins interfere, there’s no guarantee that it won’t do the same for other nations if it’s in the corporate interest.
Cupertino’s technology would have used the NeuralHash machine-learning model to identify known CSAM images on users’ devices, a model the Brit duo say “should be reasonably simple to engineer.” The problem is that the same tech could also be used to identify, filter out, and report other images – such as pictures mocking political leaders or expressing a viewpoint someone in power wanted suppressed.
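NeuralHash itself is proprietary, but the general class it belongs to, perceptual hashing, is easy to illustrate. The toy average-hash below is our own stand-in, not Apple’s algorithm: two images whose hashes differ by only a few bits are treated as a match, which is also why visually similar but entirely innocent pictures can land inside the same threshold. Filenames and the distance threshold are illustrative assumptions.

```python
# Toy perceptual hash (average hash) as a stand-in for schemes like NeuralHash.
# Requires Pillow; filenames and the distance threshold are hypothetical.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to an 8x8 greyscale image and set one bit per pixel that is
    brighter than the mean; small edits barely change the resulting bits."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an outgoing image against hashes of known images.
known_hashes = {average_hash("known_image.jpg")}
outgoing = average_hash("outgoing_photo.jpg")
if any(hamming_distance(outgoing, h) <= 5 for h in known_hashes):
    print("near-match: flag for human review")
```

The repurposing concern follows directly: swap in a different hash database and exactly the same matching code flags whatever images its operator has been told to look for.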
Levy and Robinson think this is a fixable problem: more research is needed, they say, and human moderators should be able to intervene to catch false positives before suspected images are passed to law enforcement to investigate.
Not my problem
Interestingly, the two make the point repeatedly that this is going to be the service providers’ responsibility to manage. While they insist the paper is not official government doctrine, it’s clear Her Majesty’s Government has no intention of picking up the tab for this project, nor of overseeing its operation.
“These safety systems will be implemented by the service owner in their app, SDK or browser-based access,” they say. “In that case, the software is of the same standard as the provider’s app code, managed by the same teams with the same security input.”
And allowing private companies to filter user data with government approval has always worked so well in the past. This is an old, old argument – as old as encryption itself.
We saw it first crop up in the 1970s, when Whitfield Diffie and Martin Hellman published their work on public-key encryption (something GCHQ had apparently developed independently years before). Such systems were labelled munitions, and their use and export severely limited – PGP creator Phil Zimmermann suffered three years of criminal investigation in the 1990s for trying to enable private digital conversations.
As recently as 2019, someone at the US Department of Justice slipped the leash and suggested they didn’t want a backdoor but a front door – again using the CSAM argument. Some things never change. ®