Apple thinks it has found the balance between protecting the privacy of its users and fighting the spread of child pornography. The tech giant unveiled a new automatic tracking system (coming in September) that will continuously check users’ personal images for illegal photos and videos. Another system will warn parents if a minor receives or sends explicit material. Despite the anonymization and protection mechanisms provided, the implications for the personal privacy of the end user are very serious.
The company assures that the process will be encrypted, minimally invasive and secure, but the prospect alone has alarmed privacy experts and digital rights activists. Without detracting from the fight against child pornography, many have lashed out against what is essentially a sophisticated mass surveillance system, a systemic “hole” in citizens’ privacy that could potentially be reconfigured to report other kinds of content, from terrorist to anti-government or “seditious” material.
The danger, going forward, is that Apple will deliver into the hands of authorities around the world a backdoor in people’s cell phones, flanked by a surveillance tool that is both very sophisticated and malleable. Not to mention that this precedent could increase the pressure on other technology companies, which would be forced to follow the (influential) example of Apple, the same company that refused to unlock a terrorist’s phone in order to defend the privacy of its users and ended up in court against the US government.
“It doesn’t matter whether [Apple] is right or wrong,” tweeted Matthew Green, professor of cryptography at Johns Hopkins University. “All this will break the dam: governments will demand it of everyone. And when we discover that it was a mistake, it will be too late.” And while some governments (including the US and India) have been pushing for years to obtain almost indiscriminate access to the digital lives of their citizens in the name of security, for others (China) systems like this are already operational, also thanks to the facilitation of Apple itself.
How Apple’s tracking works
Almost all major social media and cloud storage services already work to identify child pornography on their platforms. The crucial difference of Apple’s system, arriving in September with the next updates (iOS 15, iPadOS 15, watchOS 8 and macOS Monterey), is that it acts directly on the device, actively scanning the photo library for correlations with material stored in a government database. At least initially, the project will launch only in the US.
The Californian company calls its monitoring algorithm “neuralMatch”, a clear reference to neural networks, that is, a machine learning technique in which an artificial intelligence learns on its own to distinguish and correlate images by training on immense databases, and can then make decisions even about material it has never seen before (the best-known applications are in public surveillance and facial recognition). In this case, the system was trained on an archive of over 200,000 sexual abuse images collected by American NGOs, including the National Center for Missing and Exploited Children.
Of course, none of this happens in the clear. Users’ personal photos and videos are converted into strings of numbers through a process called hashing, and the comparison with the government database takes place on the basis of these codes, which are saved in advance on each device in a “blinded” form. Each photo (or video) is marked as suspicious or not, and only once a certain threshold of suspicious material is reached does Apple allow a team of human analysts to decrypt and review it. Only then, if the team deems it illegal, is everything sent to the authorities along with the account information.
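To make that flow more concrete, here is a deliberately simplified sketch of on-device hash matching with a reporting threshold. Everything in it is an assumption made for illustration: the names and numbers are invented, a plain cryptographic hash stands in for the neural “perceptual” hash, and the blinded database, private set intersection and cryptographic threshold scheme of the real system are not reproduced.

```python
import hashlib

# Deliberately simplified sketch of on-device hash matching with a reporting
# threshold. Every name and number here is an invented assumption.

THRESHOLD = 30  # hypothetical number of matches required before human review


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. SHA-256 is used only so the sketch
    runs; a real perceptual hash maps visually similar images to the same
    code even after resizing or recompression."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(library: list[bytes], known_hashes: set[str]) -> int:
    """Count how many photos in the library match the known-material list."""
    return sum(1 for img in library if image_hash(img) in known_hashes)


def should_escalate(library: list[bytes], known_hashes: set[str]) -> bool:
    """Only once the match count crosses the threshold would human reviewers
    be allowed to decrypt and inspect the flagged material."""
    return count_matches(library, known_hashes) >= THRESHOLD
```

In the real system, reviewers could only decrypt the flagged material after the threshold is crossed; the sketch compresses that step into a simple boolean.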
One more thing. The Cupertino company has also prepared a way to actively monitor private conversations on its messaging app, but without Apple itself being able to access them, and only if the device belongs to a minor. If the minor receives content that the system marks as sexually explicit, they will see it blurred along with a series of warnings and explanations; if they decide to view it in full, the parents receive a warning. The same happens if the minor decides to send explicit material even after the system’s warning (“are you sure?”). Finally, all users will be shown information and support links if they launch a web search using keywords associated with child pornography.
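The decision flow described above can be summarised in a short sketch. It is purely illustrative, follows only what this article describes (blur, warn, notify parents on viewing or sending), and uses invented names that do not correspond to any real Apple API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Messages flow described above. All names are
# invented for illustration.


@dataclass
class Attachment:
    data: bytes
    flagged_explicit: bool  # verdict of the assumed on-device classifier


def handle_incoming(att: Attachment, is_minor: bool, views_anyway: bool) -> dict:
    """Return which actions the sketch would take for an incoming attachment."""
    if not (is_minor and att.flagged_explicit):
        return {"blur": False, "warn": False, "notify_parents": False}
    # The content is blurred and the minor is warned before it can be opened;
    # parents are notified only if the minor insists on viewing it.
    return {"blur": True, "warn": True, "notify_parents": views_anyway}


def handle_outgoing(flagged_explicit: bool, is_minor: bool, sends_anyway: bool) -> dict:
    """Sending follows the same pattern: a warning first, then a parental
    notice only if the minor confirms the send."""
    return {"warn": is_minor and flagged_explicit,
            "notify_parents": is_minor and flagged_explicit and sends_anyway}
```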
Critical issues
Let’s start from the technological level and the risk of false positives, in which (for example) a user’s legal and private photos could be flagged and sent to the government. Apple assures that the chances of an account being misreported are “less than one in a trillion a year”. But setting that assertion aside for a moment, we should focus on neural networks and their tendency (demonstrated several times) to become less accurate the more the material under examination diverges from what is in the database. Without going into too much detail, it is enough to know that in the US such software has already led to the erroneous arrest of people of color (see the cases concerning Clearview AI).
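To see how much a figure like that depends on assumptions, here is a back-of-the-envelope calculation, not Apple’s published analysis: with an invented per-image false-match rate, library size and review threshold, a binomial tail gives the chance that an innocent account crosses the threshold in a year.

```python
from math import comb

# Back-of-the-envelope illustration, not Apple's published analysis.
# Every number below is an invented assumption.

p = 1e-6         # assumed chance that a single innocent photo falsely matches
n = 10_000       # assumed photos in the account's library over a year
threshold = 30   # assumed matches required before human review

# Binomial tail P(X >= threshold); terms far above the threshold are
# negligible for such a small p, so the sum is truncated for speed.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, threshold + 50))
print(f"Chance an innocent account is flagged: ~{prob:.1e} per year")
```

Under these invented numbers the result is vanishingly small, but it grows steeply if the per-image error rate or the library size is larger than assumed, which is exactly the critics’ point about accuracy on material unlike the training set.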
Second, implementing a backdoor such as active message monitoring nullifies the industry standard that private communications, to date, are end-to-end encrypted. The Center for Democracy & Technology, a think tank, writes that “the mechanism that will allow Apple to scan images in Messages is not an alternative to a backdoor: it is a backdoor. Client-side scanning on one ‘end’ of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have warned against such scanning because it could be used by governments and companies to check the content of private communications.”
The argument is that regardless of the good intentions behind the tool, and even overlooking the fact that every citizen would be monitored like a criminal suspect without any presumption of innocence, the tool itself represents a potentially devastating “hole”, especially in the hands of an autocratic state, but just as dangerous (and illegitimate) in a democratic context. Because tomorrow the database could be expanded according to the wishes of the authorities and become a very efficient profiling system.
There is already talk of expanding the American database to include terrorism-related content, such as beheading videos, but it is worth remembering that the definition of terrorism is up to governments. One need only look to Russia and Belarus to see how such definitions can be turned against dissidents and political activists. Researcher Sarah Jamie Lewis called Apple’s update a “Rubicon moment for privacy and end-to-end encryption […] How long do you think it will be before the database is expanded to include ‘terrorist’ content? ‘Harmful but legal’ content? State censorship? I hate talking about slippery slopes, but I look at the slope, and governments around the world are covering it in oil, and Apple just pushed its customers over the edge.”
After years of progress in the (especially Western) debate between privacy and security, the monitoring system proposed by Apple is certainly a victory for the associations (and the governments) that for years have been asking technology companies for help in identifying those guilty of child pornography crimes. But it is also a step backwards with regard to the authorities’ ability to intrude into citizens’ private lives, even with all the necessary safeguards, with potentially Orwellian repercussions. Considering that similar measures are also being studied in Europe, it is appropriate that the boundary between privacy and security be drawn and decided also through a sustained public debate.
