In early August, Apple announced new child safety features slated to arrive in upcoming updates to iOS, iPadOS and macOS. The first change is that the Messages app warns minors (and their parents) about sexually explicit images and gives parents the option to be alerted if a child views or sends such an image. The second involves tweaking Siri and Search to intervene when someone makes queries related to Child Sexual Abuse Material (CSAM). The last, and most significant, introduces automatic on-device matching of photos stored in iCloud Photos against a database of known CSAM content. If the system discovers enough flagged pictures, a report is sent to a moderator for evaluation; if the moderator confirms the assessment, Apple decrypts the photos and shares them with the relevant authorities.
These features were announced with the stated intent of protecting children from sexual predators, and the intentions behind them do sound admirable. But the changes have been met with considerable backlash and a sense of betrayal.
Of the three features, the changes to Siri and Search have been generally uncontroversial. The others, however, have seen massive opposition, ranging from discontent over the Messages tweak to outrage over the CSAM scanning, so much so that Apple was forced to delay (but not stop) their rollout.
It may still be unclear why there is opposition at all, or why I'm asking you to be scared.
Even if well-intended, these new features are a massive invasion of privacy and have the potential to inflict serious damage. Coming from Apple, a company that prides itself on taking customer privacy seriously (extending even into its advertisements' music choices), this is a huge disappointment.
The largest change is the monitoring of people's photo libraries. Some might be tempted to think the detection process itself is novel and problematic, a concern that is a natural spillover from tech's well-documented issues with image scanning and detection. Studies have shown, for example, that facial recognition software from IBM, Amazon and Microsoft has underperformed for people of color and women. Recognition software is only as good as its training dataset: train it on a homogenous dataset, and it will struggle with diversity in the real world.
These are valid concerns, but they are not the whole picture. Though it is not common knowledge, it is commonplace for major cloud service providers to scan for CSAM hosted on their platforms. This is done with the help of a database of known CSAM content maintained by the National Center for Missing and Exploited Children, along with checks to prevent false reports.
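The database-matching and reporting-threshold logic described above can be sketched in a few lines of Python. To be clear, everything here is illustrative: the function names and threshold are invented for this sketch, and a cryptographic hash stands in for the perceptual hashes real systems use (which, unlike SHA-256, tolerate resizing and re-encoding).

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Compute a fingerprint of an image's raw bytes.

    A cryptographic hash is used here purely for simplicity; production
    CSAM scanners use perceptual hashes so that a resized or re-encoded
    copy of a known image still matches.
    """
    return hashlib.sha256(data).hexdigest()


def scan_library(photos, known_hashes, threshold=3):
    """Count matches against the known-content database.

    A human moderator is only alerted once the match count crosses a
    threshold, mirroring the "enough flagged pictures" check described
    in the column.
    """
    matches = [p for p in photos if fingerprint(p) in known_hashes]
    return len(matches), len(matches) >= threshold


# Hypothetical example: one known bad image, appearing twice in a library.
bad = b"known-bad-image-bytes"
known = {fingerprint(bad)}
count, flagged = scan_library([bad, b"cat.jpg", bad], known, threshold=3)
# count == 2, flagged is False: below the threshold, so no report is sent.
```

The threshold is the important design choice: it is what turns a single accidental match into a non-event, which is one of the "checks to prevent false reports" mentioned above.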
If this is an established procedure with checks in place to avoid false flagging, why is there backlash at all? The answer lies in where Apple is conducting these scans.
Traditionally, all of this happens on a company's servers, which cannot see the contents of end-to-end encrypted data. Apple's proposed scanning would instead occur on-device, with the ability to decrypt photos if need be. That is dangerous: end-to-end encryption does not hide information from the device itself, so on-device scanning creates a potential backdoor for cyberattacks.
Speaking of cyberattacks, Apple only recently released a security patch to protect iPhones from spyware that could turn on the camera and microphone on demand and read messages and other local data, all without any visible sign. Apple, like any other tech giant, is only a few steps ahead of attackers at any given time (and in some cases, perhaps a few steps behind).
This leads to other issues. Countries could put pressure on Apple to report photos it finds objectionable, such as photos of protests or dissenters. Will Apple always be able to say no?
Even if Apple does manage to resist these demands, many companies sell software exploits that give access to devices to governments. These are all scary scenarios.
So while the cause behind Apple's new software updates may be noble, the risks are too great for them to be considered safe.
Siddharth Parmar is an Opinion Columnist and can be reached at firstname.lastname@example.org.