Privacy considerations with Apple's planned CSAM changes


Last week Apple revealed that it is implementing new features in the next versions of its operating systems to address the spread of child sexual abuse material (CSAM). Reaction to this has been strong and divisive.

iCloud Photo Library received the most attention. I'll try to distill into bullets my understanding of where things currently stand with photos and iCloud.

- The feature will affect iOS and iPadOS devices using iCloud Photo Library. Devices not using iCloud Photo Library are not affected.
- The feature uses cryptographic hashes to match photos being synced against data derived from known CSAM images.
- The hash-matching scheme is a way to search for known images without revealing or examining the actual images.
- Essentially, two copies of the same image have matching hashes. Significantly modified versions of the same image, or different images of the same child, will have different hashes.
- Apple devices will carry a table of hashes for comparison; the actual images will not be on devices. (For a rough idea of how this kind of lookup works, see the sketch after this list.)
- The feature will not detect CSAM created on Apple devices. It will only flag known images.
- The matching occurs on the device. Apple is not scanning photos stored in iCloud.
- It appears many cloud services hosting photos already perform this kind of matching, including, to a limited extent, Apple. The New York Times reported that Facebook flagged and reported over 20 million images last year; Apple reported only 265. That should provide some context for why Apple is implementing new protective features.
- Apple says it will review flagged images in people's iCloud accounts to verify matches before alerting authorities. Apple says a threshold of flagged images must be reached before it investigates, but it does not disclose that threshold.
- If Apple deems photos to be CSAM, it will report them to the National Center for Missing and Exploited Children, which presumably will pass the report to the appropriate law enforcement agency.
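
To make that lookup concrete, here's a minimal sketch, in Swift, of what on-device matching against a known-hash table could look like. This is not Apple's implementation: Apple's system reportedly uses a perceptual hash (NeuralHash) plus cryptographic protocols so that matches below the threshold aren't revealed to anyone, while the SHA-256 lookup below only catches byte-identical copies. The hash table, threshold value, and function names are all placeholders of my own.

```swift
import Foundation
import CryptoKit

// Placeholder table of known-image hashes, standing in for the on-device database.
let knownHashes: Set<String> = []

// Arbitrary placeholder; Apple has not disclosed the real threshold.
let reviewThreshold = 10

// Hash a photo's bytes. SHA-256 only matches exact copies; Apple's NeuralHash
// is perceptual, so it also tolerates things like resizing and recompression.
func hashHex(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Count how many photos queued for iCloud upload match the known-hash table.
func matchCount(in photosQueuedForUpload: [Data]) -> Int {
    photosQueuedForUpload.filter { knownHashes.contains(hashHex(of: $0)) }.count
}

// Nothing is surfaced for human review unless the count crosses the threshold.
func shouldFlagForReview(_ photosQueuedForUpload: [Data]) -> Bool {
    matchCount(in: photosQueuedForUpload) >= reviewThreshold
}
```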

With that laid out, I have mixed feelings. I strongly prefer Apple not implement this, but pragmatically, I don't see this specific implementation as an issue. I have different thoughts about the future, but more on that in a moment.

At face value, the methods Apple describes seem reasonable. False matches are possible but very unlikely, and even if one occurred, a certain number of matches must accumulate before a review is triggered. That review would presumably catch false positives before anyone is alerted. Of course, the fact that your photos are subject to automated evaluation, an undisclosed threshold, and potential inspection is alarming, but I don't believe it to be an uncommon practice. This system appears to provide at least some privacy by keeping the matching on the device and taking no action until the threshold is reached.
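
To illustrate why the threshold matters, here's a back-of-the-envelope model with made-up numbers: if each photo independently false-matches with some tiny probability, the chance of a single stray match in a large library is noticeable, but the chance of reaching a multi-match threshold collapses to essentially nothing. The per-image rate, library size, and threshold below are invented purely for illustration.

```swift
import Foundation

// Rough model: each of `photoCount` photos independently false-matches with
// probability `perImageRate`. With a rate that small, the number of false matches
// is essentially Poisson with mean photoCount * perImageRate, and the chance of
// reaching `threshold` matches is the Poisson upper tail.
func falseFlagProbability(photoCount: Int, threshold: Int, perImageRate: Double) -> Double {
    let lambda = Double(photoCount) * perImageRate
    var term = exp(-lambda)             // P(exactly 0 false matches)
    var below = 0.0
    for k in 0..<threshold {
        below += term
        term *= lambda / Double(k + 1)  // P(k+1) = P(k) * lambda / (k + 1)
    }
    return 1 - below
}

// Made-up numbers: a 50,000-photo library and a one-in-a-million per-image rate.
print(falseFlagProbability(photoCount: 50_000, threshold: 1, perImageRate: 1e-6))
// prints roughly 0.05: about a 5% chance of one stray false match in the library
print(falseFlagProbability(photoCount: 50_000, threshold: 10, perImageRate: 1e-6))
// prints 0.0: effectively zero (it underflows double precision) once ten matches are required
```

The real rates are Apple's to state; the point is only that requiring multiple independent matches before any human review makes an accidental report dramatically less likely than a single false match.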

Also, it's important to know that Apple already has access to photos uploaded to iCloud. While the data is stored encrypted, Apple holds the keys, so it is subject to lawful searches, data breaches, and inappropriate access by employees. The ability to flag and inspect criminal activity on its services is within Apple's purview. This is something everyone should consider when evaluating any online service, including iCloud. This move by Apple likely changes the privacy/convenience equation for some, and I think that's understandable. For those who object, the logical response is to stop using iCloud. Keep in mind, however, that you'll likely face similar CSAM policies from other major cloud providers.

So, again, I don't like it, but I take a pragmatic view here. People can avoid this by storing photos the old-fashioned way, on devices, and backing them up to local storage. My assumption is that Apple is under pressure from law enforcement, lawmakers, and regulators. I wouldn't be surprised if this change is an attempt to head off laws requiring more intrusive government inspection. If that's the case, I think Apple needs to be more transparent. While protecting children is noble, Apple has made limited efforts to date. It should better explain why it's now making this significant shift in handling customer data.

While I'm not too concerned about Apple's CSAM implementation itself, I am worried about what it means for the future. Now that Apple has demonstrated it can scan, evaluate, flag, and report content on customer devices, what comes next? Will governments, perhaps secretly, compel Apple to alert authorities to content other than CSAM? Will innocent people get swept up in someone's efforts to get reelected or promoted? What about whatever a government du jour deems dangerous to those in power? Would journalists or political dissenters be targets? Would Apple itself ever use this technology against its employees to prevent leaks to competitors and reporters? Also, the rationale here appears to be limited to what is submitted to iCloud Photos, but what about other things synced to iCloud? And once these scanning functions run on devices, it seems only a small step further to implement functionality unrelated to iCloud entirely. There is a lot to consider, with perhaps only Apple's good graces standing in the way.

While iCloud Photos and CSAM got most of the attention, and that's what I wanted to dig into, Apple is making two other changes. The first relates to Siri searches and seems pretty straightforward: Siri will provide resources on reporting CSAM when asked, and if you ask Siri for material considered CSAM, it sounds like Siri will essentially try to talk you out of it. It doesn't appear there's any reporting activity here.

The second feature is in Messages. Apple says it will use on-device AI to detect incoming and outgoing content that may be sexually explicit. The feature is only available for children's accounts set up under a family account. A child will be warned about a potentially explicit image, and if the image is viewed or sent, parents will be notified. Apple says it will not have access to the Messages content, and nothing will be reported to Apple.

As a parent, I think I welcome this type of feature; however, it's unsettling that devices will be able to analyze photos sent and received. Granted, the feature is supposed to be limited to children on family accounts, but again, the technology now exists to evaluate and flag personal content. Furthermore, this goes beyond the cryptographic scheme of matching content against known inappropriate images: even though there is no reporting to Apple, the system evaluates the actual image and makes a judgment about it rather than "simply" comparing hashes.
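
For contrast with the hash lookup sketched earlier, here's a rough idea of what an on-device classifier check could look like, using Apple's Vision framework as a stand-in. Apple hasn't said what model Messages will actually use, and the "explicit" label, confidence cutoff, and notification stubs below are my own placeholders; the point is simply that a model judges image content and everything happens on the device.

```swift
import CoreGraphics
import Vision

// Sketch only: classify an image on-device and decide whether to warn the child.
// The "explicit" identifier and 0.8 cutoff are invented placeholders, not real
// Vision labels; Apple hasn't disclosed how its Messages classifier works.
func imageLooksExplicit(_ cgImage: CGImage) -> Bool {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    guard (try? handler.perform([request])) != nil else { return false }
    let observations = (request.results as? [VNClassificationObservation]) ?? []
    return observations.contains { $0.identifier == "explicit" && $0.confidence > 0.8 }
}

// Hypothetical stubs for the device-local flow; nothing here talks to a server.
func blurImageAndWarnChild(_ cgImage: CGImage) { /* present a blurred preview */ }
func notifyParentsOnFamilyAccount() { /* local notification to the parent's device */ }

func handleIncomingImage(_ cgImage: CGImage, isChildAccount: Bool, childChoseToViewAnyway: Bool) {
    guard isChildAccount, imageLooksExplicit(cgImage) else { return }
    blurImageAndWarnChild(cgImage)
    // Per Apple's description, parents hear about it only if the child views
    // or sends the image despite the warning.
    if childChoseToViewAnyway { notifyParentsOnFamilyAccount() }
}
```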

It's worth mentioning that I feel Apple may have made a communication mistake by announcing the Messages and iCloud Photo Library features together. I've seen the two features conflated more often than not.

To wrap this up, there has been considerable pushback on these changes, but also a lot of support. It will be interesting to see how things play out and whether Apple capitulates in any way or is encouraged by those voicing support. In a Q&A with the media, Apple indicated it's open to extending these capabilities to third-party apps, suggesting it may have bigger plans and isn't planning to retreat.