Apple’s New CSAM Detection Feature Could’ve Been Communicated Better, Craig Federighi Admits In Interview

The big news over the last week has undoubtedly been Apple’s upcoming CSAM detection system. Designed to help identify photos that depict child abuse, the system checks images being uploaded to iCloud Photos against a database of known CSAM.

The way the system actually works is considerably more complicated than that, and it is that complexity that has caused so much confusion. Now, Apple’s software engineering chief Craig Federighi has spoken about that confusion.

In an interview with the Wall Street Journal’s Joanna Stern, Federighi admitted that the new feature could have been explained “a little more clearly,” but reiterated that images are only checked for CSAM if iCloud Photos is enabled. Disable it, and no checking is carried out.

Federighi also confirmed that the threshold of matched images required before any manual review takes place is “in the order of 30.” Apple has since confirmed that figure, saying that people who actually have CSAM on their iPhones and iPads typically have far more than 30 such images.
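In other words, the rule Federighi describes is a simple one: count how many uploaded images match known CSAM hashes, and only trigger a human review once that count crosses roughly 30. Purely as an illustration, here is a minimal Swift sketch of that counting rule, with invented names (`CSAMMatcher`, `shouldFlagForReview`, placeholder hash strings); Apple’s real system does the matching on-device with NeuralHash and cryptographic safety vouchers, which this sketch does not attempt to model.

```swift
import Foundation

// Hypothetical sketch of the threshold rule Federighi describes.
// All names and values here are illustrative, not Apple's implementation.
struct CSAMMatcher {
    // Hashes of known CSAM images (supplied by child-safety organisations
    // in the real system; placeholder strings in this sketch).
    let knownHashes: Set<String>

    // Manual review only happens once matches reach this threshold,
    // which Federighi put "in the order of 30".
    let reviewThreshold = 30

    // Count how many of the images queued for iCloud Photos upload match
    // a known hash, and decide whether a human review would be triggered.
    func shouldFlagForReview(uploadHashes: [String]) -> Bool {
        let matchCount = uploadHashes.filter { knownHashes.contains($0) }.count
        return matchCount >= reviewThreshold
    }
}

// Example: a library with only a couple of matches stays well below the
// threshold, so nothing is ever surfaced for review.
let matcher = CSAMMatcher(knownHashes: ["hashA", "hashB"])
print(matcher.shouldFlagForReview(uploadHashes: ["hashA", "hashC"])) // false
```

The point of the threshold is exactly what Apple says: isolated false matches never reach a human reviewer, because review only kicks in once an account accumulates a large number of matches.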

Federighi also pointed out that Apple’s system is designed to be auditable, something that isn’t possible when companies carry out photo checking on their own servers, an approach Apple wanted to avoid.

The full Journal interview is well worth a watch, and we’ve embedded it above.
