Apple 'intercepts' emails to flag child abuse: Here's how
Apple follows strict security protocols for iPhones and all of its key products. But when a matter is as serious as the sharing of child pornography, the company can 'intercept' emails to help authorities with investigations. It does not read all of its customers' emails; instead, it relies on an automated detection technique, details of which have been highlighted by Forbes. Here's all about it.
Search warrant reveals Apple's way of handling child abuse
Even though Apple keeps a tight lid on how it handles criminal cases, the folks at Forbes managed to obtain a search warrant related to the flagging of child abuse, offering more insight into its process. The document revealed that instead of reading every single email for illegal material, the Cupertino giant scans messages for 'hashes,' or digital signatures, of previously identified child abuse photos/videos.
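To make the idea concrete, here is a minimal sketch, in Swift, of what matching an email attachment against a list of known hashes could look like. It is not Apple's actual implementation: the digest used here is a plain SHA-256 hash, whereas systems of this kind typically rely on perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-compression, and the hash list shown is hypothetical.

```swift
import Foundation
import CryptoKit

// Hypothetical list of digests of previously identified illegal images.
// In a real system this would be a large, securely maintained database.
let knownHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

/// Hex-encoded SHA-256 digest of an attachment's bytes.
func digest(of attachment: Data) -> String {
    SHA256.hash(data: attachment)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// True if the attachment's digest matches a previously identified image.
func matchesKnownImage(_ attachment: Data) -> Bool {
    knownHashes.contains(digest(of: attachment))
}
```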
How does this method work?
Like Facebook and Google, Apple has trained its systems to detect these hashes. When a person sends an email containing an image whose hash matches one previously confirmed as illegal, the system automatically flags the message and quarantines it, along with the illegal content, for further inspection by dedicated teams. These teams analyze the flagged content to confirm whether it really is illegal.
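Building on the sketch above, the flagging step might look like the following: every attachment is checked against the hash list, and a message with any match is held back for human review instead of being delivered. The types and function names are again hypothetical and only illustrate the described flow.

```swift
// Hypothetical representation of an outgoing email.
struct OutgoingEmail {
    let id: UUID
    let attachments: [Data]
}

enum ScanResult {
    case deliver                  // no matches; send normally
    case quarantine(matches: Int) // hold for inspection by a dedicated team
}

/// Flags and quarantines a message if any attachment hashes to a known image.
func scan(_ email: OutgoingEmail) -> ScanResult {
    let matches = email.attachments.filter(matchesKnownImage).count
    return matches > 0 ? .quarantine(matches: matches) : .deliver
}
```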
Then, the relevant authorities are contacted
If Apple's teams confirm that the flagged content is illegal, the company notifies the relevant authorities. It sends the content in question to the National Center for Missing and Exploited Children (NCMEC) in most cases. NCMEC then looks into the matter, prompting law enforcement to launch a criminal investigation.
Here's what an Apple employee said after flagging illegal emails
"When we intercept the email with suspected images, they don't go," an Apple employee said in the Forbes-uncovered search warrant. Here, the "individual . . . sent eight emails that we intercepted. [Seven] of those emails contained 12 images. All emails and images were same."
Hashing has proved successful in flagging cases
According to the New York Times, hashing helped tech companies flag over 45 million photos/videos of child abuse over the last year. Moreover, over 18 million reports went to the national center, with 12 million of those coming from Facebook Messenger alone. Evidently, technology plays a big role both in the spread of online child exploitation and in the efforts to prevent it.