Privacy furore over Apple's plan to scan devices for Child Abuse Content

Apple recently announced a new set of child safety features coming to its devices, including the iPhone, to help limit the spread of Child Sexual Abuse Material (CSAM).

According to Apple, the upcoming iOS and iPadOS updates will give devices new capabilities, built on new applications of cryptography, to help limit the spread of CSAM online while keeping user privacy in focus.

However, privacy advocates see the CSAM detection as Apple rolling out a "mass surveillance" feature, since it amounts to scrutinizing every image sent through the platform.

How does Apple intend to detect CSAM on its platform?



Apple is relying on what it calls "NeuralHash", a system powered by a cryptographic technology known as private set intersection, which automatically scans iCloud photos when a user turns on iCloud Photos.
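
Apple has not published NeuralHash's internals, but the general idea behind a perceptual hash can be pictured with the classic "average hash" trick: shrink the image to a small grid and record which cells are brighter than average, so visually similar photos end up with near-identical fingerprints. The short Python sketch below is only that generic illustration, with made-up names and an arbitrary 8x8 grid, not Apple's algorithm.

def average_hash(gray_pixels, hash_size=8):
    """Toy perceptual hash: downscale a grayscale image (a 2D list of 0-255
    values, at least hash_size x hash_size in both dimensions) to a grid of
    block averages, then emit one bit per block depending on whether it is
    brighter than the overall mean. Visually similar images give identical or
    nearly identical bit strings, which is the property a perceptual-hash
    matcher relies on."""
    h, w = len(gray_pixels), len(gray_pixels[0])
    blocks = []
    for by in range(hash_size):
        for bx in range(hash_size):
            rows = range(by * h // hash_size, (by + 1) * h // hash_size)
            cols = range(bx * w // hash_size, (bx + 1) * w // hash_size)
            vals = [gray_pixels[y][x] for y in rows for x in cols]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return "".join("1" if b > mean else "0" for b in blocks)


if __name__ == "__main__":
    # Tiny synthetic 16x16 gradient "image", just to show the call shape.
    image = [[(x + y) * 8 for x in range(16)] for y in range(16)]
    print(average_hash(image))   # prints a 64-bit string of 0s and 1s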



The CSAM detection works by matching images on the device, before they are uploaded to the cloud, against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and, potentially, other child safety organizations.
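
Conceptually, the pre-upload check boils down to "fingerprint the photo, compare against the known list, attach the result to the upload". The Python sketch below shows only that skeleton under loose assumptions: SHA-256 stands in for NeuralHash, a plain set stands in for NCMEC's database (which, per Apple, ships to devices only in blinded form), and the direct membership test stands in for the private set intersection step that, in the real protocol, hides the match result from the device itself. The names KNOWN_CSAM_HASHES and make_safety_voucher are invented for illustration.

import hashlib

# Invented stand-in for the database of known CSAM hashes; the real one is
# delivered to devices only in blinded form, so it cannot be read or queried directly.
KNOWN_CSAM_HASHES: set[str] = set()


def make_safety_voucher(image_bytes: bytes) -> dict:
    """Conceptual pre-upload check: fingerprint the image and attach the result
    to a 'safety voucher' that accompanies the upload. SHA-256 is a readable
    stand-in for NeuralHash, and the plain membership test stands in for the
    private set intersection step that keeps the outcome hidden from the device."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {"image_hash": digest, "matched": digest in KNOWN_CSAM_HASHES}


# Every photo would go through a check like this before being uploaded to iCloud.
voucher = make_safety_voucher(b"raw image bytes would go here")
print(voucher)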

Also, the Messages app will use on-device machine learning to warn about sensitive content, while keeping communications unreadable by Apple.
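
Apple has not described the Messages classifier, so the Python fragment below only sketches what an on-device decision flow of this kind might look like: a locally computed sensitivity score is compared against a threshold, and on a child's account the image is blurred behind a warning. The function name, the source of the score, and the 0.8 threshold are all assumptions made up for the example.

def handle_incoming_image(sensitivity_score: float, is_child_account: bool,
                          threshold: float = 0.8) -> str:
    """Illustrative decision flow only: the score would come from an on-device
    image classifier (none is bundled here), and the 0.8 threshold is made up.
    Because the check runs entirely on the device, the message content never
    leaves it, which is what keeps communications unreadable by Apple."""
    if is_child_account and sensitivity_score >= threshold:
        # Blur the image and show a warning before the child can choose to view it.
        return "blurred_with_warning"
    return "shown_normally"


# Example: a locally computed score of 0.93 on a child's account triggers the warning.
print(handle_incoming_image(0.93, is_child_account=True))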


Additionally, Siri, the virtual assistant, will get an update that lets it provide parents and children with expanded information, and Search will intervene when users look up CSAM-related topics.

Finally, Apple will use another cryptographic technology called threshold secret sharing, which lets it "interpret" the contents only once an iCloud Photos account crosses a threshold of known child abuse matches. At that point the flagged content is manually reviewed to confirm there is actually a match; if so, Apple will disable the user's account, report the material to NCMEC, and pass it on to law enforcement.
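
Threshold secret sharing itself is a standard cryptographic primitive; Shamir's scheme is the textbook construction, where a secret is split into shares such that any t of them reconstruct it and fewer reveal nothing. The Python sketch below shows that generic scheme, not Apple's actual protocol: think of the secret as the key needed to decrypt the flagged vouchers, with one share becoming available per matching photo, and the threshold of 30 chosen arbitrarily for the demo.

import random

PRIME = 2**127 - 1   # field modulus; any prime larger than the secret works


def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` into points on a random polynomial of degree
    threshold - 1; any `threshold` of the returned (x, y) points suffice
    to reconstruct it, while fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def eval_at(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, eval_at(x)) for x in range(1, num_shares + 1)]


def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    key = 123456789                                    # stand-in for a per-account decryption key
    shares = make_shares(key, threshold=30, num_shares=100)
    assert recover_secret(shares[:30]) == key          # at the threshold: recoverable
    # With only 29 of the shares, recover_secret returns an unrelated value,
    # so the flagged content stays unreadable until the threshold is crossed.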

Why the privacy furore over Apple's plan to scan devices for CSAM?



As noble as the intention may be, privacy advocates fear that the system could be repurposed to detect other kinds of content, with political and personal-safety implications, or even be exploited to frame innocent people by sending them images engineered to register as matches for known child sexual abuse material.

That said, Apple users who feel their account has been mistakenly flagged can file an appeal to have the issue resolved and their account reinstated.
