Cryptography expert Matthew Green claims that Apple will soon release a new client-side photo hashing system to detect child abuse images in users’ photo libraries. The new system is reportedly the company’s effort to flag child abusers and pedophiles.
Hashing is a process that maps a piece of data to a fixed-size value called a hash code, or simply a hash. The function that performs this mapping is known as a hash function, and hash functions come in two broad varieties: cryptographic and non-cryptographic.
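For illustration only (this is not Apple’s algorithm), a cryptographic hash such as SHA-256 maps any input to a fixed-size digest, and changing even a single byte of the input yields a completely unrelated digest:

```python
import hashlib

# Illustrative only, not Apple's system: a cryptographic hash maps
# arbitrary data to a fixed-size digest.
original = b"example photo bytes"
modified = b"example photo bytes."  # one byte changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
# The two digests look completely unrelated, which is why plain
# cryptographic hashes cannot recognize resized or re-compressed photos.
```

That brittleness is why systems that need to recognize edited or re-compressed photos rely on perceptual, non-cryptographic hashes instead, a point that becomes important below.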
Apple’s alleged client-side hashing system to report child abuse content might not be a good idea
As reported, Apple will ship a set of fingerprints (hash codes) representing known illegal content to the iPhone. Each photo in the user’s camera roll will be hashed and compared against that list to identify illicit content like child pornography and other abusive material. To maintain users’ privacy, all matching will be done on the device, much like existing on-device machine learning features, and only the IDs of matched photos will be reported for human review.
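In rough outline, and only as a sketch (the fingerprint list, function names, and use of SHA-256 below are hypothetical stand-ins for whatever perceptual hashing Apple actually uses), on-device matching could look like this:

```python
import hashlib

# Hypothetical fingerprint list shipped to the device. A real system would use
# perceptual hashes of known abuse imagery, not SHA-256 of placeholder bytes.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known-illegal-image-1").hexdigest(),
    hashlib.sha256(b"known-illegal-image-2").hexdigest(),
}

def scan_library(photos: list[bytes]) -> list[int]:
    """Return the indices of photos whose fingerprint appears in the list.

    Only these matched IDs would leave the device for human review; the
    photos themselves stay local, as the report describes.
    """
    matches = []
    for photo_id, photo in enumerate(photos):
        if hashlib.sha256(photo).hexdigest() in KNOWN_FINGERPRINTS:
            matches.append(photo_id)
    return matches

library = [b"vacation-photo", b"known-illegal-image-1", b"cat-photo"]
print(scan_library(library))  # -> [1]
```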
However, Green argues that the new system can be easily exploited and can flag false positives. In a detailed Twitter thread, he warns that Apple’s new hashing system would open the floodgates to mass surveillance by miscreants and government agencies in the name of looking for “harmful” content.
These images are from an investigation using much simpler hash function than the new one Apple’s developing. They show how machine learning can be used to find such collisions. https://t.co/N4sIGciGZj
— Matthew Green (@matthew_d_green) August 5, 2021
Users’ messages and most of their data are end-to-end encrypted, which provides a necessary layer of protection against unwarranted spying. Green warns that Apple’s hashing system would give anyone creating a hash list a way into that encrypted data by “surveilling every image anyone sends,” and in the wrong hands it could supply material for blackmail and defamation.
But there are worse things than worrying about Apple being malicious. I mentioned that these perceptual hash functions were “imprecise”. This is on purpose. They’re designed to find images that look like the bad images, even if they’ve been resized, compressed, etc.
This means that, depending on how they work, it might be possible for someone to make problematic images that “match” entirely harmless images. Like political images shared by persecuted groups. These harmless images would be reported to the provider.
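As a toy illustration (far simpler than anything Apple would deploy, and with a purely hypothetical matching threshold), the perceptual “average hash” below still matches a photo after it has been brightened, because a match is defined as a small Hamming distance rather than exact equality:

```python
# Toy perceptual hash: 1 bit per pixel of an 8x8 grayscale image, set when the
# pixel is brighter than the image's mean. Nothing here is Apple's algorithm.

def average_hash(pixels: list[list[int]]) -> int:
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# A synthetic "photo" and a slightly brightened copy of it.
photo = [[(row * 8 + col) * 4 % 256 for col in range(8)] for row in range(8)]
brighter = [[min(255, p + 10) for p in row] for row in photo]

# MATCH_THRESHOLD is illustrative; real systems tune it between missing
# edited copies of known images and flagging unrelated, innocent photos.
MATCH_THRESHOLD = 5
distance = hamming(average_hash(photo), average_hash(brighter))
print(distance, distance <= MATCH_THRESHOLD)  # small distance -> still a match
```

That tolerance is precisely what an adversary could exploit: an image can be crafted to look harmless yet land within the threshold of a listed fingerprint and be reported.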
In conclusion, Green believes the system is a mistake whose consequences will become clear only when it is too late for any remedy. What is your opinion on the new hashing system? Let us know in the comments.
Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone.
And by the time we find out it was a mistake, it will be way too late.
— Matthew Green (@matthew_d_green) August 5, 2021