• 1 Post
  • 144 Comments
Joined 1 year ago
Cake day: November 26th, 2023

  • There was no “huge privacy issue”.

    First of all: you could turn off the local scanning simply by turning off iCloud Photos sync - the very sync that would've sent the images to the cloud to be scanned there anyway. That's it, nothing else; nobody at Apple would've touched a single super-private file on your device.

    The local scanning required MULTIPLE matches (where n > 3; they didn't say the exact number, for obvious reasons) against known, human-verified CSAM. That database is the one that would've been downloaded from iCloud if you had sync turned on, and it's the exact same database all cloud providers already use for legal reasons. Some run other algos on top - at least Microsoft had an is_penis algorithm that shut down a German dude's whole Live account because his kid's pics were on OneDrive.
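
    To make the mechanics concrete, here's a minimal sketch of the threshold-matching idea - not Apple's actual NeuralHash pipeline; the hash values, threshold and function names below are made up for illustration. Each photo's hash is compared against the known database, and nothing is even eligible for review until the match count crosses the threshold.

    ```python
    # Toy illustration of threshold matching against a known-hash database.
    # Real systems use perceptual hashes (e.g. NeuralHash); plain strings stand in here.

    KNOWN_CSAM_HASHES = {"hash_aaa", "hash_bbb", "hash_ccc"}  # stand-in for the verified database
    MATCH_THRESHOLD = 4  # Apple only said "multiple"; 4 is an assumed placeholder

    def count_matches(photo_hashes):
        """Count how many of the device's photo hashes appear in the known database."""
        return sum(1 for h in photo_hashes if h in KNOWN_CSAM_HASHES)

    def eligible_for_review(photo_hashes):
        """Nothing gets flagged at all until the match count reaches the threshold."""
        return count_matches(photo_hashes) >= MATCH_THRESHOLD

    # A single accidental (or forged) match does nothing:
    print(eligible_for_review(["hash_aaa", "vacation_pic", "kids_in_pool"]))  # False
    ```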

    After the MULTIPLE matches (you can't get flagged by "accidentally" having one image on your phone, nor would pics of your kids in the pool trigger anything), a human checker would have had enough data to decrypt just those images and see a "reduced-resolution facsimile" (can't remember the exact term) of the offending photos. This is where all the brainpower spent on creating false matches would've ended up: you would've had to craft multiple forgeries that match known CP images AND look enough like actual CP for the human reviewer to make an erroneous call, multiple times, before anything was triggered.
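
    The "can only decrypt after enough matches" part is the clever bit. Here's a rough sketch of the underlying idea - bare-bones threshold secret sharing (Shamir-style), NOT Apple's actual safety-voucher construction, with made-up numbers: each matching image releases one share of a decryption secret, and below the threshold the shares reveal nothing.

    ```python
    import random

    # Bare-bones Shamir secret sharing to illustrate the threshold idea.
    # Not Apple's construction; parameters and names are illustrative only.
    PRIME = 2**127 - 1  # field size, big enough for a demo secret

    def make_shares(secret, threshold, n_shares):
        """Split `secret` so that any `threshold` shares reconstruct it, fewer reveal nothing."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0; only correct with >= threshold genuine shares."""
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    derivative_key = 123456789  # stands in for the key protecting the low-res "facsimiles"
    shares = make_shares(derivative_key, threshold=4, n_shares=10)  # one share per matching image

    print(reconstruct(shares[:4]) == derivative_key)  # True: threshold reached, reviewer can decrypt
    print(reconstruct(shares[:3]) == derivative_key)  # False: below threshold, shares are useless
    ```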

    If after that the human decided that yep, that’s some fucked up shit, the authorities would’ve been contacted.

    Yes, a Bad Government could've forced Apple to add other stuff to the database. (They can do that right now for ALL major cloud storage providers, BTW.) But do you really think people wouldn't have been watching the cloud-downloaded database for changes and noticed anything suspicious immediately?
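
    Watching for that kind of tampering is trivial to automate. A sketch, assuming there's a published reference digest for the database blob to compare against - the path and digest below are made up:

    ```python
    import hashlib
    import sys

    # Hypothetical watchdog: pin the cloud-downloaded hash database to a published digest.
    EXPECTED_SHA256 = "0123abcd..."  # placeholder for a digest published for auditors
    DB_PATH = "/path/to/downloaded/csam_hash_db.bin"  # placeholder path

    def database_unchanged(path, expected):
        """Return True if the downloaded database blob still matches the published digest."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest == expected

    if not database_unchanged(DB_PATH, EXPECTED_SHA256):
        sys.exit("Database blob changed - time to start asking questions.")
    ```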

    Also, according to the paper, the probability of falsely flagging an account was 1 in 1 trillion per year - and even the most hardcore activists didn't dispute that figure, btw.
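
    The threshold is exactly why that number can be so small. Rough back-of-the-envelope math - the per-image false-match rate, library size and threshold below are assumptions for illustration, not Apple's published figures:

    ```python
    from math import comb

    # Assumed, illustrative numbers - not Apple's published figures.
    p = 1e-6        # per-image chance of a false perceptual-hash match
    n = 10_000      # photos in a typical library
    threshold = 4   # matches needed before anything becomes reviewable

    # P(at least `threshold` false matches) = 1 - P(fewer than `threshold` false matches)
    p_account_flagged = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold))
    print(p_account_flagged)  # ~4e-10 with these assumptions; raising the threshold drives it down fast
    ```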

    tl;dr If you already upload your stuff to the cloud (like iOS does automatically) the only thing that would’ve changed is that nobody would’ve had a legit reason to peep at your photos in the cloud “for the children”. But if you’ve got cloud upload off anyway, nothing would’ve changed. So I still don’t understand the fervour people had over this - the only reason I can think of is not understanding how it worked.


  • Yep, it’s a legal “think of the children” requirement. Cloud providers have been doing CSAM scanning for decades already and nobody cared.

    When Apple built a system that required MULTIPLE HUMAN-VERIFIED matches of actual CP before even a hint would be sent to the authorities, it was somehow the slippery slope to a surveillance state.

    The stupidest ones were the ones who went “a-ha! I can create a false match with this utter gibberish image!”. Yes, you can do that. Now you’ve inconvenienced a human checker for 3 seconds - and only after the threshold of locally matching images has already been reached. Nobody would EVER have gotten swatted by your false matches.

    Can people say the same for Google stuff? People get accounts taken down by “AI” or “Machine learning” crap with zero recourse, and that’s not a surveillance state?