
Hash collision in Apple NeuralHash model

dang 2021-08-18 19:11:40 +0000 UTC [ - ]

Ongoing related threads:

Apple defends anti-child abuse imagery tech after claims of ‘hash collisions’ - https://news.ycombinator.com/item?id=28225706

Convert Apple NeuralHash model for CSAM Detection to ONNX - https://news.ycombinator.com/item?id=28218391

topynate 2021-08-18 12:58:51 +0000 UTC [ - ]

Second preimage attacks are trivial because of how the algorithm works. The image goes through a neural network (one to which everyone has access), the output vector is put through a linear transformation, and that vector is binarized, then cryptographically hashed. It's trivial to perturb any image you might wish so as to be close to the original output vector. This will result in it having the same binarization, hence the same hash. I believe the neural network is a pretty conventional convolutional one, so adversarial perturbations will exist that are invisible to the naked eye.
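
In code, the shape of that perturbation attack is roughly the following. Everything here is a stand-in (a toy CNN, a random projection, SHA-256 as the final hash), not the extracted NeuralHash model, but the structure mirrors the pipeline described above:

    import hashlib
    import torch
    import torch.nn as nn

    embed_dim, hash_bits = 128, 96

    # Toy stand-ins for the pieces described above: an embedding network,
    # a linear projection, binarization, then a cryptographic hash.
    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
    )
    projection = torch.randn(hash_bits, embed_dim)

    def neural_hash(image):
        # image: (1, 3, H, W) tensor. Embed, project, binarize, hash.
        bits = (projection @ backbone(image).squeeze(0) > 0).to(torch.uint8)
        return hashlib.sha256(bytes(bits.tolist())).hexdigest()

    def perturb_to_match(source, target, steps=500):
        # Nudge `source` so its pre-binarization vector lands on the same
        # side of every hyperplane as `target`'s, giving identical bits
        # and hence the same final hash.
        with torch.no_grad():
            target_vec = projection @ backbone(target).squeeze(0)
        adv = source.clone().requires_grad_(True)
        opt = torch.optim.Adam([adv], lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            vec = projection @ backbone(adv).squeeze(0)
            # Hinge loss: penalize coordinates whose sign disagrees with the
            # target's (with a small margin), leaving the image mostly intact.
            loss = torch.relu(0.1 - vec * target_vec.sign()).sum()
            loss.backward()
            opt.step()
        return adv.detach()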

This is useful for two purposes I can think of. One, you can randomize all the vectors on all of your images. Two, you can make problems for others by giving them harmless-looking images that have been cooked to give particular hashes. I'm not sure how bad those problems would be – at some point a police officer does have to look at the image in order to get probable cause. Perhaps it could lead to your Apple account being suspended, however.

Kalium 2021-08-18 13:08:16 +0000 UTC [ - ]

A police raid on a person's home, or even a gentler thorough search, can be enough to quite seriously disrupt a person's life. Certainly having the police walk away with all your electronics in evidence bags will complicate trying to work remotely.

Of course, this is assuming everything works as intended and they don't find anything else they can use to charge you with something as they search your home. If you smoke cannabis while being in the wrong state, you're now in several more kinds of trouble.

topynate 2021-08-18 13:16:51 +0000 UTC [ - ]

This could happen with a perturbed image, but I doubt it. Apple will send the suspicious images to the relevant authorities. Those authorities will then look at the images. The chances are low that they will then seek a search, even though the images are innocent upon visual inspection. But maybe in some places a ping from Apple is good enough for a search and seizure.

falcolas 2021-08-18 14:48:26 +0000 UTC [ - ]

FWIW, they won't send the images. Even in the pursuit of knocking back CSAM, there are strict restrictions on the transmission and viewing of CSAM - in some cases even the defendant's lawyers don't get to see the images themselves in preparation for a trial, just a description of the contents. Apple employees or contractors will likely not look at the images themselves, only visual hashes.

They will instead contact the police and say "Person X has Y images that are on list Z," and let the police get a warrant based off that information and execute it to check for actual CSAM.

topynate 2021-08-18 15:01:23 +0000 UTC [ - ]

On reflection, yes, there must be warrants involved. I'm raising my estimate of how likely it is that innocent people get raided due to this. The warrant would properly only be to search iCloud, not some guy's house, but I can easily see overly-broad warrants being issued.

gowld 2021-08-18 15:27:43 +0000 UTC [ - ]

> The warrant would properly only be to search iCloud,

iCloud is encrypted, so that warrant is useless.

They need to unlock and search the device.

ggreer 2021-08-18 20:43:18 +0000 UTC [ - ]

The data is encrypted, but Apple has the keys. If they get a warrant, they'll decrypt your data and hand it over. See page 11 of Apple's law enforcement process guidelines[1]:

> iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent.

1. https://www.apple.com/legal/privacy/law-enforcement-guidelin...

topynate 2021-08-18 15:34:29 +0000 UTC [ - ]

Yes, it's encrypted, but part of this anti-CSAM strategy is a threshold encryption scheme that allows Apple to decrypt photos if a certain number of them have suspicious hashes.
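
For a rough idea of the mechanism, here's a minimal Shamir-style sketch of threshold decryption with toy parameters; Apple's actual construction differs in detail, but the principle is that the key only becomes recoverable once enough matching vouchers exist:

    import random

    PRIME = 2 ** 127 - 1  # toy field, large enough for the illustration

    def make_shares(secret, threshold, count):
        # One share per (matching) voucher; any `threshold` of them suffice.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, threshold=30, count=1000)
    assert reconstruct(shares[:30]) == 123456789   # 30 matches: key recovered
    # With 29 or fewer shares, every candidate secret remains equally likely.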

Kalium 2021-08-18 16:17:53 +0000 UTC [ - ]

Apple having any kind of ability to decrypt user contents is disconcerting. It means they can be subpoenaed for that information.

heavyset_go 2021-08-18 20:45:10 +0000 UTC [ - ]

> It means they can be subpoenaed for that information.

They regularly are, and they regularly give up customer data in order to comply with subpoenas[1]. They give up customer data in response to government requests for 150,000 users/accounts a year[1].

[1] https://www.apple.com/legal/transparency/us.html

bbatsell 2021-08-18 19:02:33 +0000 UTC [ - ]

No, the threshold encryption only allows for the 30+ cryptographic “vouchers” to be unlocked, which contain details about the hash matching as well as a “visual derivative” of the image. We don’t know any details about the visual derivative.

sudosysgen 2021-08-18 19:33:34 +0000 UTC [ - ]

I'm guessing the visual derivative is a difference of sorts between the image and the CSAM? Not sure, of course.

HALtheWise 2021-08-18 19:40:06 +0000 UTC [ - ]

No, since Apple _definitely_ isn't sending CSAM photos to your phone so they can be diffed. Most likely, the visual derivative is a thumbnail or blurred version of the image.

heavyset_go 2021-08-18 20:42:47 +0000 UTC [ - ]

It's "encrypted", but Apple holds the keys and they regularly give up customers' data in response to government requests for it[1].

[1] https://www.apple.com/legal/transparency/us.html

alfiedotwtf 2021-08-18 19:23:55 +0000 UTC [ - ]

Didn't the FBI stop Apple from encrypting iCloud, and only Messenger is e2e?

__blockcipher__ 2021-08-18 21:05:00 +0000 UTC [ - ]

> in some cases even the defendant's lawyers don't usually see the images themselves in preparation for a trial, just a description of the contents

Man that seems horrible. So you just have to trust the description is accurate? You’d think there’d at least be a “private viewing room” type thing (I get the obvious concern of not giving them a file to take home)

Kalium 2021-08-18 13:27:12 +0000 UTC [ - ]

In broad strokes, I agree with you. I think you're absolutely correct that most trained, educated, technologically sophisticated law enforcement bodies will do exactly as you suggest and decide that there's not enough to investigate.

That said, I'm not willing to say it won't happen. There are too many law enforcement entities of wildly varying levels of professionalism, staffing, and technical sophistication. Someone innocent, somewhere, is likely to have a multi-year legal drama because their local PD got an email from Apple.

And we haven't even gotten to subjects like how some LEOs will happily plant evidence once they decide you're guilty.

cirrus3 2021-08-18 23:50:06 +0000 UTC [ - ]

The police do not show up until a human has compared the matched image to the actual one.

Just stop.

teachrdan 2021-08-18 23:52:46 +0000 UTC [ - ]

The more collisions, the more chances of a false positive by a (tired, underpaid) human. I don't envy the innocent person whose home gets raided by a SWAT team convinced they're busting a child sex trafficker.

hda2 2021-08-19 02:57:57 +0000 UTC [ - ]

The database is known to contain non-csam images like porn. I doubt the reviewer will be qualified to discern which is which.

Operyl 2021-08-19 10:53:07 +0000 UTC [ - ]

Not that I doubt your motives, but can I get your source on this? It seems like a huge blunder if so.

e_proxus 2021-08-18 18:59:43 +0000 UTC [ - ]

Wouldn’t it also just be possible to turn a jailbroken iDevice into a CSAM cleaner/hider?

You could take actual CSAM, check if it matches the hashes and keep modifying the material until it doesn’t (adding borders, watermarking, changing dimensions etc.). Then just save it as usual without any risk.

cyanite 2021-08-18 19:51:05 +0000 UTC [ - ]

No it’s not. The client hashes against a blinded set, and doesn’t know whether or not the hash is a hit or miss.

tambourine_man 2021-08-18 13:18:25 +0000 UTC [ - ]

Neuralhashes are far from my area of expertise, but I've been following Apple closely ever since its foundation and have probably watched every public video of Craig since the NeXT takeover, and here is my take: I've never seen him so off balance before as in his latest interview with Joanna Stern. Not even in the infamous “shaking mouse hand close up” of the early days.

Whatever you say about Apple, they are an extremely well oiled communication machine. Every C-level phrase has a well thought out message to deliver.

This interview was a train wreck. Joanna kept asking: please, in simple terms, to a hesitant and inarticulate Craig. It was so bad that she had to produce infographics to fill the communication void left by Apple.

They usually do their best to “take control” of the narrative. They were clearly caught way off guard here. And that's revealing.

cwizou 2021-08-18 13:59:09 +0000 UTC [ - ]

I think they clearly didn't anticipate that people would perceive it as a breach of trust - that their device was working against them (even for a good cause, against the worst people).

And because of this they calibrated their communication completely wrong, focusing on the on-device part as being more private, using the same line of thinking they use for putting Siri on device.

And the follow up was an uncoordinated mess that didn't help either (as you rightly pointed out with Craig's interview). In the Neuenschwander interview [1], he stated this :

> The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.

This still has me confused. Here's my understanding so far (please feel free to correct me):

- Apple is shipping a neural network trained on the dataset that generates NeuralHashes

- Apple also ships (where?) a "blinded" (by an elliptic curve algo) table lookup that matches (all possible?!) NeuralHashes to a key

- This key is used to encrypt the NeuralHash and the derivative image (that would be used by the manual review) and this bundle is called the voucher

- A final check is done server side, using the secret used to generate the elliptic curve blinding to recover the NeuralHash and check it against the known database

- If 30 or more are detected, decrypt all vouchers and send the derivative images to manual review.

I think I'm missing something regarding the blinded table as I don't see what it brings to the table in that scenario, apart from adding a complex key generation for the vouchers. If that table only contained the NeuralHashes of known CSAM images as keys, that would be as good as giving the list to people knowing the model is easily extracted. And if it's not a table lookup but just a cryptographic function, I don't see where the blinded table is coming from in Apple's documentation [2].
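
For what it's worth, the blinding is roughly a Diffie-Hellman-style trick. Here's a toy version using integers mod a prime instead of an elliptic curve, SHA-256 in place of NeuralHash, and skipping the table indexing that keeps the device from learning hit/miss; it only illustrates why matching can be done server side without shipping the raw hash list:

    import hashlib
    import secrets

    P = 2 ** 127 - 1  # toy prime modulus (the real scheme uses an elliptic curve)

    def hash_to_exponent(data):
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1) or 1

    # Server side: every database NeuralHash is blinded with a secret exponent s.
    # The device only ever sees these blinded values, never s or the raw hashes.
    s = secrets.randbelow(P - 2) + 1
    def blind(neural_hash):
        return pow(3, hash_to_exponent(neural_hash) * s, P)

    # Device side: re-blind with fresh randomness r and use the result as key
    # material for the voucher. Without s, the device learns nothing.
    def device_voucher(image_hash, blinded_entry):
        r = secrets.randbelow(P - 2) + 1
        return pow(3, hash_to_exponent(image_hash) * r, P), pow(blinded_entry, r, P)

    entry = blind(b"some-database-neuralhash")
    device_part, key_material = device_voucher(b"some-database-neuralhash", entry)
    # Only the server, holding s, can tell that the two hashes matched:
    assert pow(device_part, s, P) == key_material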

Assuming the above is correct, I'm paradoxically feeling a tiny bit better about that system on a technical level (I still think doing anything client side is a very bad precedent), but what a mess they've put themselves into.

Had they done this purely server side (and to be frank there's not much difference, the significant part seems to be done server side) this would have been a complete non-event.

[1] : https://daringfireball.net/linked/2021/08/11/panzarino-neuen...

[2] This is my understanding based on the repository and what's written on pages 6-7: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

brokenmachine 2021-08-19 00:45:47 +0000 UTC [ - ]

Thanks for the description.

That's a *huge* amount of crypto mumbo-jumbo for a system to scan your data on your own device and send it to the authorities.

They must really care about children!!

If only this system was in place while Trump, Jeffrey Epstein, and Prince Andrew were raping children, surely none of that would have happened!! /s

HatchedLake721 2021-08-18 15:09:18 +0000 UTC [ - ]

Can you link the interview please?

brandon272 2021-08-18 15:36:34 +0000 UTC [ - ]

I assume it is this one: https://www.youtube.com/watch?v=OQUO1DSwYN0

This was painful to watch.

tambourine_man 2021-08-18 17:12:06 +0000 UTC [ - ]

That's the one, yes. Thank you.

FabHK 2021-08-18 10:20:30 +0000 UTC [ - ]

How can you use it for targeted attacks?

This is what would need to happen:

1. Attacker generates images that collide with known CSAM material in the database (the NeuralHashes of which, unless I'm mistaken, are not available)

2. Attacker sends that to innocent person

3. Innocent person accepts and stores the picture

4. Actually, need to run step 1-3 at least 30 times

5. Innocent person has iCloud syncing enabled

6. Apple's CSAM detection then flags these, and they're manually reviewed

7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

Note that other cloud providers have been scanning uploaded photos for years. What has changed wrt targeted attacks against innocent people?

fsloth 2021-08-18 11:01:10 +0000 UTC [ - ]

"How can you use it for targeted attacks?"

Just insert a known CSAM image on target's device. Done.

I presume this could be used against a rival political party to ruin their reputation - insert a bunch of CSAM images on their devices. "Party X is revealed as an abuse ring". This goes oh-so-very-nicely with Qanon conspiracy theories, which don't even require any evidence to propagate widely.

Wait for Apple to find the images. When police investigation is opened, make it very public. Start a social media campaign at the same time.

It's enough to fabricate evidence only for a while - the public perception of the individual or the group will be perpetually altered, even though it would surface later that the CSAM material was inserted by hostile third party.

You have to think about what nation state entities that are now clients of Pegasus and so on could do with this. Not how safe the individual component is.

kreitje 2021-08-18 11:28:50 +0000 UTC [ - ]

FL Rep Randy Fine filed a report with the Florida Department of Law Enforcement alleging that the sheriff was going to plant CSAM on his computer and arrest him for it.

They are even in the same political party.

https://www.reddit.com/r/321/comments/jt32rs/fdle_report_bet...

JimBard2 2021-08-18 12:58:59 +0000 UTC [ - ]

Indeed this is not new and has probably been happening for many years already. There are services advertising on dark net markets to “ruin someone’s life” which means you pay some Ukrainian guy $300 in Bitcoin and he plants CSAM on a target’s computer.

JKCalhoun 2021-08-18 14:39:51 +0000 UTC [ - ]

> Just insert a known CSAM image on target's device.

Or maybe thirty. You have to surpass the threshold.

Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?

faeriechangling 2021-08-19 01:34:48 +0000 UTC [ - ]

Their implementations are not on-device and thus are actually significantly more difficult to reverse engineer. Apple's unique implementation of on-device scanning is much easier to reverse engineer and thus exploit.

Now Apple is in this crappy situation where they can't claim their software is secure because it's open source and auditable, but they also can't claim it's secure because it's closed source and they fixed the problems in some later version, because this entire debacle has likely destroyed all faith in their competence. If Apple is in the position of having to boast "Trust us bro, your iPhone won't be exploited to get you SWATTED over CSAM anymore, we patched it", the big question is: why is Apple voluntarily adding something to their devices, where the failure mode is violent imprisonment and severe loss of reputation, when they are not completely competent?

This entire debacle reminds me of this video: https://www.youtube.com/watch?v=tVq1wgIN62E

romwell 2021-08-18 18:25:31 +0000 UTC [ - ]

>Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?

>their services

>T H E I R S E R V I C E S

Because it's on their SERVICES, not on their user's DEVICES, for one.

Also, regardless of swatting, that's why we have an issue with Apple.

stephenr 2021-08-18 19:11:25 +0000 UTC [ - ]

It only happens on Apple devices right before the content is uploaded to the service.

How is that a meaningful difference for the stated end goals, one that can explain the lack of precedent?

croutonwagon 2021-08-18 21:26:25 +0000 UTC [ - ]

I think this is where a disconnect is occurring.

In this specific case yes. That is what is supposed to happen.

But Apple also sets the standard that this is just the beginning, not the end. They say as much on page 3 in bold, differentiated color ink

https://www.apple.com/child-safety/pdf/Expanded_Protections_...

And there’s nothing to stop them from scanning all images on a device. Or scanning all content for keywords or whatever. iCloud being used as a qualifier is a red herring to what this change is capable of.

Maybe someone shooting guns is now unacceptable; kids have been kicked out of schools for posting them on Facebook or having them in their rooms on Zoom. What if it's kids shooting guns? There are so many possibilities for how this could be misused, abused, or even just an "oops, sorry I upended your life" to solve a problem that is so very rare.

Add to that, their messaging has been muddy at best. And it incited a flame war. A big part of that is that iCloud is not a single thing. It's a service: it can sync and hold iMessages, it can sync backups, or in my case we have shared iCloud albums that we use to share images with family. Others are free to upload and share. In fact that's our only use of iCloud other than Find My. They say "iCloud photos" as if that's just a single thing, but it's easy to extrapolate that to images in iMessages, backups, etc.

And the non profit that hosts this database is not publicly accountable. They have public employees on their payroll but really they can put whatever they want in that database. They have no accountability or public disclosure requirements.

So even I, when their main page was like 3 articles, was a bit perturbed and put off. I'm not going to ditch my iPhone, mainly because it's work assigned, but I have been keeping a keen eye on what's happening, how it's happening, and will keep an eye out for the changes they are promising. I'm also going to guess they won't nearly be as high profile in the future.

cyanite 2021-08-18 19:58:15 +0000 UTC [ - ]

> Because it's on their SERVICES, not on their user's DEVICES, for one.

Effectively the same for Apple. It’s only when uploading the photo. Doing it on device means the server side gets less information.

short_sells_poo 2021-08-18 14:51:49 +0000 UTC [ - ]

Who cares that Bad Company XYZ, already well known for not caring about customer privacy, does it? Wouldn't you want to push back against ever more surveillance? Apple was beating the drum of privacy while it was convenient; wouldn't you want to hold their feet to the fire now that they seem to have done a U-turn?

sweetheart 2021-08-18 14:58:57 +0000 UTC [ - ]

Their point is that the attack vector being described isn't new, as CSAM could already be weaponized against folks, and we never really hear of that happening. So the OP is simply saying that perhaps it's not an issue we need to worry about. I happen to agree with them.

short_sells_poo 2021-08-18 15:10:39 +0000 UTC [ - ]

So in your mind, because so far we've seen no evidence that this has been abused, it's nothing to worry about going forward? And that making an existing situation even more widespread is also completely OK?

sweetheart 2021-08-18 18:22:07 +0000 UTC [ - ]

> So in your mind, because so far we've seen no evidence that this has been abused, it's nothing to worry about going forward?

Yeah, basically. It doesn't seem like people actually use CSAM to screw over innocent folks, so I don't think we need to worry about it. What Apple is doing doesn't really make that any easier, so it's either already a problem, or not a problem.

> And that making an existing situation even more widespread is also completely OK?

I don't know if I'd say any of this is "completely OK", as I don't think I've fully formed my opinion on this whole Apple CSAM debate, but I at least agree with OP that I don't think we need to suddenly worry about people weaponizing CSAM all of a sudden when it's been an option for years now with no real stories of anyone actually being victimized.

colejohnson66 2021-08-18 15:57:29 +0000 UTC [ - ]

It’s that the situation can’t be a “slippery slope” if there’s no evidence of there being one prior

hannasanarion 2021-08-18 18:44:03 +0000 UTC [ - ]

> I presume this could be used against a rival political party to ruin their reputation - insert bunch of CSAM images on their devices.

Okay, and then what? You think people will just look at this cute picture of a dog and be like "welp, the computer says it's a photo of child abuse, so we're taking you to jail anyway"?

russdpale 2021-08-18 17:29:06 +0000 UTC [ - ]

You don't have to add a real picture, just add the hash of a benign, (new) cat picture to the db, then put the cat picture on the phone, then release a statement saying the person has popped up on your list. By the time the truth comes out the damage is done.

FabHK 2021-08-18 20:48:19 +0000 UTC [ - ]

> "How can you use it for targeted attacks?" > Just insert a known CSAM image on target's device. Done.

Yes, but then the hash collision (topic of this article) is irrelevant.

sam0x17 2021-08-18 19:48:07 +0000 UTC [ - ]

> Just insert a known CSAM image on target's device. Done.

You don't even need to go that far. You just need to generate 31 false positive images and send them to an innocent user.

cyanite 2021-08-18 19:55:58 +0000 UTC [ - ]

But how will you do that without access to the non-blinded hash table? Then you need access to that first.

Also, “sending” them to a user isn’t enough; they need to be stored in the photo library, and iCloud Photo Library needs to be enabled.

cyanite 2021-08-18 19:54:28 +0000 UTC [ - ]

> Just insert a known CSAM image on target's device. Done.

What do you mean “just”? That’s not usually very simple. It needs to go into the actual photo library. Also, you need like 30 of them inserted.

> I presume this could be used against a rival political party

Yes, but it’s not much different from now, since most cloud photo providers scan for this cloud-side. So that’s more an argument against scanning all together.

lowkey_ 2021-08-18 20:23:13 +0000 UTC [ - ]

WhatsApp for example automatically stores images you receive in your Photos library, so that removes a step, and those will thus be automatically uploaded to iCloud.

The one failsafe would be Apple's manual reviewers, but we haven't heard much about that process yet.

tick_tock_tick 2021-08-18 20:18:33 +0000 UTC [ - ]

> It needs to go into the actual photo library.

iMessage photos received are automatically synced, so no. Finding 30 photos takes zero time at all on Tor. Hell, finding a .onion site that doesn't have CP randomly spammed is harder...

bsql 2021-08-18 21:24:28 +0000 UTC [ - ]

iMessage does not automatically add photos to your photo library. Yes, they're synced between devices, but afaik Apple isn't deploying this hashing technology on iMessages between devices. Only for iCloud photos in the photo library.

y7 2021-08-18 10:53:09 +0000 UTC [ - ]

Cross-posting from another thread [1]:

1. Obtain known CSAM that is likely in the database and generate its NeuralHash.

2. Use an image-scaling attack [2] together with adversarial collisions to generate a perturbed image such that its NeuralHash is in the database and its image derivative looks like CSAM.

A difference compared to server-side CSAM detection could be that they verify the entire image, and not just the image derivative, before notifying the authorities.

[1] https://news.ycombinator.com/item?id=28218922

[2] https://bdtechtalks.com/2020/08/03/machine-learning-adversar...
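
To make the image-scaling part concrete, here's a toy version of the trick, assuming the attacker knows the exact resampler in use (a simple nearest-neighbour downscaler here; real attacks against bilinear/bicubic kernels need an optimization step, but the principle is the same):

    import numpy as np

    def downscale_nearest(px, out):
        h, w = px.shape[:2]
        ys, xs = np.arange(out) * h // out, np.arange(out) * w // out
        return px[np.ix_(ys, xs)]

    def embed(cover, hidden_small):
        # Overwrite only the pixels the downscaler will sample; the rest of
        # the full-size cover image is untouched.
        out = hidden_small.shape[0]
        h, w = cover.shape[:2]
        ys, xs = np.arange(out) * h // out, np.arange(out) * w // out
        poisoned = cover.copy()
        poisoned[np.ix_(ys, xs)] = hidden_small
        return poisoned

    cover = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
    hidden = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    crafted = embed(cover, hidden)           # ~0.4% of the cover's pixels change
    assert np.array_equal(downscale_nearest(crafted, 64), hidden)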

FabHK 2021-08-18 11:11:56 +0000 UTC [ - ]

Right. So, sending actual CSAM would also work as an attack, but would be detected by the victim and could be corrected (delete images).

But a conceivable novel avenue of attack would be to find an image that:

1. Does not look like CSAM to the innocent victim in the original

2. Does match known CSAM by NeuralHash

3. Does look like CSAM in the "visual derivative" reviewed by Apple, as you highlight.

avianlyric 2021-08-18 11:46:02 +0000 UTC [ - ]

Reading the image scaling attack article, it looks like it's pretty easy to manufacture an image that:

1. Looks like an innocuous image, indeed even an image the victim is expecting to receive.

2. Downscales in such a way to produce a CSAM match.

3. Downscales for the derivative image to create actual CSAM for the review process.

Which is a pretty scary attack vector.

zepto 2021-08-18 15:11:30 +0000 UTC [ - ]

Where does it say anything that indicates #1 and #3 are both possible?

FabHK 2021-08-18 11:53:28 +0000 UTC [ - ]

Depends very much on the process Apple uses to make the "visual derivative", though. Also, defence by producing the original innocuous image (and showing that it triggers both parts of Apple's process, NeuralHash and human review of the visual derivative) should be possible, though a lot of damage might've been done by then.

avianlyric 2021-08-18 11:59:35 +0000 UTC [ - ]

> Also, defence by producing the original innocuous image

At this point you’re already inside the guts of the justice system, and have been accused of distributing CSAM. Indeed depending on how diligent the prosecutor is, you might need to wait till trial before you can defend yourself.

At that point your life as you know it is already fucked. The only thing proving your innocence (and the need to do so is itself a complete miscarriage of justice) will save you from is a prison sentence.

ta988 2021-08-18 13:04:25 +0000 UTC [ - ]

And now you will be accused of trying to hide illegal material in innocuous images.

zepto 2021-08-18 15:13:39 +0000 UTC [ - ]

This isn’t true at all.

If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.

There are plenty of lawyers and money waiting to establish this.

IncRnd 2021-08-19 02:11:57 +0000 UTC [ - ]

> This isn’t true at all.

> If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.

Okay. https://github.com/anishathalye/neural-hash-collider

spurgu 2021-08-19 03:14:09 +0000 UTC [ - ]

Uh? So his if statement is true?

IncRnd 2021-08-19 03:21:16 +0000 UTC [ - ]

Please read what is written right before that... You are taking something out of context.

GeckoEidechse 2021-08-18 14:05:13 +0000 UTC [ - ]

> So, sending actual CSAM would also work as an attack, but would be detected by the victim and could be corrected (delete images).

What if they are placed on the iDevice covertly? Say you want to remove politician X from office. If you have the money or influence you could use a tool like Pegasus (or whatever else there is out there that we don't know of) to place actual CSAM images on their iDevice. Preferably with an older timestamp so that it doesn't appear as the newest image on their timeline. iCloud notices unsynced images and syncs them while performing the CSAM check, it comes back positive with human review (because it was actual CSAM) and voilà, X gets the FBI knocking on their door. Even if X can somehow later prove innocence, by this time they'll likely have been removed from office over the allegations.

Thinking about it now it's probably even easier: Messaging apps like WhatsApp allow you to save received images directly to camera roll which then auto-syncs with iCloud (if enabled). So you can just blast 30+ (or whatever the requirement was) CSAM images to your victim while they are asleep and by the time they check their phone in the morning the images will already have been processed and an investigation started.

zepto 2021-08-18 15:14:36 +0000 UTC [ - ]

If you are placing images covertly, you can just use real CSAM or other kompromat.

Sebb767 2021-08-18 12:28:35 +0000 UTC [ - ]

> but would be detected by the victim and could be corrected (delete images).

I doubt deleting them (assuming the victim sees them) works once the image has been scanned. And, given that this probably comes with a sufficient smear campaign, deleting them will be portrayed as evidence of guilt.

bostonsre 2021-08-18 13:53:52 +0000 UTC [ - ]

Why would someone do that? Why not just send the original if both are flagged as the original?

notriddle 2021-08-18 14:00:21 +0000 UTC [ - ]

The victim needs to store the image in their iCloud, so it needs to not look like CSAM to them.

Bjartr 2021-08-18 14:26:31 +0000 UTC [ - ]

Because having actual CSAM images is illegal.

mmcwilliams 2021-08-18 15:44:05 +0000 UTC [ - ]

Doesn't that make step 1 more dangerous for the attacker than the intended victim? And following this through to its logical conclusion, the intended victim would have images that, upon manual review by law enforcement, would be found to be not CSAM.

st_goliath 2021-08-18 13:02:29 +0000 UTC [ - ]

> 7. Apple reviewer....

This part IMO makes Apple itself the most likely "target", but for a different kind of attack.

Just wait until someone who wasn't supposed to, somewhere, somehow gets their hands on some of the actual hashes (IMO bound to happen eventually). Also remember that with Apple, we now have an oracle that can tell us when we've hit one. And with all the media attention around the issue, this might further incentivize people to try.

From that I can picture a chain of events something like this:

1. Somebody writes a script that generates pre-image collisions like in the post, but for actual hashes Apple uses.

2. The script ends up on the Internet. News reporting picks it up and it spreads around a little. This also means trolls get their hands on it.

3. Tons of colliding images are created by people all over the planet and sent around to even more people. Not for targeted attacks, but simply for the lulz.

4. Newer scripts show up eventually, e.g. for perturbing existing images or similar stunts. More news reporting follows, accelerating the effect and possibly also spreading perturbed images around themselves. Perturbed images (cat pictures, animated gifs, etc...) get uploaded to places like 9gag, reaching large audiences.

5. Repeat steps 1-4 until the Internet and the news grow bored with it.

During that entire process, potentially each of those images that ends up on an iDevice will have to be manually reviewed...

Grustaf 2021-08-18 13:15:51 +0000 UTC [ - ]

Do you think Apple might perhaps halt the system if the script gets wide publication?

dylan604 2021-08-18 13:32:27 +0000 UTC [ - ]

I've only seen Apple admit defeat once, and that was regarding the trashcan MacPro. Otherwise, it's "you're holding it wrong" type of victim blaming as they quietly revise the issue on the next version.

Can anyone else think of times where Apple has admitted to something bad on their end and then reversed/walked away from whatever it was?

jtmarl1n 2021-08-18 14:50:34 +0000 UTC [ - ]

The Apple AirPower mat comes to mind although there are rumors they haven't abandoned the effort completely. Butterfly keyboard seems to finally be acknowledged as a bad idea and took several years to get there.

The next MacBook refresh will be interesting as there are rumors they are bringing back several I/O ports that were removed when switching to all USB-C.

I agree with your overall point, just some things that came to mind when reading your question.

dylan604 2021-08-18 15:46:41 +0000 UTC [ - ]

ah yes, the butterfly keyboard. i must have blocked that from my mind after the horror it was. although, they didn't admit anything on that one. that was just another "you're holding it wrong" silent revision that was then touted as a new feature (rather than oops we fucked up).

The trashcan MacPro is still the only mea culpa I am aware of them actually owning the mistake.

The Airpower whatever was never really released as a product though, so it is a strange category. New question: is the Airpower whatever the only product officially announced on the big stage to never be released?

Grustaf 2021-08-18 13:34:39 +0000 UTC [ - ]

Do you really care what they "admit"? I thought you were worried about innocent people being framed. Obviously if a way to frame people gets widespread, Apple will stop it. They don't want that publicity.

dylan604 2021-08-18 13:58:17 +0000 UTC [ - ]

You clearly have me confused with someone else, as I never mentioned anything about innocent people being framed.

With Apple, nothing is "obvious".

Grustaf 2021-08-18 14:06:22 +0000 UTC [ - ]

The comment above that I responded to seemed to talk about that. But in any case, I for one don't care what Apple admits.

But I am certain they will not want all the bad publicity that would come if the system was widely abused, if you worry about that. That much is actually "obvious", they are not stupid.

the8472 2021-08-18 10:25:16 +0000 UTC [ - ]

> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

A better collision won't be a grey blob, it'll take some photoshopped and downscaled picture of a kid and massage the least significant bits until it is a collision.

https://openai.com/blog/adversarial-example-research/

sitharus 2021-08-18 10:42:55 +0000 UTC [ - ]

So the person would have to accept and save an image that looks enough like CSAM to confuse a reviewer…

formerly_proven 2021-08-18 10:59:04 +0000 UTC [ - ]

Yes, the diligent review performed by the lowest-bidding subcontractor is an excellent defense against career-ending criminal accusations. Nothing can go wrong, this is fine.

saynay 2021-08-18 11:23:05 +0000 UTC [ - ]

I would think there are way easier ways to frame someone with CSAM than this. Like dump a thumbdrive of the stuff on them and report them to the police.

Sebb767 2021-08-18 12:32:03 +0000 UTC [ - ]

The police will not investigate every hint and a thumbdrive still has some plausible deniability. Evidence on your phone looks far worse and, thanks to this new process, law enforcement will receive actual evidence instead of just a hint.

hdjjhhvvhga 2021-08-18 11:07:18 +0000 UTC [ - ]

My WhatsApp automatically saves all images to my photo roll. It has to be explicitly turned off. When the default is on, it's enough that the image is received and the victim has CP on their phone. After the initial shock they delete it, but the image has already been sent to Apple, where a reviewer marked it as CP. Since the user already gave them their full address data in order to be able to use the App Store, Apple can automatically send a report to the police.

tpush 2021-08-18 13:39:14 +0000 UTC [ - ]

> [...] Appla can automatically send a report to the police.

Just to clarify, Apple doesn't report anyone to the police. They report to NCMEC, who presumably contacts law enforcement.

dannyw 2021-08-18 17:13:49 +0000 UTC [ - ]

FBI agents work for NCMEC. NCMEC is created by legislation. They ARE law enforcement, disguised as a non-profit.

zimpenfish 2021-08-18 11:25:04 +0000 UTC [ - ]

> but the image has already been sent to Apple, where a reviewer marked it as CP

No, the images are only decryptable after a threshold (which appears to be about 30) is breached. If you've received 30 pieces of CSAM from WhatsApp contacts without blocking them and/or stopping WhatsApp from automatically saving to iCloud, I gotta say, it's on you at that point.

basilgohar 2021-08-18 11:55:52 +0000 UTC [ - ]

Just a side point, a single WhatsApp message can contain up to 30 images. 30 is the literal max of a single message. So ONE MESSAGE could theoretically contain enough images to trip this threshold.

zimpenfish 2021-08-18 12:05:11 +0000 UTC [ - ]

> a single WhatsApp message can contain up to 30 images

A fair point, yes, and one that somewhat scuppers my argument.

avianlyric 2021-08-18 11:55:09 +0000 UTC [ - ]

You’re aware that people sleep at night, and phones for the most part don’t, right?

BrianOnHN 2021-08-18 11:43:51 +0000 UTC [ - ]

Victim blaming because of failure to meet some seemingly arbitrary limit, ok.

kleene_op 2021-08-18 13:07:08 +0000 UTC [ - ]

Not necessarily.

If you know the method used by Apple to scale down flagged images before they are sent for review, you can make it so the scaled down version of the image shows a different, potentially misleading one instead:

https://thume.ca/projects/2012/11/14/magic-png-files/

At the end of the day:

- You can trick the user into saving an innocent looking image

- You can trick Apple NN hashing function with a purposely generated hash

- You can trick the reviewer with an explicit thumbnail

There is no limit to how devilish one can be.

avianlyric 2021-08-18 11:54:19 +0000 UTC [ - ]

The reviewer may not be looking at the original image. But rather the visual derivative created during the hashing process and sent as part of the safety voucher.

In this scenario you could create an image that looks like anything, but where it’s visual derivative is CSAM material.

Currently iCloud isn't encrypted, so Apple could just look at the original image. But if in the future iCloud becomes encrypted, then the reporting will be done entirely based on the visual derivative.

Although Apple could change this by including a unique crypto key for each uploaded image within its inner safety voucher, allowing them to decrypt images that match for the review process.

the8472 2021-08-18 10:49:59 +0000 UTC [ - ]

Depending on what algorithm Apple uses to generate the "sample" that's shown to the reviewer, it may be possible to generate a large image that looks innocent unless downscaled with that specific algorithm and to a specific resolution.

labcomputer 2021-08-18 15:42:22 +0000 UTC [ - ]

So here's something I find interesting about this whole discussion: Everyone seems to assume the reviewers are honest actors.

It occurs to me that compromising an already-hired reviewer (either through blackmail or bribery) or even just planting your own insider on the review team might not be that difficult.

In fact, if your threat model includes nation-state adversaries, it seems crazy not to consider compromised reviewers. How hard would it really be for the CIA or NSA to get a few of their (under cover) people on the review team?

Grustaf 2021-08-18 13:17:01 +0000 UTC [ - ]

Not one image. Ten, or maybe 50, who knows what the threshold is.

spoonjim 2021-08-18 20:22:31 +0000 UTC [ - ]

I don't see how a perfectly legal and normal explicit photograph of someone's 20-year-old wife would be distinguishable to an Apple reviewer from CSAM, especially since some people look much younger or much older than their chronological age. So first, there would be the horrendous breach of privacy of an Apple goon looking at this picture in the first place, which the person in the photograph never consented to, and second, it could put the couple in legal hot water for absolutely no reason.

brokenmachine 2021-08-19 01:12:58 +0000 UTC [ - ]

The personal photo is unlikely to match a photo in the CSAM database though, or at least that's what is claimed by Apple with no way to verify if it's true or not.

goohle 2021-08-18 12:02:56 +0000 UTC [ - ]

tubbs 2021-08-18 12:21:14 +0000 UTC [ - ]

I would suggest not clicking this link on a work device.

FabHK 2021-08-18 10:29:28 +0000 UTC [ - ]

Remains to be shown whether that is possible, though.

michaelt 2021-08-18 10:38:18 +0000 UTC [ - ]

Just yesterday, here on HN there was an article [1] about adversarial attacks that could make road signs get misread by ML recognition systems

I'd be astonished if it wasn't possible to do the same thing here.

[1] https://news.ycombinator.com/item?id=28204077

FabHK 2021-08-18 10:48:21 +0000 UTC [ - ]

But the remarkable thing there (and with all other adversarial attacks I've seen) is that the ML classifier is fooled, while for us humans it is obvious that it is still the original image (if maybe slightly perturbed).

But in the case of Apple's CSAM detection, the collision would first have to fool the victim into seeing an innocent picture and storing it (presumably, they would not accept and store actual CSAM [^]), then fool the NeuralHash into thinking it was CSAM (ok, maybe possible, though classifiers <> perceptual hash), then fool the human reviewer into also seeing CSAM (unlike the innocent victim).

[^] If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.

marcus_holmes 2021-08-18 12:22:49 +0000 UTC [ - ]

Hmm, not quite:

step 1 - As others have pointed out, there are plenty of ways of getting an image onto someone's phone without their explicit permission. WhatsApp (and I believe Messenger) do this by default; if someone sends you an image, it goes onto your phone and gets uploaded to iCloud.

step 2 - TFA proves that hash collision works, and fooling perceptual algorithms is already a known thing. This whole automatic screening process is known to be vulnerable already.

step 3 - Humans are harder to fool, but tech giants are not great at scaling human intervention; their tendency is to only use humans for exceptions because humans are expensive and unreliable. This is going to be a lowest-cost-bidder developing-country thing where the screeners are targeted on screening X images per hour, for a value of X that allows very little diligence. And the consequences of a false positive are probably going to be minimal - the screeners will be monitored for individual positive/negative rates, but that's about it. We've seen how this plays out for YouTube copyright claims, Google account cancellations, App store delistings, etc.

People's lives are going to be ruined because of this tech. I understand that children's lives are already being ruined because of abuse, but I don't see that this tech is going to reduce that problem. If anything it will increase it (because new pictures of child abuse won't be on the hash database).

labcomputer 2021-08-18 15:48:24 +0000 UTC [ - ]

> then fool the human reviewer into also seeing CSAM (unlike the innocent victim).

Or just blackmail and/or bribe the reviewers. Presumably you could add some sort of 'watermark' that would be obvious to compromised reviewers. "There's $1000 in it for you if you click 'yes' any time you see this watermark. Be a shame if something happened to your mum."

faeriechangling 2021-08-19 01:45:42 +0000 UTC [ - ]

Yes but the reviewers are not going to be viewing the original image, they are going to be viewing a 100x100 greyscale.

>If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.

This adds trojan horses embedded in .jpg files as an attack vector, which while maybe not overly practical, I could certainly imagine some malicious troll uploading "CSAM" to some pornsite.

UncleMeat 2021-08-18 11:55:28 +0000 UTC [ - ]

NN classifiers work differently than perceptual hashes and the mechanism to do this sort of attack is entirely different, though they seem superficially similar.

nabakin 2021-08-18 10:38:10 +0000 UTC [ - ]

Unfortunately, it is very likely to be possible. Adversarial ML is extremely effective. I won't be surprised if this is achieved within the day, if not sooner tbh.

mannerheim 2021-08-18 10:31:42 +0000 UTC [ - ]

It's been done for image classification.

thinkingemote 2021-08-18 10:35:19 +0000 UTC [ - ]

the issue is step 6 - review and action

Every single tech company is getting rid of manual human review in favour of an AI-based approach. Human-ops, they call it - they don't want their employees to be doing this harmful work, plus computers are cheaper and better at it.

We hear about failures of inhuman ops all the time on HN: people being banned, falsely accused, cancelled, accounts locked, credit denied. All because decisions which were once made by humans are now made by machine. This will eventually happen here too.

It's the very reason why they have the neuralhash model. To remove the human reviewer.

mannerheim 2021-08-18 10:30:22 +0000 UTC [ - ]

> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

Just because the PoC used a meaningless blob doesn't mean that collisions have to be those. Plenty of examples of adversarial attacks on image recognition perturb real images to get the network to misidentify them, but to a human eye the image is unchanged.

nyuszika7h 2021-08-18 10:44:43 +0000 UTC [ - ]

The whole point flew over your head. If it's unchanged to the human eye then surely the human reviewer will see that it's a false positive?

mannerheim 2021-08-18 11:25:36 +0000 UTC [ - ]

No, it's important to point that out lest people think collisions can only be generated with contrived examples. I haven't studied neural hashes in particular, but for CNNs it's extremely trivial to come up with adversarial examples for arbitrary images.

Anyway, as for human reviewers, depends on what the image being perturbed is. Computer repair employees have called the police on people who've had pictures of their children in the bath. My understanding is that Apple does not have the source images, only NCMEC, so Apple's employees wouldn't necessarily see that such a case is a false positive. One would hope that when it gets sent to NCMEC, their employees would compare to the source image and see that is a false positive, though.

int_19h 2021-08-19 00:54:52 +0000 UTC [ - ]

Which would still be a privacy violation, since an actual human is looking at a photo you haven't consented to share with them.

brokenmachine 2021-08-19 01:20:12 +0000 UTC [ - ]

That will be clearly laid out on page 1174 of Apple's ToS that you had to click to be able to use your $1200 phone for anything but a paperweight.

jsdalton 2021-08-18 10:33:35 +0000 UTC [ - ]

For #4, I know for a fact that my wife’s WhatsApp automatically stores pictures you send her to her iCloud. So the grey blob would definitely be there unless she actively deleted it.

gambiting 2021-08-18 11:12:15 +0000 UTC [ - ]

I don't know why you'd even go through this trouble. At least few years ago finding actual CP on TOR was trivial, not sure if the situation has changed or not. If you're going to blackmail someone, just send actual illegal data, not something that might trigger detection scanners.

>> What has changed wrt targeted attacks against innocent people?

Anecdote: every single iphone user I know has iCloud sync enabled by default. Every single Android user I know doesn't have google photos sync enabled by default.

Nextgrid 2021-08-18 10:36:01 +0000 UTC [ - ]

> the NeuralHashes of which, unless I'm mistaken, are not available

Given the scanning is client-side wouldn't the client need a list of those hashes to check against? If so it's just a matter of time before those are extracted and used in these attacks.

brokenmachine 2021-08-19 01:22:16 +0000 UTC [ - ]

I think there's some crypto mumbo-jumbo to make it so you can't know if an image matched or not.

visarga 2021-08-18 20:01:21 +0000 UTC [ - ]

How can we know that the CSAM database is not already poisoned with adversarial images that actually target other kinds of content for different purposes? It would look like CSAM to the naked eye, and nobody can tell the images have been doctored.

When reports come in, the images would not match, so they would need to intercept them before they are discarded by Apple, maybe by having a mole on the team. But it's so much easier than other ways of getting an iOS platform scanner for any purpose. Just let them find the doctored images, add them to the database, and recruit a person on the Apple team.

anonymousab 2021-08-18 13:28:17 +0000 UTC [ - ]

> 7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times

I find it hard to believe that anyone has faith in any purported manual review by a modern tech giant. Assume the worst and you'll still probably not go far enough.

cm2187 2021-08-18 10:33:22 +0000 UTC [ - ]

Don’t imessage and whatsapp automatically store all images received in the iphone’s photo library?

vermilingua 2021-08-18 10:41:29 +0000 UTC [ - ]

iMessage no, WA by default yes, but can be disabled.

varajelle 2021-08-18 11:07:24 +0000 UTC [ - ]

So no need of hash collisions then. One can simply directly send the child porn images to that person via WhatsApp and send her to jail.

philote 2021-08-18 12:58:53 +0000 UTC [ - ]

But then you're searching and finding child porn images to send.

GeckoEidechse 2021-08-18 14:11:05 +0000 UTC [ - ]

I'd be surprised if there won't be a darknet service that does exactly that (send CSAM via $POPULAR_MESSENGER) the moment Apple activates scanning.

johnla 2021-08-18 16:54:47 +0000 UTC [ - ]

I don't think this can be used to harm an innocent person. It can raise a red flag, but it would quickly be un-raised, and perhaps prompt an investigation into the source of the fakeout images, because THAT person had to have had the real images in their possession.

If anything, this gives people weapons against the scanner, as we can now bomb the system with false positives, rendering it impossible to use. I don't know enough about cryptography, but I wonder if there are any ramifications of the hash being broken.

beiller 2021-08-18 19:29:49 +0000 UTC [ - ]

Maybe they could install malware that alters all camera images, using a technique like steganography, to cause false positive matches for all the photos taken by the device. Maybe they could share one photo album where all the images are hash collisions.

cirrus3 2021-08-18 23:52:33 +0000 UTC [ - ]

Are you being serious? #7 is literally "Apple reviewer confuses a featureless blob of gray with CSAM material, several times"

30 times.

30 times a human confused a blob with CSAM?

Johnny555 2021-08-18 15:31:42 +0000 UTC [ - ]

> Actually, need to run step 1-3 at least 30 times

You can do steps 2-3 all in one step "Hey Bob, here's a zip file of those funny cat pictures I was telling you about. Some of the files got corrupted and are grayed out for some reason".

f3d46600-b66e 2021-08-18 14:26:08 +0000 UTC [ - ]

What makes the CSAM database private?

It's my understanding that many tech companies (Microsoft? Dropbox? Google? Apple? Other?) (and many people in those companies) have access to the CSAM database, which essentially makes it public.

cyanite 2021-08-18 20:09:50 +0000 UTC [ - ]

Well, the actual hash table on the device is blinded so the device doesn’t know if an image is a match or not. The server doesn’t learn the actual hash either, unless the threshold of 30 images is reached.

spicybright 2021-08-18 10:27:46 +0000 UTC [ - ]

If you're in close physical contact with a person (like at a job) you just wait for them to put their phone down while unlocked, and do all this.

FabHK 2021-08-18 10:32:01 +0000 UTC [ - ]

Then, with all due respect, the attacker could just download actual CSAM.

> If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.

https://www.usenix.org/system/files/1401_08-12_mickens.pdf

mrits 2021-08-18 15:14:28 +0000 UTC [ - ]

"Then, with all due respect, the attacker could just download actual CSAM."

If you didn't have Apple scanning your drive trying to find a new way for you to go to prison then it wouldn't be a problem.

FabHK 2021-08-18 20:41:09 +0000 UTC [ - ]

Yes. But this whole discussion is about potential problems/exploits with hash collisions (see title).

cyanite 2021-08-18 20:05:10 +0000 UTC [ - ]

It’s misleading to say that they scan your “drive”. They scan pictures as they are uploaded to iCloud Photo Library.

Also, most other major cloud photo providers scan images server side, leading to the same effect (but with them accessing more data).

incrudible 2021-08-18 11:16:01 +0000 UTC [ - ]

Also, this XKCD:

https://xkcd.com/538/

People are getting nerd-sniped about hash collisions. It's completely irrelevant.

The real-world vector is that an attacker sends CSAM through one of the channels that will trigger a scan. Through iMessage, this should be possible in an unsolicited fashion (correct me if I'm wrong). Otherwise, it's possible through a hacked device. Of course there's plausible deniability here, but like with swatting, it's not a situation you want to be in.

UseStrict 2021-08-18 13:51:13 +0000 UTC [ - ]

Plausible deniability or not, it could have real impact if Apple decides to implement a policy of locking your account after tripping the threshold, which you then have to wait or fight to get unlocked. Or now you have police records against you for an investigation that led nowhere. It's not a zero-impact game if I can spam a bunch of grey blobs to people and potentially set off a chain of human failures that leads to police knocking down their door.

deelowe 2021-08-18 13:28:38 +0000 UTC [ - ]

FWIW, this sort of argument may not trigger a change in policy, but a technical failure in the hashing algorithm might.

cyanite 2021-08-18 20:06:24 +0000 UTC [ - ]

> Through iMessage, this should be possible in an unsolicited fashion

Sure, but those don’t go into your photo library, so it won’t trigger any scanning. Presumably people wouldn’t actively save CSAM into their library.

aardvarkr 2021-08-18 12:41:04 +0000 UTC [ - ]

Love the relevant xkcd! And to reply to your point, simply sending unsolicited CSAM via iMessage doesn’t trigger anything. That message has to be saved to your phone then uploaded to iCloud. Someone else above said repeat this process 20-30 times so I presume it can’t be a single incident of CSAM. Seems really really hard to trigger this thing by accident or maliciously

ziml77 2021-08-18 12:54:45 +0000 UTC [ - ]

People are saying that, by default, WhatsApp will save images directly to your camera roll without any interaction. That would be an easy way to trigger the CSAM detection remotely. There are many people who use WhatsApp so it's a reasonable concern.

spicybright 2021-08-18 11:19:39 +0000 UTC [ - ]

Have you not worked a minimum wage job in the US? It's incredibly easy to gain phone access to semi-trusting people.

If you don't like someone (which happens very often in this line of work) you could potentially screw someone over with this.

superjan 2021-08-18 11:22:46 +0000 UTC [ - ]

Great article!

Jcowell 2021-08-18 20:49:59 +0000 UTC [ - ]

One vector you can use to skip step 3 is to send the images on WhatsApp. I believe images sent via WhatsApp are auto-saved by default, last I recall.

xucheng 2021-08-18 11:29:05 +0000 UTC [ - ]

> 4. Actually, need to run step 1-3 at least 30 times

Depending on how the secret sharing is used in Apple PSI, it may be possible that duplicating the same image 30 times would be enough.

dang 2021-08-18 18:47:31 +0000 UTC [ - ]

We detached this subthread from https://news.ycombinator.com/item?id=28219296 (it had become massive and this one can stand on its own).

eptcyka 2021-08-18 11:23:42 +0000 UTC [ - ]

I'm sure the reviewers will definitely be able to give each reported image the time and attention it needs, much like the people YouTube employs to review videos discussing and exposing animal abuse, Holocaust denial and other controversial topics. </sarcasm>

shadowgovt 2021-08-18 11:30:18 +0000 UTC [ - ]

Difference in volume. Images that trip the CSAM hash check are a lot rarer than the content you just described.

eptcyka 2021-08-18 15:02:27 +0000 UTC [ - ]

I personally am not aware of how the perceptual hash values are distributed across its keyspace. Perceptual hashes can have issues with uniform distribution, since they are somewhat semantic and most content out there is too. As such, I wouldn't make statements about how often collisions would occur.

soziawa 2021-08-18 10:29:42 +0000 UTC [ - ]

> 6. Apple's CSAM detection then flags these, and they're manually reviewed

Is the process actually documented anywhere? Afaik they are just saying that they are verifying a match. This could of course just be a person looking at the hash itself.

FabHK 2021-08-18 10:35:44 +0000 UTC [ - ]

They look at the contents of the "safety voucher", which contains the neural hash and a "visual derivative" of the original image (but not the original image itself).

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

hughrr 2021-08-18 10:44:10 +0000 UTC [ - ]

If it’s a visual derivative, whatever that means, then how does the reviewer know it matches the source image? Sounds like there’s a lot of non-determinism in there.

halflings 2021-08-18 10:08:26 +0000 UTC [ - ]

Apple's scheme includes operators manually verifying a low-res version of each image matching CSAM databases before any intervention. Of course, grey noise will never pass for CSAM and will fail that step.

The fact that you can randomly manipulate random noise until it matches the hash of an arbitrary image is not surprising. The real challenge is generating a real image that could be mistaken for CSAM at low res + is actually benign (or else just send CSAM directly) + matches the hash of real CSAM.

This is why SHAttered [1] was such a big deal, but daily random SHA collisions aren't.

[1] https://shattered.io/
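For concreteness, here is a minimal sketch of that kind of manipulation, assuming PyTorch and a randomly initialized stand-in network (not Apple's NeuralHash) with random tensors in place of real images: when the "hash" is just a binarized output of a differentiable model, gradient descent on the input can push the bits toward any chosen target.

    # Toy sketch only: the model below is a random stand-in, NOT NeuralHash.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(                       # placeholder for the hash backbone
        nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
        nn.Conv2d(8, 16, 5, stride=4), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * 3 * 3, 96)  # 96 components -> 96 "hash bits"
    )
    for p in model.parameters():
        p.requires_grad_(False)

    def hash_bits(x):
        return (model(x) > 0).float()            # binarization: sign of each component

    target = torch.rand(1, 3, 64, 64)            # image whose hash we want to hit
    source = torch.rand(1, 3, 64, 64)            # innocuous image we may perturb
    wanted = hash_bits(target)

    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        emb = model(source + delta)
        # hinge loss pushes each component to the target bit's side of zero,
        # while the second term keeps the perturbation small
        loss = torch.relu(0.1 - emb * (2 * wanted - 1)).mean() + 10 * delta.abs().mean()
        loss.backward()
        opt.step()

    print("bits matched:", (hash_bits(source + delta) == wanted).float().mean().item())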

lifthrasiir 2021-08-18 10:14:05 +0000 UTC [ - ]

But you can essentially perform a DoS attack on the human checkers, effectively grinding the entire system to a halt. The entire system is too reliant on the performance of NeuralHash, which can be defeated in many ways. [1]

(Added later:) I should note that the DoS attack is only possible with the preimage attack and not the second preimage attack as the issue seemingly suggests, because you need the original CSAM to perform the second preimage attack. But given the second preimage attack is this easy, I don't have any hope for the preimage resistance anyway.

(Added much later:) And I realized that Apple did think of this possibility and only stores blinded hashes in the device, so the preimage attack doesn't really work as is. But it seems that the hash output is only 96 bits long according to the repository, so this attack might still be possible albeit with much higher computational cost.

[1] To be fair, I don't think that Apple's claim of 1/1,000,000,000,000 false positive rate refers to that of the algorithm. Apple probably tweaked the threshold for manual checking to match that target rate, knowing NeuralHash's false positive rate under normal circumstances. Of course we know that there is no such thing as normal circumstances.
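To put rough numbers on that threshold argument, here is a back-of-the-envelope Poisson estimate; the per-image false-positive rate and library size below are made-up assumptions, not Apple's figures.

    # Back-of-the-envelope only: per-image rate and photo count are assumptions.
    from math import exp, factorial

    p = 1e-6          # assumed per-image false-positive probability (hypothetical)
    n = 10_000        # photos in a library (hypothetical)
    k = 30            # match threshold before anything is reported
    lam = n * p       # expected number of false matches (Poisson approximation)

    # P(at least k false matches); terms past k+20 are negligible here
    tail = sum(exp(-lam) * lam**i / factorial(i) for i in range(k, k + 20))
    print(f"account-level false-positive probability ~ {tail:.1e}")

Even a fairly mediocre per-image rate collapses to a vanishingly small account-level rate once ~30 independent matches are required, which is presumably where the headline figure comes from.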

shapefrog 2021-08-18 12:00:15 +0000 UTC [ - ]

I have seen it suggested that everyone should flood the system with flagged images to overwhelm it in protest to this move by apple.

Sounds pretty stupid to me to fill your phone with kiddie porn in protest, but you do you internet people.

mannerheim 2021-08-18 12:58:39 +0000 UTC [ - ]

You don't need to do that, just use images that collide with the hashes.

robertoandred 2021-08-18 14:43:39 +0000 UTC [ - ]

How will you know something collides?

mannerheim 2021-08-18 15:17:35 +0000 UTC [ - ]

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

robertoandred 2021-08-18 16:11:01 +0000 UTC [ - ]

Yeah, so how would you know something would collide with the hash?

mannerheim 2021-08-18 16:17:48 +0000 UTC [ - ]

Because the hashes are stored on the user's device?

robertoandred 2021-08-18 16:18:52 +0000 UTC [ - ]

No, encrypted and blinded hashes are stored. You can’t extract them.

fastball 2021-08-19 04:19:05 +0000 UTC [ - ]

Right, but your device is checking against the hashes, no?

So at some point you generate an image which triggers a subroutine and you know that image collides.

cyanite 2021-08-18 20:11:30 +0000 UTC [ - ]

This also means that the device doesn’t know if a given image is a match.

robertoandred 2021-08-18 20:39:39 +0000 UTC [ - ]

Correct, only the server does.

HALtheWise 2021-08-18 19:53:46 +0000 UTC [ - ]

Presumably, someone on the internet will download a dump of _actual_ CP and run it through NeuralHash, then post the resulting hashes to GitHub. Doing so will be illegal for them because they are downloading CP, but it should be perfectly legal for you to then download the hashes they posted, randomly select one, and generate an innocuous image that matches it. The fact that Apple isn't posting hashes doesn't actually make it all that hard to get your hands on them.

falcolas 2021-08-18 14:52:45 +0000 UTC [ - ]

Because the algorithm and list will be on your phone, and can (has, per TFA) be extracted.

robertoandred 2021-08-18 14:54:38 +0000 UTC [ - ]

You cannot extract or reverse the CSAM hashes. They've been encrypted and blinded using server-side-only keys. If TFA said that, it's lying.

lifthrasiir 2021-08-18 16:00:36 +0000 UTC [ - ]

After more reading of the whitepaper I think you are right. As I understand it, given the image hash H0 and CSAM hashes H[1]...H[n] (some might be duplicates in disguise) the algorithm proceeds like this:

- The device generates a secret X and divides it into X[1]...X[m] with a secret sharing algorithm. m is some large number, and any k (but no fewer) of the X[i] are enough to reconstruct X.

- The device stores blinded hashes f(H[1])...f(H[n]). The function f itself is not known to the client.

- The image hash H0 is compressed with another function g to the range between 1 and n.

- The downscaled image data (for the human check) is encrypted with X and appended with (probably) random X[i].

- The result is then encrypted again with a key derived from f(H0) and sent to the server with an associated data g(H0).

- The server tries to decrypt it with a key derived from f(H[g(H0)]). This is only possible when H[g(H0)] = H0, i.e. H0 represents some known CSAM.

- You can only decrypt the second layer with at least k copies of X[i] then.

At this stage Apple could still learn the number of matching images even when it is less than k. The fix is described in an overly technical document that I can't exactly follow, but supposedly the client can inject an appropriate amount of synthetic data for which only the first layer can always be decrypted and the second layer is bogus (including the presumed X[i]).
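A minimal toy model of the blinding step, using plain modular arithmetic instead of the elliptic-curve construction Apple actually describes (parameters chosen for illustration, not security): the device only ever handles the blinded value f(H), yet the server can re-derive the outer-layer key exactly when the hashes agree.

    # Toy model of the blinded-hash idea; illustrative, not the real construction.
    import hashlib, secrets

    P = 2**127 - 1       # toy prime modulus
    G = 5                # toy generator

    def hash_to_group(h: bytes) -> int:
        e = int.from_bytes(hashlib.sha256(h).digest(), "big") % (P - 1)
        return pow(G, e, P)

    def kdf(x: int) -> bytes:
        return hashlib.sha256(str(x).encode()).digest()

    # Server setup: blind each database hash H[i] with its secret beta.
    beta = secrets.randbelow(P - 2) + 1
    db_hash = b"\x01" * 12                            # stand-in 96-bit hash
    blinded = pow(hash_to_group(db_hash), beta, P)    # f(H[i]); this is what the device stores

    # Device: sees only `blinded` (bucket lookup via g(.) omitted), never beta.
    image_hash = b"\x01" * 12                         # H0 of the uploaded image
    gamma = secrets.randbelow(P - 2) + 1
    announcement = pow(hash_to_group(image_hash), gamma, P)   # sent alongside the voucher
    device_key = kdf(pow(blinded, gamma, P))          # encrypts the outer layer

    # Server: can re-derive the key only when H0 equals the database hash.
    server_key = kdf(pow(announcement, beta, P))
    print("outer layer decryptable:", device_key == server_key)

The real protocol layers more on top of this (the bucketing function g, the synthetic vouchers, and the threshold secret sharing), but this is the piece that keeps the raw database hashes off the device.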

---

Assuming this scheme is correctly implemented, the only attack I can imagine is a timing attack. As I understand it, a malicious client can choose not to send synthetic data. This will affect the number of items that pass the first layer of encryption, so the client can possibly learn the number of actual matches by adjusting the amount of synthetic data, since the server can only proceed to the next step with at least k such items.

This attack seems technically possible, but is probably infeasible to perform (remember that we already need 2^95 oracle operations, which is only vaguely possible even on the local device). Maybe the technical report actually has a solution for this, but for now I can only guess.

falcolas 2021-08-18 16:41:59 +0000 UTC [ - ]

That synopsis disagrees with Apple's own descriptions - or rather, it goes into the secondary checks, which obscures the fact that the initial hash checks are indeed performed on-device:

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

falcolas 2021-08-18 14:58:25 +0000 UTC [ - ]

One does not need to reverse the CSAM hashes to find a collision with a hash. If the evaluation is being done on the phone, including identifying a hash match, the hashes must also be on the phone.

robertoandred 2021-08-18 15:17:05 +0000 UTC [ - ]

No, matches are not verified on the phone. On the phone, your image hash is used to look up an encrypted/blinded (via the server's secret key) CSAM hash. Then your image data (the hash and visual derivative) is encrypted with that encrypted/blinded hash. This encrypted payload, along with a part of your image's hash, is sent to Apple. Then on the server, Apple uses that part of your image's hash and their secret key to create a decryption key for the payload. If your image hash matches the CSAM hash, the decryption key would unlock the payload.

In addition, the payload is protected by another layer using your user key. Only with enough hash matches can Apple put together the user decryption key and open the very innards of your image's payload containing the full hash and visual derivative.
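A toy sketch of that threshold part, assuming Shamir-style k-of-n secret sharing (the real construction differs in detail): each voucher carries one share of a per-device key, and the server can only rebuild that key once it holds at least k shares from matching images.

    # Toy k-of-n threshold sharing over a prime field; illustrative only.
    import secrets

    P = 2**61 - 1   # prime field for the toy example

    def make_shares(secret: int, k: int, n: int):
        # random degree-(k-1) polynomial with the secret as constant term
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    inner_key = secrets.randbelow(P)              # stands in for the per-device key
    shares = make_shares(inner_key, k=30, n=100)  # one share rides in each voucher
    print(reconstruct(shares[:30]) == inner_key)  # True: 30 matches are enough
    print(reconstruct(shares[:29]) == inner_key)  # almost surely False: 29 are not

Duplicated and synthetic vouchers complicate the real accounting, but the basic property is the same: below the threshold, the inner key and hence the visual derivatives stay sealed.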

falcolas 2021-08-18 15:27:00 +0000 UTC [ - ]

To quote a sibling comment, which got it straight from the horse's mouth:

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

mannerheim 2021-08-18 15:02:40 +0000 UTC [ - ]

I believe the hash comparisons are made on Apple's end. Then the only way to get hashes will be a data breach on Apple's end (unlikely but not impossible) or generating it from known CSAM material.

falcolas 2021-08-18 15:04:51 +0000 UTC [ - ]

That's not what Apple's plans state. The comparisons are done on phone, and are only escalated to Apple if there are more than N hash matches, at which point they are supposedly reviewed by Apple employees/contractors.

Otherwise, they'd just keep doing it on the material that's actually uploaded.

mannerheim 2021-08-18 15:16:08 +0000 UTC [ - ]

Ah, never mind, you're right:

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

cyanite 2021-08-18 20:14:07 +0000 UTC [ - ]

He is not right, though. The system used will not reveal matches to the device, only to the server and only if the threshold is reached.

cyanite 2021-08-18 20:13:27 +0000 UTC [ - ]

> That's not what Apple's plans state. The comparisons are done on phone

Yes but as stated in the technical description, this match is against a blinded table, so the device doesn’t learn if it’s a match or not.

floatingatoll 2021-08-18 14:16:57 +0000 UTC [ - ]

It’s incredibly stupid because your Apple ID will get terminated for abusing Apple services.

falcolas 2021-08-18 14:53:49 +0000 UTC [ - ]

There are applications which automatically save images sent to you to your camera roll (such as Whatsapp, IIRC). How can Apple prove you put them there intentionally?

Granted, they most likely won't care, but it's a legitimate attack vector.

floatingatoll 2021-08-18 15:11:48 +0000 UTC [ - ]

You’re right that it’s a valid attack upon the people Apple pays to review matched images before invoking law enforcement, but no harm comes to the recipient in that model, unless they receive real legitimate CSAM and don’t report it to the authorities themselves.

Attempted entrapment and abuse of computing systems, which is an uncomfortable way to phrase the WhatsApp scenario, would be quite sufficient cause for a discovery warrant to have WhatsApp reveal the sender’s identity to Apple. Doesn’t mean they’d be found guilty, but WhatsApp will fold a lot sooner than Apple, especially if the warrant is sealed by the court to prevent the sender from deleting any CSAM in their possession.

A hacker would say that’s all contrived nonsense and anyways it’s just SWATting, that’s no big deal. A judge would say that’s a reasonable balance of protecting the sender from being dragged through the mud in the press before being indicted and permitting the abused party (Apple) to pursue a conviction and damages.

I am not your lawyer, this is not legal advice, etc.

Majromax 2021-08-18 12:45:59 +0000 UTC [ - ]

> The fact that you can randomly manipulate random noise until it matches the hash of an arbitrary image is not surprising.

It is, actually. Remember that hashes are supposed to be many-bit digests of the original; it should take O(2^256) work to find a message with a chosen 256-bit hash and O(2^128) work to find a "birthday attack" collision. Finding any collision at all with NeuralHash so soon after its release is very surprising, suggesting the algorithm is not very strong.
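For scale, here is what those bounds work out to for an ideal hash, both at 256 bits and at the 96-bit output reported for NeuralHash (a quick Python check):

    # Expected work for a targeted preimage vs. a random ("birthday") collision,
    # assuming an ideal hash of the given output size.
    import math

    for bits in (256, 96):
        preimage = 2.0 ** bits                             # ~2^bits hash evaluations
        birthday = math.sqrt(math.pi / 2) * 2.0 ** (bits / 2)
        print(f"{bits}-bit hash: preimage ~{preimage:.1e} trials, "
              f"random collision ~{birthday:.1e} trials")

Even at 96 bits, an ideal hash should take on the order of 10^14 random trials before a collision shows up by accident, which is why a collision turning up almost immediately says something about the algorithm.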

SHAttered is a big deal because it is a fully working attack model, but the writing was on the wall for SHA-1 after the collisions were found in reduced-round variations of the hash. Attacks against an algorithm only get better with time, never worse.

Moreover, the break of NeuralHash may be even stronger than the SHAttered attack. The latter modifies two documents to produce a collision, but the NeuralHash collision here may be a preimage attack. It's not clear if the attacker crafted both images to produce the collision or just the second one.

Ajedi32 2021-08-18 13:50:19 +0000 UTC [ - ]

NeuralHash is a perceptual hash, not a cryptographically secure hash. Perceptual hashes have trivially findable second preimages by design, as the entire point is for two different images which appear visually similar to return the same result.

It's not particularly surprising to me that a perceptual hash might also have collisions that don't look similar to the human eye, though if Apple ever claimed otherwise this counterexample is solid proof that they're wrong.
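To make that concrete, here is a classic toy perceptual hash (an 8x8 "average hash"), assuming Pillow is installed and with placeholder file names; visually similar images are supposed to map to the same bits, which is exactly why second preimages come cheap.

    # Toy "average hash" (aHash), far simpler than NeuralHash, to show why
    # perceptual hashes are *designed* to collide on visually similar images.
    from PIL import Image

    def ahash(path: str, size: int = 8) -> int:
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)   # one bit per pixel: brighter than average?
        return bits                          # 64-bit hash for the default 8x8 grid

    # Re-encoding, mild resizing, or small edits barely change which pixels sit
    # above the average, so the hash survives them:
    # h1, h2 = ahash("original.jpg"), ahash("recompressed.jpg")
    # print(bin(h1 ^ h2).count("1"), "bits differ")   # Hamming distance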

cyanite 2021-08-18 20:15:06 +0000 UTC [ - ]

The problem is that you’d need the original NeuralHash, which isn’t stored on the device. The device only has a blinded version.

rdli 2021-08-18 10:15:22 +0000 UTC [ - ]

Are you pinning your hopes on a false positive like this being appropriately caught because an army of faceless, low-wage workers who stare at CSAM cases all day will immediately flag it?

rootusrootus 2021-08-18 14:31:11 +0000 UTC [ - ]

Apple has pretty deep pockets. Think how much that judgement is going to be when they find themselves in court for letting someone get raided over gray images.

Not that it's going to happen, since it would also require NCMEC to think the images match, but whatever. Attack me! Attack me! I want to retire.

hda2 2021-08-19 02:41:08 +0000 UTC [ - ]

> Apple has pretty deep pockets.

For now, sure. What happens when their money runs short? What about the other tech companies that will inevitably be forced to deploy this shit? Will they also have Apple's pretty deep pockets?

Blind faith in this system will not magically fix how flawed it is nor the abuse and harm it will allow. This is going to hurt a lot of innocent people.

brokenmachine 2021-08-19 03:49:31 +0000 UTC [ - ]

>Attack me! Attack me! I want to retire.

If you post your whatsapp address, I'm sure someone will oblige.

icelancer 2021-08-18 10:45:26 +0000 UTC [ - ]

> Of course, grey noise will never pass for CSAM and will fail that step.

Never? You sure that one or more human operators will never make this mistake, dooming someone's life / causing them immense pain?

shapefrog 2021-08-18 12:03:39 +0000 UTC [ - ]

I can guarantee nobody will see the inside of a courtroom, on charges of possession and distribution of child porn for possessing multiple images of grey noise (unless there is some steganography going on).

falcolas 2021-08-18 14:56:59 +0000 UTC [ - ]

One does not need to go to court to have their life ruined by accusations. Ironically, there are quite a few examples of this over the years for alleged CSAM.

One example that sticks out in my mind is a pair of grandparents who photographed their grandchildren playing in the back yard. A photo tech flagged their photo, they were arrested, and it took their lawyer going through the hoops to get a review of the photo for the charges to be dropped.

shapefrog 2021-08-18 15:38:44 +0000 UTC [ - ]

Sure, there are always cases - but was their photo of a grey blob that matched a hash but is clearly a grey blob or of a naked child?

If the photo was a grey blob and they had to go through a judicial review for someone to look at the photo and confirm 'yes that is a grey blob' then color me wrong.

falcolas 2021-08-18 15:43:05 +0000 UTC [ - ]

I'd view a "grey blob" to be the MVP of hash collisions. I doubt that this will end with grey blobs - I see it ending with common images (memes would be great for this) being invisibly altered to collide with a CSAM hash.

shapefrog 2021-08-18 15:54:11 +0000 UTC [ - ]

If you need a judicial review to confim that a slightly altered Bernie in Coat and Gloves meme is not the same image as the picture of a child being raped that they have on file then we have way bigger problems.

falcolas 2021-08-18 16:36:54 +0000 UTC [ - ]

Here's the thing with CSAM - it's illegal to view and transmit. So nobody, until the police have confiscated your devices, will actually be able to verify that it is a "child being raped."

They'll view visual hashes, look at descriptions, and so forth, but nobody from Apple will actually be looking at them, because then they are guilty of viewing and transmitting CSAM.

I noted in another comment, even the prosecutors and defense lawyers in the case typically only get a description of the content, they don't see it themselves.

shapefrog 2021-08-18 17:15:51 +0000 UTC [ - ]

This is just not true. Human review is conducted. Apple will conduct human review, facebook conduct human review, NCMEC will conduct human review, law enforcement will conduct human review, lawyers and judges will conduct human review.

Over the years there have been countless articles etc about how fucked up being a reviewer of content flagged at all the tech companies is. https://www.vice.com/en/article/a35xk5/facebook-moderators-a...

hannasanarion 2021-08-18 18:58:16 +0000 UTC [ - ]

Where did you get this idea, scooby doo?

It is not illegal to be an unwilling recipient of illegal material. If a package shows up at your door with a bomb, you're not gonna be thrown in jail for having a bomb.

hnfong 2021-08-18 19:11:55 +0000 UTC [ - ]

In theory, sure.

At the very least, you'd be one of the primary suspects, and if you somehow got a bad lawyer, all bets are off.

https://hongkongfp.com/2021/08/12/judges-criticise-hong-kong...

hannasanarion 2021-08-18 19:25:51 +0000 UTC [ - ]

Okay, and when a cursory look at the bomb actually reveals it to be a fisher-price toy, what then?

What is the scenario where a grey blob gets on your phone that sets off CSAM alerts, an investigator looks at it and sees only a grey blob, and then still decides to alert the authorities even though it's just a grey blob, and the authorities still decide to arrest you even though it's just a grey blob, and the DA still decides to prosecute you even though it's just a grey blob, and a jury still decides to convict you, even though it's still just a grey blob?

You're the one who's off in theory-land imagining that every person in the entire justice system is just as stupid as this algorithm is.

int_19h 2021-08-19 00:56:44 +0000 UTC [ - ]

Possession of CSAM is a strict liability crime in most jurisdictions.

hannasanarion 2021-08-19 16:13:53 +0000 UTC [ - ]

That is simply not true. There is no American jurisdiction where child pornography is a strict liability crime.

On this topic, the Supreme Court has ruled in Dickerson v US that, in all cases, to avoid First Amendment conflicts, all child pornography statutes must be interpreted with at least a "reckless disregard" standard.

Here is a typical criminal definition, from Minnesota, where a defendant recently tried to argue that the statute was strict liability and therefore unconstitutional, and that argument was rejected by the courts because it is clearly written to require knowledge and intent:

> Subd. 4. Possession prohibited. (a) A person who possesses a pornographic work or a computer disk or computer or other electronic, magnetic, or optical storage system ․ containing a pornographic work, knowing or with reason to know its content and character, is guilty of a felony․

notRobot 2021-08-18 12:45:51 +0000 UTC [ - ]

Many people never see the inside of a courtroom when false or unproven rape accusations are made against them, but their lives still get ruined because of the negative publicity.

bscphil 2021-08-18 13:55:31 +0000 UTC [ - ]

Those cases are not comparable, because the whole reason they have that impact is that the accusations are usually made publicly (because the whole point is to harm the reputation of one's rapist and warn others, should a conviction prove to be impossible), while CSAM review goes through a neural hash privately on your phone, then privately and anonymously through an Apple reviewer, then is privately reviewed at NCMEC (who - I think - have access to the full size image), and only then is turned over to law enforcement (which should also have access to the full image).

It only becomes public knowledge if law enforcement then chooses to charge you - and if all that happens on the basis of an obvious adversarial net image, the result is a publicity shitshow for Apple and you become a civil rights hero after your lawyer (even an underpaid overworked public defender should be able to handle this one) demonstrates this.

As others have stated in this thread, I think the real failure case is not someone's life getting ruined by claims of CSAM possession somehow resulting from a bad hash match, but the fact that planted material (or sent via message) can now easily ruin your life because it gets automatically reported; you can't simply delete it and move on any more.

rootusrootus 2021-08-18 14:34:33 +0000 UTC [ - ]

We give way, way too much weight in the legal system to eye witness and victim statements, in absence of any corroborating evidence. That's a problem.

But not really comparable, IMO. You won't even know you got investigated until after the original images have been shipped off to NCMEC for verification.

shapefrog 2021-08-18 13:18:47 +0000 UTC [ - ]

Are you suggesting that perhaps fewer people should report rape accusations, because it might be awkward for the accused to get negative publicity? That's messed up.

dannyw 2021-08-18 16:13:50 +0000 UTC [ - ]

What if it is legal pornography of 21-year-olds, but perturbed to collide with CSAM? You are aware that even defence lawyers are not allowed to look at alleged CSAM material in court, right?

shapefrog 2021-08-18 16:23:52 +0000 UTC [ - ]

The process would not trigger any action. The NCMEC, who can look at the material, and are the people to whom the matter is reported, would compare the flagged image with the source material and reject it as not matching known CSAM.

What if the legal porn of a 21 year old that triggered the collision match looked really really really close? So close that a human can not distinguish between the image of a 12 year old being raped that they have in their database and your image? Well then you might have a problem, legal and otherwise.

> defence lawyers are not allowed to look at alleged CSAM material in court right

I know this is not true in many countries, but cant speak for your country.

dannyw 2021-08-18 16:42:19 +0000 UTC [ - ]

You are aware that a lot of CSAM consists of close-ups of, say, pussies, and human anatomy can look very similar?

I'm not talking about images of rape here. I'm taking about images that you'd see on a regular porn site, of adults and their body parts.

You are also aware that CSAM covers anywhere from 0 to 17.99 years of age, and the legal obligation to report exists equally for the whole spectrum?

So let's say I download a close up pussy collection of 31 images of what I believe to be consenting 20 year olds, and what are consenting 20 year olds.

But they are actually planted by an attacker (let's say an oppressive regime who doesn't like me) and perturbed to match CSAM, that is, pussy close-ups of 17-year-olds. They are all just pussy pics. They will look the same.

Should I go to jail?

Do I have a non zero chance of going to jail? Yes.

shapefrog 2021-08-18 17:27:08 +0000 UTC [ - ]

If you have content that has a matching hash value and is identical by all computational and human inspection to the content identified as CSAM from which the hash was generated, then you have a problem.

Without getting into the metaphysics of what is an image, at that point, you basically have a large collection of child porn.

Your hypothetical oppressive regime has gone to a lot of trouble planting evidence that isn't even illegal on your device. It would be much more effective to just put actual child porn on your device, which you would need to have anyway to conduct the attack in the first place.

cyanite 2021-08-18 20:16:54 +0000 UTC [ - ]

> You are aware that a lot of CSAM are close ups of say pussies for example, and human anatomy can look very similar?

I doubt images that look quite generic will make it into those hash sets, though.

brokenmachine 2021-08-19 03:47:51 +0000 UTC [ - ]

What is the factual basis for you to doubt that?

FabHK 2021-08-18 21:35:49 +0000 UTC [ - ]

Nearly impossible to verify, though, by construction.

icelancer 2021-08-18 21:46:14 +0000 UTC [ - ]

> I can guarantee nobody will see the inside of a courtroom

This wasn't the question I asked.

shapefrog 2021-08-18 23:08:11 +0000 UTC [ - ]

> grey noise will never pass for CSAM

> You sure that one or more human operators will never make this mistake.

Yes, I can say with 100% certainty that no human operator will ever classify a grey image as a child being raped. Happy to put money on it.

meowster 2021-08-18 23:58:20 +0000 UTC [ - ]

It's possible that the operator accidentally clicks the wrong button.

When dealing with a monotonous task that the operator is probably getting PTSD from, I think the chance is greater than 0%.

Articles about content moderators and PTSD:

https://www.businessinsider.com/youtube-content-moderators-a...

https://www.bbc.com/news/technology-52642633

https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

SXX 2021-08-18 14:35:56 +0000 UTC [ - ]

> The real challenge is generating a real image that could be mistaken for CSAM at low res + is actually benign (or else just send CSAM directly) + matches the hash of real CSAM.

Why do you think the image has to be benign? Almost everyone watches porn, and it will be so much easier to find collisions by manipulating actual porn images which are not CSAM.

Also, this way you're more likely to trigger a false positive from Apple staff, since they aren't supposed to know what actual CSAM looks like.

bo1024 2021-08-18 12:26:04 +0000 UTC [ - ]

> The fact that you can randomly manipulate random noise until it matches the hash of an arbitrary image is not surprising.

Strongly disagree. (1) The primary feature of any decent hash function is that this should not happen. (2) Any preimage attack opens the way for further manipulations like you describe.

brokensegue 2021-08-18 14:37:02 +0000 UTC [ - ]

cryptographic hashes are different from image fingerprints

bo1024 2021-08-18 14:47:04 +0000 UTC [ - ]

That's true. One way to put it is that non-cryptographic hashes are traditionally supposed to prevent accidental collisions, while cryptographic ones should prevent even deliberate collisions.

But hashing is used in many places that could be vulnerable to an attack, so I think the distinction is blurry. People used MD5 for lots of things but are moving away for this reason, even though they're not in cryptographic settings.

nabakin 2021-08-18 10:27:58 +0000 UTC [ - ]

I don't think that is far away either. I won't be surprised if that is achieved within the day, if not sooner.

Also, generating images that look the same as the original and yet produce a different hash.

varispeed 2021-08-18 11:29:57 +0000 UTC [ - ]

> Apple's scheme includes operators manually verifying a low-res version of each image

The reviewer, likely on minimum wage, will report images just in case. Nobody wants to be dragged through the mud because they didn't report something they thought was innocent.

Edd314159 2021-08-18 11:46:48 +0000 UTC [ - ]

I think I am in dire need of some education here and so I have questions:

* Is this a problem with Apple's CSAM discriminator engine or with the fact that it's happening on-device?

* Would this attack not be possible if scanning was instead happening in the cloud, using the same model?

* Are other services (Google Photos, Facebook, etc.) that store photos in the cloud not doing something similar to uploaded photos, with models that may be similarly vulnerable to this attack?

I know that an argument against on-device scanning is that people don't like to feel like the device that they own is acting against them - like it's snitching on them. I can understand and actually sympathise with that argument, it feels wrong.

But we have known for a long time that computer vision can be fooled with adversarial images. What is special about this particular example? Is it only because it's specifically tricking the Apple CSAM system, which is currently a hotly-debated topic, or is there something particularly bad here, something that is not true with other CSAM "detectors"?

I genuinely don't know enough about this subject to comment with anything other than questions.

bo1024 2021-08-18 12:32:18 +0000 UTC [ - ]

Not complete answers but background: apple’s system works by having your device create a hash of each image you have. The hash (a short hexadecimal string) is compared to a list of known CP image hashes, and if it matches, then your image is uploaded to Apple for further investigation.

A devastating scenario for such a system is if an attacker knows how to look at a hash and generate some image that matches the hash, allowing them to trigger false positives any time. That appears to be what we are witnessing.

Edd314159 2021-08-18 12:56:35 +0000 UTC [ - ]

> A devastating scenario for such a system is if an attacker knows how to look at a hash and generate some image that matches the hash, allowing them to trigger false positives any time.

This is my understanding too. But is this not also true for other (cloud-based) CSAM scanning systems? Why is Apple's special in this regard?

nonbirithm 2021-08-18 16:14:56 +0000 UTC [ - ]

They aren't.

Apple could have saved themselves so much backlash and not have caused the outrage to be focused exclusively on them if they hadn't tried to be novel with their method of hashing, and had just announced that they were about to do exactly what all the other tech companies had already been doing for years - server side scanning.

Apple would still be accused of walking back on its claims of protecting users' privacy, but for a different reason - by trying to conform. Instead of wasting all the debate on how Apple and only Apple is violating everyone's privacy with its on-device scanning mechanism, which was without precedent, this could have been an educational experience for many people about how little privacy is valued in the cloud in general, no matter who you choose to give your data to, because there is precedent for such privacy violations that take place on the server.

Apple could have been just one of the companies in a long line of others whose data management policies would have received significant renewed attention as a result of this. Instead, everyone is focused on criticizing Apple.

There is a significant problem with people's perception of "privacy" in tech if merely moving the scan on-device causes this much backlash while those same people stayed silent during the times that Google and Facebook and the rest adopted the very same technique on the server in the past decade. Maybe if Apple had done the same, they would have been able to get away with it.

cyanite 2021-08-18 20:20:05 +0000 UTC [ - ]

Perception aside, Apple’s system is somewhat better for privacy, since Apple needs to access much less data server side.

floatingatoll 2021-08-18 14:35:25 +0000 UTC [ - ]

There’s a lot of overlap between Apple product buyers and “fight the thoughtcrime slippery slope” hackers. The CSAM abusers are presumably (if they’re not stupid) also fanning the flames on that slippery slope perception, because it’s to their benefit if the hackers defeat Apple.

bo1024 2021-08-18 14:48:26 +0000 UTC [ - ]

I don't know. I don't know what other systems use, I just know what I've read recently about Apple's.

cyanite 2021-08-18 20:20:39 +0000 UTC [ - ]

This pretty much sums this entire drama up, I think.

rollinggoron 2021-08-18 14:17:27 +0000 UTC [ - ]

But how does this exact attack and scenario not also apply to Google, Facebook, Microsoft etc... who are also doing the same thing on their cloud servers?

bo1024 2021-08-18 14:52:02 +0000 UTC [ - ]

I don't know what those companies do, hopefully someone who does know will chime in and answer.

cyanite 2021-08-18 20:18:58 +0000 UTC [ - ]

They essentially do the same, just in the cloud meaning they access all images directly.

cyanite 2021-08-18 20:18:28 +0000 UTC [ - ]

That’s an oversimplified and misleading description of how the system works, but ok. I recommend reading the technical description, or even the paper linked from that: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

fortenforge 2021-08-18 18:18:31 +0000 UTC [ - ]

No, I don't think that's what we're witnessing. No one has yet demonstrated that they can take a hash alone and produce an image that matches the hash. It's true that that would be bad, but right now you still need the original colliding image.

magpi3 2021-08-18 12:39:01 +0000 UTC [ - ]

But how would the attacker get the generated image on a person's phone?

bo1024 2021-08-18 14:50:57 +0000 UTC [ - ]

I didn't mean to suggest a targeted attack. If the goal is to just overwhelm Apple's system, the attacker doesn't need to target a particular phone, they just need to distribute lots of colliding images to some phones. Even their own phones, since the colliding images wouldn't be illegal.

Koffiepoeder 2021-08-18 12:45:58 +0000 UTC [ - ]

E.g. send them a whatsapp message that looks innocent

endisneigh 2021-08-18 13:24:24 +0000 UTC [ - ]

They can see the image - why would they import a random image into their library from someone they don’t know?

kemayo 2021-08-18 13:58:16 +0000 UTC [ - ]

WhatsApp has a really weird default behavior: it imports all images you're sent into your photo library.

This is a smart thing to disable, even outside this recent discussion of CSAM.

https://faq.whatsapp.com/android/how-to-stop-saving-whatsapp...

endisneigh 2021-08-18 14:23:05 +0000 UTC [ - ]

In that scenario this attack is unnecessary. Someone could just send you legitimate child pornography and then immediately tell the authorities that you possess said things.

kemayo 2021-08-18 14:25:49 +0000 UTC [ - ]

Yup! It's a legitimately crazy default.

Edit: Though, to be fair, the specific hash-collision scenario would be that someone could send you something that doesn't look like CSAM and so you wouldn't reflexively delete it.

endisneigh 2021-08-18 14:28:13 +0000 UTC [ - ]

If it doesn’t look like it, wouldn’t a human reviewer disregard it once it gets to that point?

Personally I don’t really see the issue.

MisterSandman 2021-08-18 14:57:39 +0000 UTC [ - ]

We don't know how the human review is going to work. Countries with fewer resources are going to find it easier to just arrest/detain any suspects, instead of spending time and money on figuring out which reports are true.

All you have to be is accused of CP for your life to be destroyed. It doesn't matter if you did it or not.

endisneigh 2021-08-18 17:20:35 +0000 UTC [ - ]

What you’re describing is possible even if you didn’t receive anything.

If the government wants to get you they don’t need this Apple scanning tech, or anything at all really

kemayo 2021-08-18 14:31:11 +0000 UTC [ - ]

Yeah, you'd need a very specific set of processes for it to really be a problem.

Kliment 2021-08-18 20:35:56 +0000 UTC [ - ]

Whatsapp imports received images into camera roll by default, and icloud sync is on by default. So you just need to get into a WA group the victim is a member of, and have the victim use default settings.

markus92 2021-08-18 13:48:22 +0000 UTC [ - ]

Auto import could be turned on?

bumblebritches5 2021-08-18 15:32:27 +0000 UTC [ - ]

Seriously?

Look into Pegasus and Candiru

gitgud 2021-08-19 00:59:45 +0000 UTC [ - ]

Basically you are right. There's nothing that special about hash collisions with image recognition.

I think this is blowing up because it's cathartic to see a technology you disagree with get undermined and basically broken by the community...

vesinisa 2021-08-18 10:16:48 +0000 UTC [ - ]

Now this offers Apple a very delicate opportunity to back out of the whole scanning controversy due to technological vulnerabilities.

notquitehuman 2021-08-18 13:56:41 +0000 UTC [ - ]

There’s no going back now. They revealed themselves to be enthusiastic participants in a creeping global surveillance system. They’d have to fire every single executive who passed on an opportunity to tank this thing before they get the opportunity to make the case that they’ve changed.

nonbirithm 2021-08-18 19:39:07 +0000 UTC [ - ]

The idea that Apple should give up because the state of the art right now is flawed misses the point.

Even if it is flawed, in a few years Apple will just release a new version or a different technique that works more effectively for its stated purpose. Other companies will silently improve the server-side scanning techniques they already use after the public outcry started by Apple blows over.

So long as companies feel the need to protect themselves from liability for hosting certain kinds of data, be it criminal or political or moral or anything else, there are no societal checks in place to stop them from doing so.

This is not a technological issue like so many people are trying to frame it as; it is a policy issue.

kleene_op 2021-08-18 12:11:09 +0000 UTC [ - ]

10 bucks they don't.

rootusrootus 2021-08-18 14:38:31 +0000 UTC [ - ]

The only way they will back out is if this misunderstanding of perceptual hashes results in significant blowback from average customers. Apple is certainly not going to be surprised that a perceptual hash collision is technically easy.

Croftengea 2021-08-18 13:20:42 +0000 UTC [ - ]

This! Apple almost never admit their mistakes ("You're holding it wrong"), so the technical argument would allow leaving this feature in an indefinite and perhaps opt-in beta.

SXX 2021-08-18 14:59:52 +0000 UTC [ - ]

Some people here in the comments believe that whoever is going to check reported material on Apple's side will never, ever flag a false positive. We already know that the NCMEC database itself doesn't exclusively contain child porn, but also some other photos closely related to CSAM, even if those photos don't have actual CSAM in them. But let's ignore this fact.

Do people who believe in a benevolent Apple understand that CSAM doesn't all come with a big red "CHILD PORN" sign on it? No demonic feel included. Like any porn, we can suppose that many such images might not even have any faces, or literally anything that makes them different from images of an 18-year-old.

When you think about it, brute-forcing actual porn or some jailbait images doesn't sound that impossible. All you need is a lot of totally legal porn and some compute power. Both exist in abundance.

dannyw 2021-08-18 16:16:42 +0000 UTC [ - ]

I guarantee you that out of millions of alleged CSAM images haphazardly added by local police and internet reports, thousands are legal porn of consenting adults.

SXX 2021-08-18 18:51:32 +0000 UTC [ - ]

I'm totally sure that's very much possible, but even if this database exclusively contained CSAM, it's still very much possible for a human to make a false-positive match, since Apple can only show their snoops your photos, and likely they will be compressed to 360x360 or whatever.

int_19h 2021-08-19 00:59:14 +0000 UTC [ - ]

Quite frankly, I don't even care whether the reviewers are 100% perfect or not. I did not consent to having some random stranger out there reviewing my private photos. Even if it doesn't actually have any legal consequences, it's still unacceptable.

UncleMeat 2021-08-18 11:52:10 +0000 UTC [ - ]

Why is this meaningfully different than, say, what Google Photos has been doing for years?

If you can get rooting malware on the target device then you could

1. Produce actual CSAM rather than a hash collision

2. Produce lots of it

3. Sync it with Google Photos

This attack has been available for many years and does not need convoluted steps like hash collisions if you have the means to control somebody's phone with a RAT.

UseStrict 2021-08-18 13:56:24 +0000 UTC [ - ]

The _initial_ implementation of client-side scanning has the same initial result, but vastly different futures. Up until now it could be (foolishly or otherwise) assumed that Apple had your personal privacy in mind. Now they have demonstrated a willingness to compromise your local device privacy. Right now it's iCloud only, but I'm sure it's a boolean configuration away somewhere to make it scan everything, regardless of whether it's marked for iCloud upload or not. And I don't think Apple is strong enough to withstand a gagged demand to flip that boolean.

On their cloud services (Apple, Google, MS, etc) they can only get to content that a user has implicitly agreed to share (whether or not they understand the ramifications is another conversation). On your device, there's no more separation.

UncleMeat 2021-08-18 14:33:38 +0000 UTC [ - ]

But the discussion here is about malicious actors fraudulently inserting files that trigger alarms. If they have control over your device, "content that a user has implicitly agreed to share" is not a meaningful category.

cyanite 2021-08-18 20:23:18 +0000 UTC [ - ]

How have they demonstrated that they will compromise your privacy? They compromise it less than the systems scanning on the cloud, since like this most of the match is only known to the device and not the cloud.

If you don’t trust Apple to not lie, then the entire discussion becomes a bit moot, I think, since then they could pretty much do anything, with or without this system.

grishka 2021-08-18 14:17:11 +0000 UTC [ - ]

The big difference is that cloud providers do the scanning on their own infrastructure. If you don't want something scanned, you don't upload it to the cloud; it's that simple. But here, your own device is snitching on you.

UncleMeat 2021-08-18 14:35:14 +0000 UTC [ - ]

But the discussion at hand is about malicious actors causing innocent people to trip the alarms. Given the clear capabilities of tools like Pegasus, "just don't enable cloud syncing" is obviously not sufficient to protect against a malicious actor who wants to plant illegal content on your device and trip the alarms.

hannasanarion 2021-08-18 19:02:15 +0000 UTC [ - ]

OK, and so what? "Tripping the alarms" in this case means somebody looks at the pictures that were put on your device without your knowledge, sees that they're all grey blobs, and flags it as a false alarm, case closed.

cyanite 2021-08-18 20:24:05 +0000 UTC [ - ]

No. Only pictures being uploaded to iCloud Photo Library are being scanned.

dang 2021-08-18 18:48:55 +0000 UTC [ - ]

(We detached this subthread from https://news.ycombinator.com/item?id=28219296.)

short_sells_poo 2021-08-18 12:08:22 +0000 UTC [ - ]

Google was not standing on a pedestal preaching privacy. In contrast Apple was trying to appear as the privacy conscious hardware/software vendor. To now implement such a blatantly obvious stepping stone to dragnet surveillance of actual devices is such a hypocritical move that it beggars belief.

Ignoring the whataboutism in your question, we know that privacy once lost is practically impossible to get back. Once the genie is out of the bottle and Apple is doing on device scanning, what's to stop 3 letter agencies and governments around the world to start demanding ever more access? Because that's exactly what's going to happen. "Oh it's not a big deal, we just need slightly more access than we have now."

10 years down the line, we'd have people's phones datamining every piece of info they have and silently reporting to an unknowable set of entities. All in the name of fighting crime.

There needs to be pushback on this crap. Every single one of these attempts must be met with absolute and unconditional refusal. Mere inaction means things will inevitably and invariably get worse over time.

UncleMeat 2021-08-18 12:41:18 +0000 UTC [ - ]

It just seems to me that there are two very different conversations happening at the same time, with people swapping back and forth between them

1. There can be false positives or other mechanisms for innocent people to get flagged.

2. It is bad to do this sort of check on the local disk.

The discussion at hand started as entirely #1. But now you've swapped to #2, talking about government spying on local files. It makes it very difficult to have a conversation because any pushback against arguments made for one point is assumed to be pushback against arguments made for the other point.

stetrain 2021-08-18 13:52:15 +0000 UTC [ - ]

This exactly.

I am mostly convinced based on the technical details that have (slowly) come out from Apple that they have made this system sufficiently inconvenient to use as a direct surveillance system by a malicious government.

Yes a government could secretly order Apple to make changes to the system, but they could also order them to just give them all of your iCloud photos and backups, or send data directly from the Camera or Messages app, or any number of things that would be easier and more useful for the government. If you don't trust your government don't use a mobile device on a public cell network or store your data on servers.

But all of that said, there is still a line being crossed, on principle and precedent, in scanning images locally on device and then reporting out when something "bad" is found.

Apple thinks this is more private than just directly rifling through your photos in iCloud, but I can draw a line between what is on my device and what I send to Apple's servers and be comfortable with that.

short_sells_poo 2021-08-18 14:40:14 +0000 UTC [ - ]

Yes of course it is more private than outright scanning all photos but how does that relate to the ground state of private photos simply not being scanned at all?

And please do not bring in whataboutist arguments like "but other cloud providers are already doing it".

I'm honestly taken aback by how many people on HN are completely OK with what Apple is pulling here.

In my mind, it's irrelevant that the current system is "sufficiently inconvenient to use as a dragnet surveillance system", because it's the first step towards one that is convenient to use and if we extrapolate all the other similar efforts, we know full well what is going to happen.

stetrain 2021-08-18 14:46:16 +0000 UTC [ - ]

I don't think "All user data on device and in the cloud is not scannable for CSAM or retrievable with a warrant" is a tenable position in the current US political landscape, and even less so in other countries.

And my point is that if the government wanted a dragnet they could just legislate or secretly order one. Just like they have done in various forms over the last 20 years. And Apple might not even be allowed to tell us it is happening.

short_sells_poo 2021-08-18 15:22:57 +0000 UTC [ - ]

> "All user data on device and in the cloud is not scannable for CSAM or retrievable with a warrant" Who said anything about warrants? As far as I know the proposed system is proactive and requires no warrants at all.

More to the point, anyone remotely sophisticated can just encrypt CSAM into a binary blob and plaster it all over the cloud providers servers.

Ie, this system will possibly catch some small time perverts at the cost of even more potential of government misuse of the current technical landscape.

I consider it a similar situation to governments trying to legislate backdoors into encryption. A remotely sophisticated baddie will just ignore the laws, and all it does is add risks for innocents.

> And my point is that if the government wanted a dragnet they could just legislate or secretly order one. Just like they have done in various forms over the last 20 years. And Apple might not even be allowed to tell us it is happening.

I don't understand how is this an argument against the pushback at all. So just because it could be worse, we should just throw our hands in the air and say "Oh it's all fine, it could be worse so the (so far) mild intrusion into privacy is nothing to worry about."

The thing is, these things seem to happen step by step so the outrage is minimized. Insert your favorite "frog being boiled slowly" anecdote here. You don't push back on the mild stuff, and before you realize things have gotten so much worse.

stetrain 2021-08-18 15:36:24 +0000 UTC [ - ]

I haven’t ever said this is a good thing and that we should like it.

I’m saying if the concern is that a government orders Apple to change it and do something different, then that’s a government problem and maybe we should try fixing that.

short_sells_poo 2021-08-18 16:15:55 +0000 UTC [ - ]

> then that’s a government problem and maybe we should try fixing that.

But why even give governments hints that people are generally OK with their devices being scanned?

We can argue about the technicalities of how abusable or resilient the current implementation is. But we can agree that it's a step towards losing privacy, yes? We didn't have scanning of iDevices before, now we do.

Because in my mind it's not a long shot to argue that once it becomes normalized that Apple can scan people's phones for CSAM when uploading to iCloud, it's just a small extra step to scan all pictures. The capability is basically in place already, it's literally removing a filter.

And then the next small step is not just CSAM but any fingerprint submitted by LE. And so it goes.

Governments can't legally compel Apple to implement this capability. But if the capability is already there, Apple can be compelled to turn over the information collected. Again, they can't do that if the capability and information doesn't exist.

E.g. if Apple can't decrypt data on your phone because they designed it such, then they can't be forced, even with a warrant, to backdoor your phone. They can legally refuse to add such capabilities.

stetrain 2021-08-18 16:55:56 +0000 UTC [ - ]

> We can argue about the technicalities of how abusable or resilient the current implementation is. But we can agree that it's a step towards losing privacy, yes? We didn't have scanning of iDevices before, now we do.

Yes, that was the intention of my original comment. The “slippery slope” is one of principle and precedent, more than this specific technical implementation.

short_sells_poo 2021-08-18 14:35:36 +0000 UTC [ - ]

I'd argue they are both important points that are tightly interconnected.

#2 is bad on its own, and in my mind it shouldn't even get to the merits of discussing #1, because at that point the debate is already lost in favor of surveillance.

But beyond that, #1 is highly problematic, doubly so given the fact that government surveillance is basically a non-decreasing function.

cyanite 2021-08-18 20:26:09 +0000 UTC [ - ]

I’d argue that #2 is good, since it offers much more privacy than scanning in the cloud. This is based on reading and understanding the technical summary and paper linked from there.

HumblyTossed 2021-08-18 14:03:48 +0000 UTC [ - ]

This can also be used to make Apple's system useless, no? If enough (millions?) of people were to, say, go to a web site and save generated gray-blobs to their phones, it would create enough false-positives to kill this system, right? Maybe game it and have everyone convert their various profile pics to these images.

floatingatoll 2021-08-18 14:24:52 +0000 UTC [ - ]

No, because Apple will simply start killing Apple IDs and blocking hardware by serial number for abusive behavior towards Apple when y’all do that, and that’s a very expensive problem to overcome.

“You tried to exploit our production systems and we’re ending our customer relationship with you over it” is a classic Apple move and no one outside of the Hackintosh community realizes that their devices include crypto-signed attestations of their serial number during Apple service sign-ins.

robertoandred 2021-08-18 14:45:23 +0000 UTC [ - ]

And how would anyone know that those gray blobs match child porn?

jug 2021-08-18 16:01:26 +0000 UTC [ - ]

Exactly, that’s the scary abuse scenario where people could send a whole load of this stuff to e.g a political enemy.

robertoandred 2021-08-18 16:18:15 +0000 UTC [ - ]

What? Why do you think gray blobs would hurt anyone?

dannyw 2021-08-18 16:25:02 +0000 UTC [ - ]

No one is going to send gray blobs; they will find legal porn (pussy close-ups, tongue pics, whatever) and then perturb it to trigger a CSAM hit.

The low-res derivative will match, perhaps even closely, because pussy close-ups look similar to an Apple employee when it's a grayscale 64-by-64-pixel image (remember: it's illegal for Apple to transmit CSAM, so it must be visually degraded to the point where it's arguably not visual).

The victim will get raided, be considered a paedophile by their workplace, media, and family, and perhaps even go into jail.

The attacker in this case can be users of Pegasus unhappy with a journalist.

robertoandred 2021-08-18 17:51:25 +0000 UTC [ - ]

And perturbing it enough to match CSAM would make it obvious it’s been tampered with. And how would the attacker obtain the CSAM hashes they’d need to match?

sierpinsky 2021-08-18 17:53:44 +0000 UTC [ - ]

"According to media reports, the cloud computing industry does not take full advantage of the existing CSAM screening toolsto detect images or videos in cloud computing storage. For instance, big industry players, such as Apple, do not scan their cloud storage. In 2019, Amazon provided only eight reports to the NCMEC, despite handling cloud storage services with millions of uploads and downloads every second. Others, such as Dropbox, Google and Microsoft perform scans for illegal images, but 'only when someone shares them, not when they are uploaded'." [1]

So I guess the question is what exactly "others" are doing, 'only when someone shares them, not when they are uploaded'. The whole discussion seems to center around what Apple intends to do on-device, ignoring what others are already doing in the cloud. Isn't this strange?

[1] https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/6593...

eightysixfour 2021-08-18 23:07:52 +0000 UTC [ - ]

It is a shift in trust. If my things are scanned on my local device, I now must trust that Apple:

* Will not be compelled to change the list of targeted material by government coercion.

* Will not upload the "vouchers" unless the material is uploaded to iCloud.

* Will implement these things in such a way that hackers cannot maliciously cause someone to be wrongly implicated in a crime.

* Will implement these things in such a way that hackers cannot use the tools Apple has created to seek other information (such as state sponsored hacking groups looking for political dissidents).

And for many of us, we do not believe that these things are a given. Here's a fictional scenario to help bring it home:

Let's say a device is created that can, with decent accuracy, detect drugs in the air. Storage unit rental companies start to install them in all of their storage units to reduce the risk that they are storing illegal substances, and they notify the police if the sensors go off.

One storage unit rental company feels like this is an invasion of privacy, so they install the device in your house but promise to only check the results if you move your things into the storage unit.

This sounds crazy, right?

cyanite 2021-08-18 20:59:20 +0000 UTC [ - ]

> The whole discussion seems to center around what Apple intends to do on-device, ignoring what others are already doing in the cloud. Isn't this strange?

Very strange. Especially when this on-device technique means that Apple needs to access far less data than when doing it on the cloud.

magpi3 2021-08-18 12:44:49 +0000 UTC [ - ]

Can someone ELI5? I understand that a person can now generate an image with the same hash as an illegal image (such as child porn), but I don't understand how they can get it on someone's phone and I don't understand why someone would get in trouble for an image, when finally examined, that is clearly not child pornography.

bitneuker 2021-08-18 13:03:28 +0000 UTC [ - ]

A vulnerability by itself is not that dangerous, but in combination with a sophisticated attack, or another vulnerability, it can be disastrous. State actors have the resources to exploit a number of unknown bugs in combination with this collision to have Apple's systems flag persons of interest.

This, combined with human error during the manual review process, might result in someone getting reported. Seeing as Twitter (and other social media sites) jump on the bandwagon whenever someone gets accused of being a pedophile, this might destroy someone's life.

The entire story might seem a bit too far-fetched, but based on past events you never know how bad something as 'simple' as a hash collision can be.

dannyw 2021-08-18 16:28:57 +0000 UTC [ - ]

No one is going to send gray blobs; they will find legal porn (like pussy close-ups, tongue pics, whatever) and then perturb it to trigger a CSAM hit.

The low-res derivative will match, perhaps even closely, because such close-ups look similar to an Apple employee when it's a grayscale 64-by-64-pixel image (remember: it's illegal for Apple to transmit CSAM, so it must be visually degraded to the point where it's arguably not visual).

The victim will get raided, be considered a paedophile by their workplace, media, and family, and perhaps even go to jail.

The attacker in this case could be a Pegasus operator unhappy with a journalist.

FabHK 2021-08-18 21:29:07 +0000 UTC [ - ]

Ok, so you posit an attacker could find/generate 30+ pictures that are

1. accepted by innocent user,

2. flagged as known CSAM by NeuralHash,

2b. also flagged by the second algorithm Apple will run over flagged images server side as known CSAM,

3. apparently CSAM in the "visual derivative".

That strikes me as a rather remote scenario, but worth investigating. Having said that, if it's a 3-letter adversary using Pegasus unhappy with a journalist, couldn't they just put actual CSAM onto the journalist's phone? And couldn't they have done that for many years?

cyanite 2021-08-18 20:27:17 +0000 UTC [ - ]

There is a lot of speculation about things that haven’t happened in that comment.

rootusrootus 2021-08-18 14:39:51 +0000 UTC [ - ]

> State actors have the resources

You can end the conversation right there. If you are up against a state actor, you have already lost.

dannyw 2021-08-18 16:33:07 +0000 UTC [ - ]

Incorrect. A Chinese state actor can't just go around imprisoning journalists they don't like in America, but they can now do it by planting child pornography via remote malware (Pegasus) and watching their enemies get arrested by the US Feds.

rootusrootus 2021-08-18 20:12:22 +0000 UTC [ - ]

Disagree. It isn't just jurisdiction. It is resource access. If the Chinese gov't were coming after little old me right now, I'd be properly worried, even though I'm safely within the boundaries of the US.

bitneuker 2021-08-18 16:23:52 +0000 UTC [ - ]

Very true, just wanted to paint a picture on how this could be abused.

dannyw 2021-08-18 16:34:10 +0000 UTC [ - ]

It's not true at all. You're assuming state actors are in the same jurisdiction. This isn't always the case - think of an oppressive authoritarian regime wanting to get an American journalist arrested for child pornography.

It was always possible before, but client-side CSAM detection and alerting has weaponised it.

Previously, you always had to somehow tip off an unfriendly jurisdiction yourself. Now, you just use malware like Pegasus to drop CSAM, whether real or perturbed from legal porn, and watch as Apple tips off the Feds about your enemies.

floatingatoll 2021-08-18 14:43:46 +0000 UTC [ - ]

State actors will just Gitmo you, without all this wasteful effort on hashes. This system offers no benefit sufficient to make it worth their time if they want to cull you from the population somehow.

dannyw 2021-08-18 16:31:45 +0000 UTC [ - ]

No, China can't just Gitmo an American journalist on American soil.

But now China can send some legal pornography (e.g. close-up pussy pictures), perturbed to match a CSAM hash, to a journalist they don't like and land them in jail.

Why couldn't China do this before? Because previously they'd still need to tip off the authorities, which leaves an attribution trail and faces a credibility barrier. Now they can just use Pegasus to plant these images and then watch as Apple turns the target in to the Feds. Zero links to the attacker.

floatingatoll 2021-08-18 17:27:48 +0000 UTC [ - ]

The scenario you describe has already been extant for the past ten years. Unreported zero-days could have been used at any time to inject a CSAM hit into someone's camera roll, far enough back in time that they wouldn't see it, in order to get them investigated. Their phone would have uploaded it to iCloud or Google Photos or Dropbox or whatever, and the CSAM detection at each place would have fired off. No need for any of this fancy AI static nonsense.

I know of zero instances of this attack being executed on anyone, so apparently even though it's been possible for years, it isn't a material threat to any Apple customers today. If you have information to the contrary, please present it.

What new attacks are possible upon device owners when the CSAM scanning of iCloud uploads is shifted to the device, that were not already a viable attack at any time in the past decade?

vnchr 2021-08-18 12:55:30 +0000 UTC [ - ]

I would think a Message with the attached photo from a burner phone/account would be enough.

manmal 2021-08-18 13:15:18 +0000 UTC [ - ]

Currently, the image would have to be imported into the photos library, and iCloud upload must be enabled.

zug_zug 2021-08-18 13:21:53 +0000 UTC [ - ]

This conversely means that all illegal content can be freely texted, and this system won't even catch the distribution of CP unless those pictures are imported into the photos library and iCloud uploads are enabled.

There's a pretty good chance that it was inevitably going to get expanded to handle pictures arriving at the phone through other means.

cyanite 2021-08-18 20:28:10 +0000 UTC [ - ]

We can take that discussion if and when that happens.

Invictus0 2021-08-18 15:34:36 +0000 UTC [ - ]

Whatsapp has a feature where all images are automatically saved to device as they are received. If automatic iCloud upload is on, then all the conditions are there for a person to innocently click on a spammy Whatsapp message, see a bunch of nonsense grayscale images, and continue on with their day--not realizing they are now being monitored for CSAM.

manmal 2021-08-18 20:15:26 +0000 UTC [ - ]

I agree, that will become a problem.

dannyw 2021-08-18 16:36:48 +0000 UTC [ - ]

Pegasus says hi.

varispeed 2021-08-18 12:55:01 +0000 UTC [ - ]

> that is clearly not

For one, you can't know if that's true, as the image could have been manipulated to appear as such. For example, you wouldn't know if a kind of steganography has been used to hide an image inside the image and NeuralHash picked up on the hidden image.

> but I don't understand how they can get it on someone's phone

There are many vectors. For example, you can leave your phone unattended and someone can snap a picture of an image, or, since a collision may look innocent to you, you might overlook it in an email, etc.

mukesh610 2021-08-18 13:23:37 +0000 UTC [ - ]

> For example, you wouldn't know if a kind of steganography has been used to hide an image inside the image and NeuralHash picked up on the hidden image.

How would NeuralHash pick up a "hidden image"? It only uses the pixels of the image to compute the hash. A hidden image in the metadata would not even be picked up, and no amount of steganography can fool NeuralHash.

> There are many vectors. For example, you can leave your phone unattended and someone can snap a picture of an image, or, since a collision may look innocent to you, you might overlook it in an email, etc.

As stated elsewhere in this thread, random gibberish pixels colliding with CSAM would definitely not be useful in incriminating anyone. The manual process would catch that. Also, if the manual process is overloaded, I'm pretty sure basic object recognition can filter out most of the colliding gibberish.

varispeed 2021-08-18 15:16:04 +0000 UTC [ - ]

The steganography is not there to fool NeuralHash, but to fool the viewer.

NeuralHash would "see" the planted image, but to the viewer it would appear innocent.

I am trying to say that a person reviewing the image manually, without special tools, will not be able to tell whether the image is a false positive and would have to report everything.

reacharavindh 2021-08-18 12:11:44 +0000 UTC [ - ]

I admit that I have not done enough research to have a strong opinion on this, but why is Apple taking this on themselves? As far as I can see, this outrage is because of "scanning on iPhone" that is wildly out of the user's control. Why can't Apple be like others and say we scan the shit out of what you upload to iCloud (and it is in our Terms and Conditions for using iCloud)?

Almost all tech people know that iCloud (and its backups) is not end-to-end encrypted, so Apple can decrypt it as they wish... and those among us who are privacy conscious can comfortably turn iCloud off (or not sign up for it) and use the iPhone as a simple private device?

helen___keller 2021-08-18 13:19:38 +0000 UTC [ - ]

My understanding is that iCloud reports orders of magnitude less CSAM than other cloud services at a similar scale. My guess is that Apple wanted a way to report the CSAM they are currently storing without having to decrypt/inspect every person's personal data (which necessarily opens vectors for e.g. rogue employees doing bad things with people's personal data).

Hence this approach was framed as a privacy win by Apple: they catch the CSAM and they don't have to look at your photos stored online.

cyanite 2021-08-18 20:29:44 +0000 UTC [ - ]

Yes, and if you read (and understand) the technical implementation, it certainly does offer more privacy. But yeah for people who don’t, it often seems much worse.

jefftk 2021-08-18 12:19:27 +0000 UTC [ - ]

The speculation that I've seen here is that Apple is rolling this out ahead of a future announcement that iCloud uploads will be encrypted.

bitneuker 2021-08-18 13:09:22 +0000 UTC [ - ]

Are they not? I would've thought that with Apple advertising privacy and security they would at least encrypt iCloud uploads.

reacharavindh 2021-08-19 09:44:15 +0000 UTC [ - ]

Apple has the decryption keys to iCloud... It is one of those things that many people assume the other way, given Apple's marketing and PR stance.

I for one am aware of this with iCloud and am still using iCloud for convenience (and by choice). If I were in need of better privacy, I can always use the iPhone without iCloud, and it will work. After this "in phone" watchdog implementation, that will not be the case. I will assume that Apple is constantly watching all my unencrypted content in the worst case, on behalf of state actors and intelligence agencies.

We have seen Apple give in to Chinese government spying because it is the law there. The US government could easily ask for this and also apply a gag order preventing Apple from telling the user. The best situation is not to have such a tool at all.

davidcbc 2021-08-18 13:42:08 +0000 UTC [ - ]

iCloud uploads are encrypted in transit and on their servers, but they are not E2E encrypted. Apple has the decryption keys

zug_zug 2021-08-18 13:18:09 +0000 UTC [ - ]

If so they flubbed it.

If they are e2e encrypted they can't be distributed/shared without giving a key of some sort. Why not just scan them at time of distribution, and treat undistributed files the same as any local hard-drive (i.e. not their problem).

cyanite 2021-08-18 20:30:28 +0000 UTC [ - ]

This system could still work with an e2e encrypted cloud library. However, it’s currently not e2e.

neximo64 2021-08-18 10:26:59 +0000 UTC [ - ]

Maybe the best idea is for a sufficient number of people to replicate collisions with hashes from the CSAM database, make copies of photos with nothing in them (like this one), and just let Apple deal with it. Maybe they could have text in them too.

cyanite 2021-08-18 21:09:41 +0000 UTC [ - ]

Best for whom? (Also, we don’t know the exact hash list Apple will use).

jakear 2021-08-18 15:32:18 +0000 UTC [ - ]

Baseless speculation: this is less about ongoing protection against keeping CSAM on their servers and more about a one-time sting operation requested by some sort of acronym’d official. The basic idea being: have a bunch of catfish agents send out these known CSAM images to folks through anonymous channels (sourcing the targets would likely be unfortunately trivial), expect that some of them will save the images and be synced to iCloud, coerce Apple into letting them see who has the material, sting them.

To this end, I would not be at all surprised to see that in some not-too-distant future Apple issues a big ol’ public apology and removes this feature. The operation will of course be long complete by then.

Just writing this out here so I have a “I told you so” link for if/when that time comes :)

azinman2 2021-08-18 16:38:33 +0000 UTC [ - ]

It’s not enough to have collisions — you’ll have to have collisions against ANOTHER hidden model for the same photos, and then have it pass a human looking at it. People are getting unnecessarily angry over this.

avsteele 2021-08-18 12:08:31 +0000 UTC [ - ]

Apple claimed 'one in a trillion' chance of a collision. This is a great example of why you should not trust such an assessment.

manmal 2021-08-18 13:17:34 +0000 UTC [ - ]

That number is based around their plan to only trigger a search if a certain number of CSAM hashes were detected on a phone. A single collision wouldn't trigger such a search. A sufficiently motivated person can of course get access to a phone and put plenty of hash-colliding images in the library.
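To make the threshold arithmetic concrete, here is a rough binomial sketch with made-up numbers. Apple has not published the per-image false-match rate, and real photo libraries are not independent draws, so treat this purely as an illustration of why a single accidental collision doesn't matter much:

```python
from math import exp, lgamma, log, log1p

def log_binom_pmf(k, n, p):
    """log of C(n, k) * p^k * (1-p)^(n-k), computed in log space to avoid overflow."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_comb + k * log(p) + (n - k) * log1p(-p)

def prob_at_least(p, n, threshold):
    """P(at least `threshold` false matches among `n` photos), assuming each photo
    independently false-matches with probability `p` (a plain binomial tail)."""
    return sum(exp(log_binom_pmf(k, n, p)) for k in range(threshold, n + 1))

# Hypothetical numbers, NOT Apple's: a 1-in-a-million per-image false-match rate,
# a 10,000-photo library, and a 30-match reporting threshold.
print(prob_at_least(1e-6, 10_000, 30))   # vanishingly small
```

With inputs like these the tail probability is negligible; the adversarial scenario discussed above matters precisely because it sidesteps the "by chance" assumption.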

arsome 2021-08-18 12:57:22 +0000 UTC [ - ]

To be fair, log_2(1 trillion) is only about 40 bits.
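For reference, the arithmetic behind that figure:

```python
from math import log2
print(log2(1e12))  # ≈ 39.86, so "one in a trillion" corresponds to roughly 40 bits
```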

cyanite 2021-08-18 20:34:10 +0000 UTC [ - ]

That’s a different scenario. That’s for it happening by chance, not on purpose. At any rate, the “matching” images are reviewed (or a “visual derivative” is).

newArray 2021-08-18 15:15:05 +0000 UTC [ - ]

Hash collisions happen by design in a perceptual hash; it's supposed to give equal hashes for small changes, after all.

Something I find interesting is the necessary consequences of the property that small edits result in the same hash. We can show that this is impossible to achieve absolutely; in other words, there must exist an image such that changing a single pixel will change the hash.

Proof: Start with two images, A and B, of equal dimensions and with different perceptual hashes h(A) and h(B). Repeatedly transform one pixel of A into the corresponding pixel of B and recompute h(A). At some point, after a single pixel change, h(A) = h(B); this is guaranteed to happen at or before A = B. Now A and the previous version of A are one pixel apart but have different hashes. QED

We can also ATTEMPT to create an image A with a specified hash matching h(A_initial) but which is visually similar to a target image B. Again, start with A and B, different images with the same dimensions. Transform a random pixel of A towards the corresponding pixel of B, but discard the change if h(A) changes from h(A_initial). Since we have so many degrees of freedom for our edit at any point (each channel of each pixel) and the perceptual hash invariance is in our favor, it may be possible to maneuver A close enough to B to fool a person while keeping h(A) = h(A_initial).
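A minimal sketch of that loop, assuming some callable perceptual_hash(image) that returns the binarized hash (the name is a placeholder, not Apple's API). A gradient-based attack on the public network would converge far faster, but the black-box greedy version mirrors the argument above:

```python
import numpy as np

def nudge_towards(A, B, perceptual_hash, steps=200_000, rng=None):
    """Greedy version of the loop described above: repeatedly move one random
    channel of one random pixel of A one step towards B, keeping only changes
    that leave the hash unchanged. `perceptual_hash` is a placeholder for
    whatever hash is being attacked (e.g. a wrapper around the exported model)."""
    rng = rng or np.random.default_rng()
    A = A.copy()                      # uint8 arrays of identical shape (H, W, C)
    target_hash = perceptual_hash(A)
    h, w, c = A.shape
    for _ in range(steps):
        y, x, ch = rng.integers(h), rng.integers(w), rng.integers(c)
        old = A[y, x, ch]
        A[y, x, ch] = old + np.sign(int(B[y, x, ch]) - int(old))
        if perceptual_hash(A) != target_hash:
            A[y, x, ch] = old         # discard edits that change the hash
    return A                          # closer to B visually, same hash as the original A
```

Whether it gets visually "close enough" depends on how coarse the hash's decision regions are; the proof above only guarantees that a boundary exists somewhere.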

If this is possible one could transform a given CSAM image into a harmless meme while not changing the hash, spread the crafted image, and get tons of iCloud accounts flagged.

mrits 2021-08-18 15:18:04 +0000 UTC [ - ]

The best part of this is we could all be flagged in a database for things like this and not even realize it. Who knows what kind of review process governments are actually keeping around these things.

deadalus 2021-08-18 09:52:59 +0000 UTC [ - ]

Expectation: Political rivals and enemies of powerful people will be taken out because c-ild pornography will be found on their phones. Pegasus can already monitor and exfiltrate every ounce of data right now; it won't be that hard to insert compromising images on an infected device.

Any news about "c-ild porn" being found on someone's phone is suspect now. This has been done before:

1) https://www.deccanchronicle.com/technology/in-other-news/120...

2) https://www.independent.co.uk/news/uk/crime/handyman-planted...

3) https://www.theatlantic.com/notes/2015/09/how-easily-can-hac...

4) https://www.nytimes.com/2016/12/09/world/europe/vladimir-put...

5) https://www.cnet.com/tech/services-and-software/a-child-porn...

bko 2021-08-18 10:21:08 +0000 UTC [ - ]

Isn't it weird how it is weaponized against political enemies but the one person everyone knows did engage in exploitation was protected for decades?

dijit 2021-08-18 10:31:51 +0000 UTC [ - ]

I don't think that's weird.

Those people are not political enemies; they're allies of whatever regime, and their support is important.

Better to keep someone powerful in your pocket than to oust them and let someone incorruptible or uncompromising fill the power vacuum.

LeoNatan25 2021-08-18 10:50:14 +0000 UTC [ - ]

No one is “incorruptible or uncompromising”. They just have to start looking from scratch.

dijit 2021-08-18 10:55:11 +0000 UTC [ - ]

There are many people who are incorruptible or uncompromising; we usually dislike them.

Maybe "not sympathetic" would prevent me from being nerd sniped. :)

newsclues 2021-08-18 11:27:55 +0000 UTC [ - ]

Selective prosecution.

draw_down 2021-08-18 10:28:04 +0000 UTC [ - ]

More par for the course, I’d say.

raxxorrax 2021-08-18 12:47:05 +0000 UTC [ - ]

That you felt the need to kill the 'h' in child is telling enough. Let's face it, security policy from the last 20 years was created by idiots. Plain borderline mentally inhibited idiots. All the surveillance introduced more mistrust in some states than any terrorist could ever have dreamed of. There is no rational cost-benefit analysis, only propaganda about some Ivan being allegedly worse. Yes, great metric...

Sorry for the rant.

Rd6n6 2021-08-18 12:23:33 +0000 UTC [ - ]

That image planting virus concept is the scariest thing. Could you trigger this system and get the police alerted with spam mms with images or spam email with images, or targeted ads that get the gray blob images into your tmp files? Or does it have to be an actual script downloading images, say, one per week in secret until it triggers?

robertoandred 2021-08-18 15:04:00 +0000 UTC [ - ]

No, you couldn't. This is only checking images you upload to your iCloud photo library.

Why would you save tons of gray blobs to your photo library? Why would gray blobs look like child porn? Why would Apple reviewers think gray blobs are child porn? Why would the NCMEC think gray blobs are child porn? Why would law enforcement spend time arresting someone for gray blobs?

nullc 2021-08-18 16:40:20 +0000 UTC [ - ]

The grey blob is a proof of concept. The existence of the original image is proof that not all images which produce the target hash are grey blobs.

Since the grey blob exists, I believe it is fully possible to construct natural(-ish) images that have a selected hash.

So, you should perhaps instead imagine attackers that modify lawful nude images to have matching hashes with child porn images.

With that in mind, most of your questions are answered, except perhaps for "Why would the NCMEC think" -- and the answer would be "because it's porn and the computer says it's child porn".

Of course, an attacker could just as well use REAL child porn. But there are logistical advantages in having to do less handling of unlawful material, and it's less likely that the target will notice. Imagine: if the target already has nude images on their host, the attacker might use those as the templates. I doubt most people would notice if their nude image collection was replaced with noisier versions of the same stuff. And even if they did notice, "someone is trying to frame me for child porn" wouldn't be their first guess. :)

hannasanarion 2021-08-18 18:48:45 +0000 UTC [ - ]

The other thread demonstrates a similar attack with pictures of dogs.

The same questions still apply. Why would you save tons of pictures of dogs to your photo library? Why would pictures of dogs look like child porn? Why would Apple reviewers think pictures of dogs are child porn? Why would the NCMEC think pictures of dogs are child porn? Why would law enforcement spend time arresting someone for pictures of dogs?

What is the scenario where someone can be harmed reputationally or criminally because of a hash collision attack, where the same attack could not be performed more easily and causing more damage by actually using real CSAM images?

nullc 2021-08-18 20:54:50 +0000 UTC [ - ]

I'm really surprised to see a fellow HN poster making such a stark failure to generalize.

You see that they can do this with pictures of dogs. What makes you think they can't do exactly the same with pictures of crotches?

Presumably you don't believe there is some kind of inherent dog-nature that makes dog images more likely to undermine the hash. :) People are using pictures of dogs because they are a tasteful safe-for-work example, not because anyone actually imagines using images of dogs.

An actual attack would use ordinary nude images, probably ones selected so that if you were primed to expect child porn you'd believe it if you didn't spend a while studying the image.

> where the same attack could not be performed more easily and causing more damage by actually using real CSAM images?

Actual child porn images are more likely to get deleted and/or reported by the target. The attacker also takes on additional risk, since their possession of the images is a strict liability crime. This means that if the planting party gets found with the real child porn they'll be in trouble, whereas with the hash-matching legal images they'll only be in trouble if they get caught planting them (and are potentially only exposed to a civil lawsuit, rather than a felony, depending on how they were planting them).

Personally, I agree that the second-preimage-images are not the most interesting attack! But they are a weakness that makes the system even more dangerous. We could debate how much more dangerous they make it.

hannasanarion 2021-08-18 21:54:01 +0000 UTC [ - ]

What kind of non-child-porn imagery is so similar to child porn that it fools the NCMEC investigator who is looking at it side-by-side with its maliciously collided hash match, but is at the same time so dissimilar from real child porn that, as you say, the target won't report its sudden appearance on their device and the sender won't be at legal risk? Your scenario requires the image (not the hash, but the regular human-perceptible image) to be both unremarkably innocent and incontrovertibly damning at the same time.

nullc 2021-08-19 00:17:51 +0000 UTC [ - ]

You're assuming that there is a side-by-side comparison with a hash matching image. I think that is an extremely big assumption which assumes facts not in evidence at all.

As far as what kind of image would be believed to be child porn without a side-by-side with the supposed match: ordinary porn. Without context plenty would be hard to distinguish, especially with the popularity of waxed smooth bodies and explicit close up shots.

Prosecution over "child porn" that isn't is already a proven thing, with instances where the government happily trotted out 'experts' to claim that the images were clearly of a child based on physical characteristics, only to be rebuffed by the actual actress taking the stand. ( https://www.crimeandfederalism.com/page/61/ )

hannasanarion 2021-08-19 15:55:27 +0000 UTC [ - ]

You still haven't explained how an image can be at the same time so benign that the user doesn't delete it from their phone and cloud service and the sender is in no legal danger, and yet so obviously pornographic that a jury will be convinced beyond any reasonable doubt that it's child porn.

The entire point of this process is to catch only images from a known catalogue of existing images, so of course there will be an opportunity for side-by-side comparison. And if the side-by-side comparison can convince a jury that the images are the same, then the result is literally identical to if you had just sent the original image; the hash-checking service hasn't impacted the attack whatsoever.

The idea that new images of nudity could be caught in it is because of a mixup by the press after it was announced, because people confused it with a different parental control feature that detects nudity in images taken by your kids' phones. This is not the same thing.

fouric 2021-08-18 17:50:32 +0000 UTC [ - ]

> Since the grey blob exists, I believe it is fully possible to construct natural(-ish) images that have a selected hash.

At the very least, there are large classes of images that, while not realistic photographs, people will willingly download, that should be very easy to craft collisions for:

Memes. In particular, given the distortions of surreal and deep-fried memes, it should be pretty easy to craft collisions there.

nullc 2021-08-18 20:56:00 +0000 UTC [ - ]

Update: The latest results now have second-preimages that are other photos, not just noise. https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue... They still look pretty bad, but attacks only get better!

robertoandred 2021-08-18 17:49:27 +0000 UTC [ - ]

And where would they get the CSAM hash they need to match?

nullc 2021-08-18 20:46:08 +0000 UTC [ - ]

The bulk of the CSAM hash database comes from widely circulated images, available on sleazy dark web sites (and historically on sketchy porn sites on the open web).

So someone need only find a collection of likely-included images, compute their hashes, and publish them.

The attack works just as well even if the attacker isn't sure that every hash is in the database; they just need enough likely matches to get enough hits.

Consider it this way: some attacker spends a couple of hours searching for child porn and finds some stuff... what kind of failure would the NCMEC database be if it didn't include that material?
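For what it's worth, computing hashes in bulk is the easy part once the model has been extracted. A sketch in the spirit of the community conversion scripts, with the preprocessing (360x360 RGB scaled to [-1, 1]) and the 96x128 projection seed treated as assumptions rather than a specification:

```python
import sys, glob
import numpy as np
import onnxruntime as ort
from PIL import Image

# Bulk-hash a directory of images with an exported NeuralHash ONNX model.
# Usage (hypothetical): python nnhash_bulk.py model.onnx seed.dat /path/to/images
session = ort.InferenceSession(sys.argv[1])                  # exported model
seed = np.frombuffer(open(sys.argv[2], "rb").read()[128:],   # assumed 96x128 float32 seed
                     dtype=np.float32).reshape(96, 128)

def neural_hash(path):
    img = Image.open(path).convert("RGB").resize((360, 360))
    x = np.array(img, dtype=np.float32) / 255.0 * 2.0 - 1.0  # scale to [-1, 1]
    x = x.transpose(2, 0, 1)[np.newaxis]                     # NCHW batch of one
    embedding = session.run(None, {session.get_inputs()[0].name: x})[0].flatten()
    bits = seed @ embedding >= 0                             # binarize the projection
    return "".join("1" if b else "0" for b in bits)

for p in glob.glob(sys.argv[3] + "/*"):
    print(neural_hash(p), p)
```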

Crontab 2021-08-18 11:56:03 +0000 UTC [ - ]

I remember reading the evil handyman story (link 2) a while ago and it scared me into using FDE. (Of course, FDE doesn't solve malware and things like that.)

snarf21 2021-08-18 12:54:53 +0000 UTC [ - ]

I'm against this change from Apple but weren't they already doing this scanning in iCloud? Couldn't someone use it the same way against a "political rival and enemies of powerful people"?

cyanite 2021-08-18 20:35:28 +0000 UTC [ - ]

All major cloud photo library providers do this scan server side, yes. This way offers more privacy for the end user… but evidently most people don’t realize that.

coldtea 2021-08-18 12:15:08 +0000 UTC [ - ]

You think such a thing can happen in the best of all possible worlds, where the powerful always abide by the law and never have politicians in their pockets, and the state only serves and never defames, threatens, or blackmails people?

hanklazard 2021-08-18 11:40:17 +0000 UTC [ - ]

And as an extra horrible side-effect, the use of these kinds of attacks in the political arena will “vindicate” believers of the QAnon conspiracy.

pkulak 2021-08-18 16:26:36 +0000 UTC [ - ]

If your first thought, like mine, was "Who cares? The whole point is that there will be plenty of false positives.", this is a preimage:

https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...

fogof 2021-08-18 19:25:21 +0000 UTC [ - ]

That was, indeed, my first thought. Can you explain why reposting the link that the OP posted is supposed to change my mind, or how it contributes to the conversation?

fortenforge 2021-08-18 18:26:37 +0000 UTC [ - ]

I don't think that's true. This was a second pre-image attack.

yayr 2021-08-18 11:54:01 +0000 UTC [ - ]

How long did it take now to make the Apple algorithm ultimately useless or even harmful?

Apple announcement of NeuralHash: August 5, 2021.

Generic algorithm to generate a different image with a matching hash: August 8, 2021.

one script was already released 10 days ago here https://gist.github.com/unrealwill/c480371c3a4bf3abb29856c29...

cyanite 2021-08-18 21:11:52 +0000 UTC [ - ]

None of this makes the system useless or harmful. Also, it’s not Apple’s algorithm. The actual hash list Apple will use is not accessible to the device.

yayr 2021-08-18 21:29:48 +0000 UTC [ - ]

If anybody with enough motivation can modify any existing harmless image to have the same neural hash as a "tracked database" image, this will create too many false positives. Too many false positives make the algorithm useless.

If someone with even more motivation has the means to put those images onto your device via social engineering, exploits, or maybe even features, and you become the target of a criminal investigation in whatever jurisdiction you happen to be in at the moment, this makes the algorithm harmful.

FabHK 2021-08-19 00:08:15 +0000 UTC [ - ]

> this will create too many false positives. Too many false positives make the algorithm useless

Apple can (and will) run a second algorithm server side to filter out further false positives.

> If someone with even more motivation and the means to put those images onto your device via social engineering, exploits or maybe even features

Such an attacker could just plant CSAM directly. The hash collision has no bearing on it. If, however, it is hash collision you're worried about, they'd be caught during the manual review.

yeldarb 2021-08-18 16:02:35 +0000 UTC [ - ]

I was curious how big of a deal this would be in a real-world attack (eg could someone use this to DDoS Apple's human reviewers?) so I ran the colliding images through another feature extraction model, OpenAI's CLIP, to see if they also fooled it.

They don't; I think it'd be much harder to create an image that both matches the NeuralHash of a CSAM image and also fools a generic model like CLIP as a sanity check for being a CSAM image.

I wrote up my findings here: https://blog.roboflow.com/apples-csam-neuralhash-collision/
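Roughly, that cross-check looks like the following sketch, using the standard openai/CLIP package; the file names are placeholders and any "close enough" threshold on the cosine similarity is arbitrary:

```python
import torch, clip
from PIL import Image

# If a crafted collision really "looked like" its target to a second, independent
# model, its CLIP embedding should be close to the target's.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_embedding(path):
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        return model.encode_image(image).float()

a = clip_embedding("neuralhash_collision.png")   # crafted collision (placeholder)
b = clip_embedding("target_image.png")           # image it collides with (placeholder)
similarity = torch.nn.functional.cosine_similarity(a, b).item()
print(similarity)  # collisions that only fool NeuralHash tend to score low here
```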

dannyw 2021-08-18 16:23:05 +0000 UTC [ - ]

I heard that a lot of content in the CSAM database doesn't individually look like child porn. For example, a vaginal close-up of a 17-year-old that could pass for a close-up of a 19-year-old.

Remember, anything under 18 is treated the exact same way from a legal perspective, and once it's discovered Apple must report it.

So there is an adversarial attack: find legal porn (say, close-ups of 18-year-olds), perturb it to match a CSAM NeuralHash, and send it to your enemies.

The Apple employee will verify a match. The Feds will raid your target and imprison them, and reputationally tarnish them for life.

Now think about how Pegasus can remotely modify your photos. And think about Julian Assange's sexual assault allegations that were admitted to be fabrications by the accuser.

proactivesvcs 2021-08-18 14:48:14 +0000 UTC [ - ]

Many of us have been at the sharp end of a large company's supposed "human review" process when it comes to account lockouts, invalid abuse reports and invalid accusations of breach of terms of service. It simply does not work for too many and leaves us with little or no redress; it is a completely untenable fallback.

When you're promised that this will never happen to you because there are humans in the process remember that they're one or more of underpaid, poorly-trained, poorly-treated, under-resourced or just flat-out terrible at their job.

This may well happen to you and you'll have no way back.

nine_k 2021-08-18 14:57:07 +0000 UTC [ - ]

I think this is a great, high-profile example of how keeping things open helped quickly identify a critical problem.

AFAICT the hash function / perceptual model is openly available, or, at least, easy to find online.

bengale 2021-08-18 15:31:00 +0000 UTC [ - ]

I don't understand why this is a concern. Someone would need 30 or so child porn images to generate these. Then they'd need to create these grey blobs and send them to the target. The target would need to save them in their photo library for some reason so they sync to iCloud. Then when all this triggers the review they'll see they are grey blobs and nothing else will happen?

teitoklien 2021-08-18 15:38:57 +0000 UTC [ - ]

It's astonishing how society went from keeping personal photos private to thinking it's OK to let a random Apple employee scan their photos for suspicion, and then to debating how human reviewers from a faceless corporation can easily stop these mistakes.

I'd trust the algorithm more than the human reviewing it; considering the algorithm already shows flaws and loopholes, the human part of it does not inspire any confidence.

This isn't Facebook, these are your private photos; it shouldn't have gotten to this stage in the first place.

bengale 2021-08-18 16:02:40 +0000 UTC [ - ]

But they won't be my private photos. The only way it gets to the point where a human sees them is if I have a bunch of child porn uploaded to iCloud, or someone puts a bunch of these grey blob images into my library. Neither of those are my images. There is no chance 30+ of my personal images have hash collisions with this database of child porn.

> How society went from keeping personal photos private

Also, didn't local photo development shops use to call the police quite frequently when dodgy images were sent in to be developed? I remember it happening once at our local supermarket.

teitoklien 2021-08-18 16:12:38 +0000 UTC [ - ]

Hash collisions don't need grey images. You can take a perfectly normal image and play around with a range of its individual pixels to get collisions too.

So someone might forward you your personal pics from your meeting with them, and their pixels might get edited by the messenger app you use to save the picture into your photos, to specifically collide its hash with a CSAM image, while to you the image will look perfectly normal.

This is one example. Let's say you 100% trust every developer of every app that you download on Apple's phone.

Fine, but that's already a lot of trust in a lot of people about something that can ruin your life.

Do you now trust every single image that might get auto-downloaded to your gallery by a malicious actor? What if a random anonymous person messages you 100 such colliding images, enough to cross the threshold and get the authorities knocking on your door?

Is this "feature" really worth the inconvenience of authorities knocking on your door and going through legal trouble?

For an iPhone?

bengale 2021-08-18 16:16:31 +0000 UTC [ - ]

> So someone might forward you your personal pics from your meeting with them, and their pixels might get edited by the messenger app you use to save the picture into your photos, to specifically collide its hash with a CSAM image, while to you the image will look perfectly normal

Ok, but then that image is already not private, if I've been sending it to someone that could do that.

> This is one example , lets say you 100% trust every developer of every app that you download on apple’s phone.

I don't have to, they have to ask me before storing images in my photo library. I'm not going to give some fart generator app access to my library.

> Do you now trust every single image which might get auto downloaded to your gallery by a malicious actor , what if a random anon person messages you with 100 such colliding images , enough to cross threshold to get the authorities knocking your door

Beyond the fact that nothing writes images to my library automatically: if they sent me 100 images that are grey blobs or slightly manipulated normal images, they don't get past the review anyway, so no police. If they send me 100 CSAM images, I'll be on the phone to the police anyway.

teitoklien 2021-08-18 16:24:52 +0000 UTC [ - ]

Good on you, but I don't see my parents disabling the auto-save feature, nor my friends.

You're right, you can protect yourself from this, but all the measures you mentioned are non-default settings.

A lot of apps annoyingly turn it on by default; while HN users can turn it off, I doubt my grandparents will go through the same effort or understand it.

The question is, why should they? Their phone was supposed to help them, not be a snitch, and a snitch with flaws at that.

Why should my non-tech friends have to train themselves to protect against "their" phones?

Human society has always tried to put coexistence and trust at the forefront, as its ideal for society.

This makes us and our devices act as snitches on each other.

Snitches for whom? Apple?

But yes, you're right that if it's a normal image it will stop at the review level (although that's on the condition that you trust the Apple employees who are reviewing them).

I just still don't think it's a good idea to even get to this step.

This just makes our computers feel more hostile.

bengale 2021-08-18 16:32:39 +0000 UTC [ - ]

Ok so your fear is someone will target your grandparents with child porn?

If this were such a major problem, why has no one been targeted like this in the last decade, when everyone else was scanning for it?

teitoklien 2021-08-18 16:37:16 +0000 UTC [ - ]

Here’s an example of exactly that

https://news.ycombinator.com/item?id=28221907

bengale 2021-08-18 16:53:13 +0000 UTC [ - ]

It's not an example though. Putting aside the lack of a news report (we'll assume it's true), it's got nothing to do with cloud scanning for images. A picture of someone's children wouldn't trigger these systems at all.

teitoklien 2021-08-18 17:00:46 +0000 UTC [ - ]

> it's got nothing to do with cloud scanning for images

Everyone else is a cloud service scanning for images; cloud or on-device won't change the math. The math stays the same, and here the math makes mistakes.

> A picture of someone's children wouldn't trigger these systems at all

The main post under which we are conversing is about someone having generated a collision to trigger this system.

bengale 2021-08-18 17:27:53 +0000 UTC [ - ]

I feel we're circling the same issues here, so I'll leave it after this. But what you're saying is the point I'm making: many clouds have been doing this for a decade and we've not seen the issues you're worried about. The one example you gave was a photo tech, a human developing photos, making a mistake. Not someone being targeted with material that would trigger Google Photos or Facebook to flag them, not algorithms making mistakes with photos of someone's kids, none of it.

The post we're commenting under has created hash collisions with grey blob images; if you get 100 of these, they won't get past the review step anyway. It'd be a pointless attack.

Like most of the outrage around this system it mostly seems to boil down to FUD.

borplk 2021-08-18 11:27:25 +0000 UTC [ - ]

If you piss off the wrong crowd they're going to drop a small bag with white powder in your photo gallery app if you know what I mean. Along with event logs and all.

Good luck!

newsclues 2021-08-18 11:50:51 +0000 UTC [ - ]

First CP.

Then leaked or unauthorized nudes will get filtered.

Filters for terrorists, and terrorist imagery or symbols.

Start scanning for guns and drugs.

Drug dealers and criminals get added to the list.

How long before it's dissidents, political opponents, and minorities in dictatorships?

How long before Tim Cook's CP filters are used against LGBT groups abroad?

zug_zug 2021-08-18 13:31:37 +0000 UTC [ - ]

It's funny how it's always CP or terrorism we need to watch out for... Never is it the politician who accidentally went to war with the wrong country, or the tax return of the congressman, or a cop sleeping with somebody who's under arrest, or the EPA employee who took a bribe to declare a chemical safe, or mysteriously malfunctioning cameras in a prison the night of an alleged suicide.

Build surveillance to catch those people and I'll listen.

personlurking 2021-08-18 17:39:53 +0000 UTC [ - ]

>It's funny how it's always CP or terrorism we need to watch out for

Change the name on the box to something else, but same message.

https://i.imgur.com/TAHgUPy.jpg

newsclues 2021-08-18 15:33:34 +0000 UTC [ - ]

I am not surprised censorship is used selectively by the powerful against their enemies.

Crontab 2021-08-18 12:23:45 +0000 UTC [ - ]

> How long before before it’s dissidents, political opponents, and minorities in dictatorships?

Siri: "You seem to be having unpatriotic thoughts. We are dispatching someone to help you."

jnsie 2021-08-18 13:35:26 +0000 UTC [ - ]

By dispatching someone you presumably mean “ok, I found this on the web”?

robertoandred 2021-08-18 14:46:19 +0000 UTC [ - ]

Do you know how many billions of pictures of guns and drugs have ever been taken?

newsclues 2021-08-18 15:31:55 +0000 UTC [ - ]

Sure! Lots!

My Instagram account is mainly weed (legal in Canada) and Instagram censorship hits my content occasionally (because illegal cannabis selling on the platform is rampant).

robertoandred 2021-08-18 16:17:26 +0000 UTC [ - ]

And how many of those billions will Apple hash and check for?

newsclues 2021-08-18 16:22:20 +0000 UTC [ - ]

Not zero

robertoandred 2021-08-18 16:23:51 +0000 UTC [ - ]

And how many of those pictures will you have in your library?

cyanite 2021-08-18 20:37:35 +0000 UTC [ - ]

Maybe we can worry about things when and if they are implemented? If you don’t trust Apple it’s a moot discussion anyway.

plutonorm 2021-08-18 13:34:38 +0000 UTC [ - ]

This is the real danger here. Covid made them bold, it set them up as a judge of content passing through their systems. Now they are consolidating themselves in that role. A boot stamping on a human face for all eternity.

newsclues 2021-08-18 15:30:29 +0000 UTC [ - ]

Fact checking, then censoring misinformation, then on-device content scanning. In less than 5 years we have made multiple leaps towards censorship.

rogers18445 2021-08-18 13:46:09 +0000 UTC [ - ]

The hash is supposed to be unaltered by minor perturbations to the input. This requirement ensures that the hash can never be cryptographically secure; collision attacks are therefore trivial the moment the "security by obscurity" aspect is broken.
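A toy illustration of that trade-off, using a simple average hash as a stand-in for NeuralHash (the file name is a placeholder):

```python
import hashlib
import numpy as np
from PIL import Image

def average_hash(img, size=8):
    """Toy perceptual hash (aHash), standing in for NeuralHash: downscale to an
    8x8 grayscale thumbnail and threshold each pixel against the mean."""
    small = np.array(img.convert("L").resize((size, size)), dtype=np.float32)
    return "".join("1" if p > small.mean() else "0" for p in small.flatten())

img = Image.open("photo.jpg").convert("RGB")     # placeholder path
tweaked = img.copy()
tweaked.putpixel((0, 0), (0, 0, 0))              # change a single pixel

print(average_hash(img) == average_hash(tweaked))            # usually True: robust by design
print(hashlib.sha256(img.tobytes()).hexdigest() ==
      hashlib.sha256(tweaked.tobytes()).hexdigest())         # False: avalanche effect
```

The perceptual hash shrugs off the one-pixel edit by design, while the cryptographic hash changes completely; you can have one property or the other, not both.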

orange_puff 2021-08-18 13:14:41 +0000 UTC [ - ]

Today HackerNews learns about adversarial machine learning, which every ML model is susceptible to.

kuu 2021-08-18 09:51:58 +0000 UTC [ - ]

I don't understand the comment in the issue by an iPhone user. Can you see the hashes that the phone generates for each image? Why is that not "obfuscated" / hidden from the user? I mean, I would expect it to be complicated to validate that you have a collision.

scandinavian 2021-08-18 10:16:06 +0000 UTC [ - ]

It is hidden from the user, but obfuscation doesn't work. They probably just called the private API to generate a hash on a jailbroken phone. There's even a link to another piece of software that can do just that, only on macOS.

https://github.com/KhaosT/nhcalc

cyanite 2021-08-18 20:39:16 +0000 UTC [ - ]

The hash table is blinded on the device, and the device never knows if a given image is a hit or not. This is well documented.

jhugo 2021-08-18 10:22:16 +0000 UTC [ - ]

You're holding the iPhone in your hand. You can inspect everything it does, the only thing that varies is the level of difficulty of inspecting it. Obfuscation doesn't work.

nulld3v 2021-08-18 09:55:56 +0000 UTC [ - ]

They could have a jailbroken device.

eptcyka 2021-08-18 10:02:57 +0000 UTC [ - ]

They could just be a dev who works on the code and has toy apps to test it out.

jdlshore 2021-08-18 18:57:00 +0000 UTC [ - ]

You're correct. The amount of misinformation in this thread (and in the other responses to you) is out of control.

The database of CSAM hashes is blinded and no one has the hashes. Without the hashes, this attack is useless.

It's also mitigated by a LOT of checks and balances. First they have to know 30 hashes to target (they're secret). They have to get 30 colliding images on your phone. The images have to be unnoticed by you (why not just infiltrate CSAM, then?) or sufficiently compelling that you don't just delete them. Thirty images have to pass human review at Apple. At least one has to pass human review by law enforcement. Then, and only then, will you be arrested and face a threat.

Short version: If somebody wants to frame you for possessing CSAM, there are much easier ways. There is no new threat here. https://xkcd.com/538/

cyanite 2021-08-18 20:40:23 +0000 UTC [ - ]

> The amount of misinformation in this thread (and in the other responses to you) is out of control.

True it’s not perfect here, but you should see Reddit, then :p. People there hardly even know what they are mad about.

visarga 2021-08-18 19:32:45 +0000 UTC [ - ]

Have you considered that Apple might have a second neural hashing network which is hidden and they let this one network be hacked to throw people off?

faeriechangling 2021-08-19 01:52:27 +0000 UTC [ - ]

What a genius publicity move

raspasov 2021-08-18 13:48:47 +0000 UTC [ - ]

I'm not in favor of assuming that everyone's guilty until proven innocent.

But, as a side note...

I get the feeling that a lot of people assume that the CSAM hashes are going to be stored directly on everyone's phone so it's easy to get a hold of them and create images that match those hashes.

That does not seem to be the case. The actual CSAM hashes go through a "blinding" server-side step.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

cyanite 2021-08-18 20:38:28 +0000 UTC [ - ]

It seems to me that the vast majority of people in forums don’t actually know how this system is implemented. Although people definitely seem more informed here than on Reddit.

maz1b 2021-08-18 16:56:03 +0000 UTC [ - ]

No longer considering purchasing any more Apple devices or services for any team members in my company. This is unacceptable.

daralthus 2021-08-18 17:37:27 +0000 UTC [ - ]

Putting aside the original intent what else can they scan for that's worth doing in the aggregate with this tech?

nojito 2021-08-18 12:56:02 +0000 UTC [ - ]

Any matches are matched again server side to thwart this type of attack.

>Once Apple's iCloud Photos servers decrypt a set of positive match vouchers for an account that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.

They also fuzz this process by sending false positives I think?

notquitehuman 2021-08-18 14:12:20 +0000 UTC [ - ]

It doesn’t matter what they do after “looking for something to report to law enforcement.” Nothing after that makes it less invasive.

floatingatoll 2021-08-18 14:39:01 +0000 UTC [ - ]

Incorrect. The objections here are threefold:

1) This is invasive.

2) Slippery slope.

3) Gets innocents arrested.

Their solution mitigates #3 with review and mitigates #1 with blurring, and those mitigations occur at the point in the process where you claim it doesn't matter what they do.

Please don’t reductively dismiss facts that don’t support your narrative. Saying that it doesn’t matter is wrong and serves only to boost your particular viewpoint on #2.

nojito 2021-08-18 15:45:31 +0000 UTC [ - ]

Disagree. What companies do now is much more invasive. (Indiscriminately scanning cloud storage)

bugmen0t 2021-08-18 12:57:36 +0000 UTC [ - ]

Note that a hash collision is not the same as a pre-image attack. Though it seems to me that finding the latter is also feasible.

rollinggoron 2021-08-18 14:19:31 +0000 UTC [ - ]

How is this scenario unique to Apple but not everyone else who does scanning? e.g. Google, Facebook, Microsoft etc...

steeleduncan 2021-08-18 14:32:57 +0000 UTC [ - ]

If the entity doing the scanning has a copy of the original image they can verify it is illegal before calling the police. With Apple's system they have to call the police on the basis of the image hash without verifying that anything illegal is on the phone.

You can WhatsApp someone an innocent image doctored to have a hash collision with known CSAM. If they have default settings, it will be saved to their camera roll, scanned by iOS, and the police will be called.

Until the arresting police officer explains to them they are being arrested on suspicion of being a paedophile, they won't even know this has happened.

ec109685 2021-08-18 16:13:46 +0000 UTC [ - ]

A low res copy is available to the reviewer on Apple's end to verify.

cyanite 2021-08-18 20:41:35 +0000 UTC [ - ]

Apple doesn’t “call the police”, though, they contact the center for child abuse or whatever it’s called, who will then presumably verify the picture.

dragonwriter 2021-08-18 20:45:21 +0000 UTC [ - ]

The National Center for Missing and Exploited Children, a “private” nonprofit created by the US government and primarily funded by the Department of Justice.

floatingatoll 2021-08-18 14:20:58 +0000 UTC [ - ]

It’s not. The hacker community just never knew or cared about CSAM scanning before Apple did it.

dgb99 2021-08-18 14:56:40 +0000 UTC [ - ]

can someone upvote me please? I need to add my account to my keybase profile.

marcob95 2021-08-18 15:24:59 +0000 UTC [ - ]

Can someone help me figure out why this is such a big deal? Thanks

cyanite 2021-08-18 20:43:28 +0000 UTC [ - ]

I’d say that it’s not, and that the reason many people think it is is that they haven’t fully read or understood the technical description of the system.

But what you should do is read the documents yourself.

a-dub 2021-08-18 15:16:22 +0000 UTC [ - ]

it gets even more fun when you can generate images that are imperceptibly different to the human eye but produce different (and controlled) hash values (see shazam decoys and adversarial images). not sure if "collision" is really the right term here though, as similarity hashing intentionally hashes multiple images to the same hash.
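for the curious, the "controlled" part is standard adversarial-example machinery. a sketch assuming a differentiable reimplementation of the network (hash_net, returning the 128-d embedding) and the 96x128 projection matrix (seed); both names are placeholders, not a real API:

```python
import torch

def perturb_to_target(image, target_bits, hash_net, seed, eps=2/255, steps=200, lr=1e-2):
    """Projected gradient descent on the pre-binarization embedding: push each of
    the 96 projected coordinates towards the sign demanded by `target_bits`,
    while keeping the change inside an (invisible) L-infinity ball of radius eps.
    `image` is a (1, 3, H, W) tensor in [0, 1]; `hash_net` and `seed` are placeholders."""
    x = image.clone().requires_grad_(True)
    target = target_bits.float() * 2 - 1               # {0,1} -> {-1,+1}
    for _ in range(steps):
        logits = seed @ hash_net(x).flatten()           # pre-sign projection, shape (96,)
        loss = torch.relu(0.1 - target * logits).sum()  # hinge loss on wrong-sign coordinates
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad.sign()                     # signed gradient step
            x.copy_(torch.max(torch.min(x, image + eps), image - eps).clamp(0, 1))
        x.grad = None
    return x.detach()
```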

in other news, my google phone just helpfully scanned all client side images, rounded up everything it thought was a meme and suggested i delete them to save space. i wonder if that feature has telemetry built in...

hughrr 2021-08-18 10:04:38 +0000 UTC [ - ]

That’s end game. Now you can use it for targeted attacks against innocent people. This needs to be shut down and disposed of immediately. There is no other outcome which is socially acceptable for Apple.

I feel vindicated now. A lot of people have said I'm insane for dumping the entire iOS ecosystem in the last week. But Craig was still busy steamrolling out the marketing only a couple of days ago about how this is good for the world. No way am I going back.

Edit: I’m going to reply to some child comments here because I can’t be bothered to reply to each one. This is step one down the stairs to hell. If even one person is caught with this it will be leverage to move to step 2. Within a few years your entire frame buffer and camera will be working against you full time.

nbzso 2021-08-18 11:48:46 +0000 UTC [ - ]

Don't feel bad at all. I dumped macOS entirely from my production workflow. I cannot work on a computer knowing that something is "scanning" me, and I am glad that my "paranoid" feeling stopped me from upgrading all the office Macs.

Billionaires at Apple don't give a flying f*ck about users' privacy. It is all vertical integration in the name of world domination. How removed from reality they are. This is a week after Pegasus/NSO, and there is no law that requires them to "scan" on device. So the logical explanation is "growth". Way to go, Apple.

sandworm101 2021-08-18 13:14:27 +0000 UTC [ - ]

>> there is no law that requires them to "scan" on device.

There is. Apple must comply with warrant requests. If they have a system for scanning files on customer devices they must, if presented with a warrant, allow police access to that system. We can quibble about jurisdictions and constitutional protections, but if the FBI shows up with a federal warrant demanding that Apple remotely scan Sandworm101's phone for a particular image, Sandworm101's phone is going to be scanned.

ethbr0 2021-08-18 13:44:25 +0000 UTC [ - ]

That's an incomplete statement.

Currently, they must comply with warrant requests by scanning if they have the ability to scan.

If they have no such ability (say, because they designed their phones from a privacy-first perspective), the law makes no requirement that they create such a capability.

And that's what pisses people off about this.

snowwrestler 2021-08-18 14:50:25 +0000 UTC [ - ]

If Apple launches a system for comparing iCloud uploads to a third-party hash list, then adding the ability to do targeted scans for arbitrary additional law-enforcement-provided hashes would also be a form of creating capability.

The people getting pissed off about this have not, so far as I’ve seen, demonstrated why the law would require Apple to add the capability for targeted scans of arbitrary hashes.

Police can use a warrant to ask you for a video file they suspect you have. They can’t use a warrant to force you to videotape someone—even if you already own a video camera. The same principle applies to Apple.

sandworm101 2021-08-18 15:01:15 +0000 UTC [ - ]

>> They can’t use a warrant to force you to videotape someone

Maybe in the narrow context of US domestic child pornography investigations. In the wider world it is very possible for police to force such things. Even in the US, CALEA demands that certain companies develop abilities that they would not normally want (interception). The principle that US companies need not actively participate in police investigations disappeared decades ago. On the international level, all bets are off.

https://en.wikipedia.org/wiki/Communications_Assistance_for_...

"by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in capabilities for targeted surveillance [...] USA telecommunications providers must install new hardware or software, as well as modify old equipment, so that it doesn't interfere with the ability of a law enforcement agency (LEA) to perform real-time surveillance of any telephone or Internet traffic."

snowwrestler 2021-08-18 15:13:13 +0000 UTC [ - ]

Can you explain why you think CALEA would apply to what Apple announced?

And why such application would have had to wait until Apple announced this? In other words, if the law can force Apple to do things in general, why does the law need to wait for Apple to announce certain capabilities first?

sandworm101 2021-08-18 15:19:20 +0000 UTC [ - ]

>> why you think CALEA would apply to what Apple announced?

Read what I stated. I said no such thing. I said that CALEA shows that companies can sometimes be forced to do things they do not want to do, to take actions at the behest of law enforcement that they would not normally take. CALEA would clearly not be applied to Apple in this case. It would be some other law/warrant/NSL that would force Apple to do something it normally would not do. CALEA stands as proof that such things have long been acceptable under US law.

snowwrestler 2021-08-18 15:35:22 +0000 UTC [ - ]

I understand why you are worried that a more powerful law could be passed by Congress in the future.

I thought this particular conversation was about what Apple can be forced to do by a warrant issued under current law.

HelloNurse 2021-08-18 15:13:48 +0000 UTC [ - ]

> why the law would require Apple to add the capability for ...

This isn't the sort of thing that is mandated by law, but rather requested behind the scenes by espionage, er, "law enforcement" agencies. They might be more or less friendly deals; nice monopoly you have here, it would be a shame if something happened to it...

tantalor 2021-08-18 14:14:35 +0000 UTC [ - ]

Can a warrant compel them to develop the capability?

simcop2387 2021-08-18 14:22:38 +0000 UTC [ - ]

A warrant alone, very likely not. A court order from a judge (slightly different from a warrant), possibly not, but it's not exactly a well-trodden area of legislation/law. It would probably run afoul of the First Amendment given some readings, but it would have to be tested in court, and it would likely be argued that requiring a backdoor via some law doesn't constitute compelled speech, or that making that backdoor isn't protected speech, etc. It's a relatively new area to be explored.

sandworm101 2021-08-18 14:32:48 +0000 UTC [ - ]

Yes. Lavabit. Those warrants demanded that Lavabit alter its system to capture passwords and/or decrypt stored email. Lavabit instead decided to stop operating and delete everything rather than comply. Such warrants have not been fully tested in courts but they do exist.

jamesdwilson 2021-08-18 14:45:23 +0000 UTC [ - ]

IIRC they gave up the information AND they shut down

sandworm101 2021-08-18 15:13:34 +0000 UTC [ - ]

They handed over a hard copy of their private key, a printout. The FBI went to court to demand a machine-readable copy. Then they shut down.

roenxi 2021-08-18 14:44:50 +0000 UTC [ - ]

Have they been served a warrant?

Just because someone could force you to do something you don't want to do is not really a major concern if you choose, preemptively, to do the thing. The concerning part here is Apple signalling that they have executives that don't see phone scanning as a problem. That is a major black eye for their branding.

bcrl 2021-08-18 16:17:03 +0000 UTC [ - ]

Apple's warrant canary disappeared in 2014.

lumost 2021-08-18 14:50:25 +0000 UTC [ - ]

The law gets debatable here. The warrant can ultimately be served only on the owner/responsible party for the system. If Apple decides that they own all iPhones (a claim they've loosely made for a long time, re Flash etc.), then they could be served a warrant for the device.

If they just sold the device to someone, then the police would have to serve a warrant on that individual. My understanding is that as an individual your options are either to comply with the warrant, or to commit a separate crime by destroying the evidence or refusing the warrant.

snowwrestler 2021-08-18 15:00:21 +0000 UTC [ - ]

Apple does not claim they own all iPhones.

The legal principle is the same, actually. The law says you can install whatever software you want on an iPhone and Apple cannot stop you (jailbreaking is legal). But the law cannot force Apple to make it easy for you to do so.

ethbr0 2021-08-18 17:48:23 +0000 UTC [ - ]

I haven't kept up with the latest cases, but I'm not sure where this falls for app-based SaaS.

I.e. If I use an app, on a system in which app updates are possible, which encrypts my data locally and stores it on their cloud, then the owner/responsible party for the app is...?

jodrellblank 2021-08-18 14:14:51 +0000 UTC [ - ]

> "the law makes no requirement that they create such a capability."

and they have not created such a capability. This is built into the image upload library of iCloud. It has less ability to "scan the device" than the iOS updater, which can run arbitrary code supplied by Apple and read/write anywhere on the system, and it is less easily extended into a general-purpose scanner than that is.

treis 2021-08-18 14:50:18 +0000 UTC [ - ]

Warrants are only part of the issue. The bigger issue is civil and criminal liability. If someone uploads child porn to iCloud and then shares it with someone else Apple is possessing and distributing child porn.

Today they aren't liable because there is a law that shields them. But that law comes with strings attached around assisting law enforcement. By doing an end run around those strings Apple is not holding up its end of the bargain and runs the risk of lawmakers removing the liability shield.

If that happens then it's a whole new ball game.

brandon272 2021-08-18 15:30:03 +0000 UTC [ - ]

> If someone uploads child porn to iCloud and then shares it with someone else Apple is possessing and distributing child porn.

If someone sends CSAM using Federal Express, is FedEx legally liable for "possessing and distributing" that material? Does FedEx need to start opening up packages and scanning hard drives, DVDs, USB sticks, etc. to ensure that they don't contain any CSAM or other illegal data?

I really struggle with the lengths that people are going to to justify these moves. If we can justify this, it's pretty simple to justify a lot more surveillance as well. CSAM is not the only scourge in our society.

Reminder: Apple tells us that they consider privacy a "fundamental human right"[1]. That simply does not square with their recent announcement of on-device scanning, and some would argue that it does not square with scanning or content analysis anywhere, especially on behalf of government.

[1] https://apple.com/privacy

treis 2021-08-18 16:30:38 +0000 UTC [ - ]

>If someone sends CSAM using Federal Express, is FedEx legally liable for "possessing and distributing" that material?

Yes:

https://www.dea.gov/press-releases/2014/07/18/fedex-indicted...

That was for drugs but conceptually the same for CSAM.

brandon272 2021-08-18 16:55:28 +0000 UTC [ - ]

This is not at all conceptually the same.

FedEx was not held liable simply because their service was used to mail illegal drugs. They were held liable because not only did they know about the specific instances in which it was mailed, they allegedly conspired with the shippers to facilitate the mailings. They were knowingly mailing packages to parking lots where drug dealers would wait to pick them up:

> According to the indictment, as early as 2004, FedEx knew that it was delivering drugs to dealers and addicts. FedEx’s couriers in Kentucky, Tennessee, and Virginia expressed safety concerns that were circulated to FedEx Senior management, including that FedEx trucks were stopped on the road by online pharmacy customers demanding packages of pills; that the delivery address was a parking lot, school, or vacant home where several car loads of people were waiting for the FedEx driver to arrive with their drugs; that customers were jumping on the FedEx trucks and demanding online pharmacy packages; and that FedEx drivers were threatened if they insisted on delivering packages to the addresses instead of giving the packages to customers who demanded them.

They had drug addicts literally stopping FedEx trucks on the street to intercept packages from online pharmacies. I don't see how this specific situation is analogous to Apple at all.

The fact remains that FedEx is not scanning hard drives, DVD or other storage devices going through their network and they don't appear to be breaking any laws by not doing that scanning.

treis 2021-08-18 18:56:17 +0000 UTC [ - ]

>They were held liable because not only did they know about the specific instances in which it was mailed, they allegedly conspired with the shippers to facilitate the mailings.

Apple knows that CSAM is being sent and conspires to do so (i.e. transmits the image). Conceptually they are the same.

The rest of your post details the practical differences between sending physical packages and digital images.

brandon272 2021-08-18 22:20:59 +0000 UTC [ - ]

I'm not even sure where to start here.

FedEx knows that CSAM is sent using its services in the same way that Apple does. That is to say that both companies know that CSAM has been sent historically, and both know that it is possible to send that type of material using their services, and one could argue that they should assume that their services are used to transfer that data.

Why does one of these companies have to go to such great lengths to search out and report these materials and the other does not? If we suggest that Apple "knows" about CSAM on their network, we must also accept that Apple "knows" about many other crimes in which the devices they sell are used in the planning and execution of those crimes. Why are they only focused on CSAM if they'd also have legal exposure in these other crimes simply for being a hardware and service provider?

Lastly, to suggest that Apple "conspires" to transmit this content is wholly inaccurate. Conspiracy implies two or more parties are in explicit agreement to commit a crime.

treis 2021-08-18 22:56:58 +0000 UTC [ - ]

>If we suggest that Apple "knows" about CSAM on their network, we must also accept that Apple "knows" about many other crimes in which the devices they are sell are used in the planning and execution of those crimes. Why are they only focused on CSAM if they'd also have legal exposure in these other crimes simply for being a hardware and service provider?

It's not knowledge of a crime. Apple is committing a crime by possessing and distributing CSAM.

>Lastly, to suggest that Apple "conspires" to transmit this content is wholly inaccurate. Conspiracy implies two or more parties are in explicit agreement to commit a crime.

But that is exactly what they're doing. They are agreeing to possess and deliver images of which they know a portion are CSAM.

brandon272 2021-08-18 23:58:42 +0000 UTC [ - ]

> Apple is committing a crime by possessing and distributing CSAM.

Apple's liability for content is limited by Section 230. If they find out about specific instances of CSAM on their servers they are required to remove it.

They are not required to hunt for it or otherwise search it out. They are not required to scan for it. Anything currently being done in that regard is voluntary.

> But that is exactly what they're doing. They are agreeing to possess and deliver images of which they know a portion are CSAM.

When you use Apple's cloud services, you agree to multiple legally binding agreements that include provisions about not using their cloud services to store or transmit illegal materials including CSAM.

Someone using Apple's services to do that in contravention of those agreements does not constitute them "agreeing to possess and deliver" that content.

What WOULD constitute conspiracy, and their agreeing to possess and deliver that content, would be if they were informed of a specific instance of content being transmitted and delivered and did nothing about it, or otherwise enabled its continued storage and distribution.

khuey 2021-08-18 14:02:12 +0000 UTC [ - ]

At least in the United States, this is absolutely false. A warrant cannot force them to do something they have no capability to do.

dheera 2021-08-18 16:49:42 +0000 UTC [ - ]

(a) They created the capability, that's the problem

(b) The world is not the United States

snowwrestler 2021-08-18 14:37:49 +0000 UTC [ - ]

I don’t think this is correct.

1) Even if Apple does roll out this system, it’s not a system for scanning files on customer devices. It’s a system for comparing photos being uploaded to iCloud to a single standardized (everyone has the same one) list of hashes.

2) Apple likely has the legal power to refuse to alter that list of hashes, by the same argument they used against the FBI’s request to bypass the unlock code on specific iPhones.

3) Any argument about what Apple will and will not need to do with this system needs to explain how Microsoft Defender (or other AV products) interact with law enforcement, since those are software systems that scan client devices (ALL files, typically) for signature matches.

Grustaf 2021-08-18 13:14:19 +0000 UTC [ - ]

How likely is it that you will have enough colliding images in your photo library to even trigger a review?

I'm guessing you need at least 5 images, perhaps much more, to trigger it. In any case, 1 image is definitely not enough.

sandworm101 2021-08-18 13:22:07 +0000 UTC [ - ]

Not very likely for a normal random person, a grey man. But in the real world, a single collision could be enough for the police to acquire further access to people they are already looking at. If the police want access to your phone for other reasons (drugs, taxes, illegal speech), they can use that one collision to get a warrant which will give them greater access.

It is akin to cops wanting to search a car. They don't need a warrant. They only need to follow the car until it breaks any number of traffic laws. Then the resulting traffic stop lets them talk to the driver, who seems evasive, which gives them probable cause, which gets them a warrant to perform a full search. The accidental collision, weak evidence of an offending image on the phone, opens the door to whatever other investigation they want to do.

merpnderp 2021-08-18 14:22:39 +0000 UTC [ - ]

Isn't Apple seeding their database with fake hits so that a court can't order them to turn over everyone who's just a few shy of Apple's threshold? So when this database leaks, won't there be a lot of people accused of horrific crimes simply because Apple seeded their database with random hits?

Why would people pay a company to falsely accuse them of horrific crimes?

JKCalhoun 2021-08-18 14:35:50 +0000 UTC [ - ]

Since, as I have read, Microsoft and Google already do this, where are all the illicit cop take-downs? I have not heard of any.

sandworm101 2021-08-18 14:55:12 +0000 UTC [ - ]

They are scanning images in online accounts (stuff that is stored on their systems) and system files on machines (files they own). They are not, from what I have read, scanning customer-owned images stored on customer-owned devices.

snowwrestler 2021-08-18 15:06:54 +0000 UTC [ - ]

Every AV (antivirus) software system scans customer-owned files on customer-owned devices. So the question is the same: if it’s easy for law enforcement to deputize such a system, where is the flood of cases built off of evidence gathered this way?

moe 2021-08-18 15:57:49 +0000 UTC [ - ]

I feel like you are both chasing red herrings.

This is not about child porn nor about what AV software can or cannot do.

It's about normalising mass surveillance and implementing populace control. They don't want another Snowden to scare the public with reports about "backdoors" and mass privacy violations.

They want the coming generation to perceive it as normal. Because all phones do it, hasn't it always been this way, and think of the children.

Oh, these dissidents in $distant_country that will be muted and killed using Apple's shiny surveillance tech? Well, evil governments do what they do. But we are not like that. Over here it is only about the child predators. Trust us.

Apple has been an opponent of these developments for decades. Now they are spearheading it.

sandworm101 2021-08-18 15:24:36 +0000 UTC [ - ]

>> Every AV (antivirus) software system

Mine doesn't. If Ubuntu or ClamAV is scanning all my photos and reporting the results to Canonical against my will then I will soon be having words with Mr. Linus.

snowwrestler 2021-08-18 15:42:02 +0000 UTC [ - ]

If law enforcement can force software companies to adapt their software to serve law enforcement purposes, why aren’t Ubuntu or ClamAV doing so already? What makes them better at resisting requests from law enforcement than Apple?

sandworm101 2021-08-18 17:14:38 +0000 UTC [ - ]

>> why aren’t Ubuntu or ClamAV doing so already?

Because if some local police agency like the FBI were altering code on an open source project as large as ubuntu then the world would know about it pretty quickly. The linux community would burn every bridge with Canonical and ubuntu would disappear overnight. The idea of the FBI adding malware to ubuntu unnoticed, malware that openly scans files and reports to remote servers, is the stuff of comic books.

MichaelGroves 2021-08-18 16:41:21 +0000 UTC [ - ]

Canonical and ClamAV aren't trying to curry favor with the US government to stave off antitrust action.

jasamer 2021-08-18 16:27:09 +0000 UTC [ - ]

Apple's solution only scans photos that will be uploaded, and the results of the scan require the server side to be evaluated. The client does not know whether a match occurred.

Grustaf 2021-08-18 13:25:08 +0000 UTC [ - ]

I'm not sure what you mean. If we are talking about Apple's new feature, they will not even be alerted until you have enough matches.

And the police probably won't be using any neural hashes if they have access to your device, so I'm not sure what you are talking about.

sandworm101 2021-08-18 13:28:16 +0000 UTC [ - ]

That is current Apple policy, not any sort of law. If the FBI wants to know if there are any collisions on a particular phone, that will be shared. If the FBI wants to know of all single collisions, that too must be shared. A corporation's internal policy decisions are nothing when faced with a government official carrying a warrant. If they have data indicating possible crimes, no matter how small, they can be forced to divulge it to investigators.

Unlike Lavabit, I don't think Apple will ever close up shop to protect customer data.

Grustaf 2021-08-18 13:32:40 +0000 UTC [ - ]

Well of course, if you believe that the FBI can ask Apple to share anything and they will agree, and not tell us, then what is even the point of this discussion? Then they already have access to every photo, every email, every text. Because Apple controls those apps on your phone.

fsflover 2021-08-18 16:25:02 +0000 UTC [ - ]

The difference is that now they confirmed that they do search your private images.

Grustaf 2021-08-18 23:22:18 +0000 UTC [ - ]

Why would that make it any likelier that they are also secretly searching your files, with another method? Seems completely orthogonal to me.

fsflover 2021-08-19 16:42:06 +0000 UTC [ - ]

They showed that they are willing to do it in one case. Why not in other cases?

jasamer 2021-08-18 16:22:53 +0000 UTC [ - ]

I thought the system is designed such that the threshold must be met before the matches can be checked, i.e. it’s not possible to follow the FBI’s request without changing the entire system.
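
For intuition on how a threshold can be enforced by the cryptography rather than by policy, here is a minimal Shamir-style secret sharing sketch in Python. It is not Apple's code and every parameter is made up, but Apple's published design does use threshold secret sharing of a per-account key, which is the property the toy below shows: with 30 shares the key falls out, with 29 you get garbage.

    # Toy (t, n) threshold secret sharing over a prime field; Python 3.8+ for pow(x, -1, p).
    # Roughly, Apple's design arranges for the server to end up with one usable share of a
    # per-account key per matching photo; nothing decrypts below the threshold.
    import secrets

    P = 2**127 - 1  # a Mersenne prime used as a toy field modulus

    def make_shares(secret, threshold, num_shares):
        # f(x) = secret + a1*x + ... + a_{t-1}*x^(t-1) mod P; shares are (x, f(x))
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0
        total = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    account_key = secrets.randbelow(P)
    shares = make_shares(account_key, threshold=30, num_shares=100)
    assert recover(shares[:30]) == account_key   # 30 shares: the key falls out
    assert recover(shares[:29]) != account_key   # 29 shares: garbage (w.h.p.)

If the deployed system really works this way, honouring such a request would mean changing the protocol itself, not flipping a server-side switch.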

brokenmachine 2021-08-19 01:44:37 +0000 UTC [ - ]

int matchThreshold = 30;

if (numMatches > matchThreshold) { snitch(customer) && customer.life.ruin() }

gorbachev 2021-08-18 14:15:51 +0000 UTC [ - ]

For individual users the risk is very small, but when you have a billion users the chance of innocent people being caught up is pretty much 100%.

Platform owners like Google, Apple, Twitter and Facebook should really keep stuff like that in mind when they deploy algorithmic solutions like this.

Like dhosek said, the manual review step should reduce the risk quite a bit, though.

Grustaf 2021-08-18 14:28:55 +0000 UTC [ - ]

They did keep it in mind; that's why they require 30 matches, which gets the false positive rate down to 1 in a trillion per account per year, so on average it will happen about once per millennium, given a billion users.

So that's per photo library, not per photo. The rate per photo in their testing was 3 in 100 million; they then added a roughly 30x safety margin and assumed it's actually 1 in a million.

https://www.zdnet.com/article/apple-to-tune-csam-system-to-k...
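
A back-of-the-envelope version of that arithmetic, treating per-photo false matches as independent so the match count is roughly Poisson; the 1-in-a-million rate and the threshold of 30 are the figures quoted above, and none of this is Apple's actual model:

    # Back-of-the-envelope only: assumes independent per-photo false matches,
    # so the number of matches in a library is roughly Poisson distributed.
    from math import exp

    def p_at_least(threshold, lam, extra_terms=200):
        # P(X >= threshold) for X ~ Poisson(lam), summing the upper tail directly
        term = exp(-lam)                      # P(X = 0)
        for k in range(1, threshold):
            term *= lam / k                   # walk up to P(X = threshold - 1)
        total = 0.0
        for k in range(threshold, threshold + extra_terms):
            term *= lam / k                   # now P(X = k)
            total += term
        return total

    per_photo_rate = 1e-6                     # the assumed rate quoted above
    for library_size in (3_000, 30_000, 300_000):
        lam = library_size * per_photo_rate
        print(library_size, p_at_least(30, lam))

Under this simplified model even the 300,000-photo library lands many orders of magnitude below the quoted 1-in-a-trillion figure, which is the headroom the safety margin is meant to buy.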

JKCalhoun 2021-08-18 14:33:18 +0000 UTC [ - ]

Not sure why the downvotes. You are absolutely correct that a threshold in the number of positives (false or otherwise) must be met.

To be fair though we do not know what the threshold is. But I would guess even higher than 5 — I would presume 12 or more.

I'm no criminologist (IANAC) but when you read about someone getting busted with child pornography they have hundreds or thousands of images — not one, not five. They're "trading cards" for these creeps.

cyanite 2021-08-18 20:46:33 +0000 UTC [ - ]

Apple says they expect to use 30 as the threshold.

dhosek 2021-08-18 13:47:26 +0000 UTC [ - ]

Apple has publicly stated 30. Plus, there would be a manual review of the images before forwarding to law enforcement. I don't see this collision attack leading to innocent people being handed over to the police. The worst-case (best-case?) scenario is that Apple gives up on the whole hashing system and then there will be zero chance of there ever being end-to-end encryption of photos in iCloud.

floatingatoll 2021-08-18 14:52:29 +0000 UTC [ - ]

I saw 30 mentioned somewhere, and I think that’s a more likely guess than 5. When men have a porn collection, it rarely has less than a hundred images, and so the threshold for CSAM detections can be set quite high in quantity, and enjoy a near-zero false positive rate aside from malicious hackers.

(And yes, 90% of CSAM abusers are men, so y’all will have to find some other way to weaken my argument.)

mschuster91 2021-08-18 12:55:55 +0000 UTC [ - ]

> and there is no law who requires them to "scan" on device

Yet. The demands by "concerned parents" (aka fronts for secret services, puritans/other religious fundamentalists and law-and-order hardliners) to "do something against child porn" have grown ever stronger and more insane over the last few years. (And you can bet that what is used on CSAM will immediately be used to go after legal pornography, sex work, drug enforcement, ...)

Many current and powerful politicians don't understand a single bit about computers, to the point of bragging about never having used one (e.g. Japan's cybersecurity minister) or having assistants print out emails daily and transcribing handwritten responses (can't say more than that this is the situation for at least two German MPs) - but there are young politicians who do, and will replace them over the next decade hopefully. That means it's the last chance for lobbying groups to get devastating laws passed, and we must all be vigilant in spotting and preventing at least the worst of them.

And what is suspiciously lacking in Apple's response: what are they going to do when they are compelled to extend the CSAM scanner by law in the US, India, China and/or EU? It's feasible for Apple to say they'll just be sticking the middle finger towards markets such as Russia, Saudi-Arabia or similar tiny dictatorships, but the big markets? Apple can't ignore these, and especially India and China are heavyweights.

Grustaf 2021-08-18 13:27:47 +0000 UTC [ - ]

Why would they extend the CSAM scanner? It would be much simpler to just use all the OCR and image classification functions they have already deployed.

CSAM scanning is only useful for areas where Apple really doesn't want to even look at the actual material until they are extremely certain that it's a match. If they want to detect anti-government propaganda or something, there would be no such concerns, they would just do a regex search of your inbox. Infinitely more convenient.

sandworm101 2021-08-18 13:43:47 +0000 UTC [ - ]

>> useful for areas where Apple really doesn't want to even look at the actual material

Correct. It has plausible deniability built in. Apple is unable to verify that the images the government is looking for are actually CSAM. They could be political. They could be protest images. They could be Winnie the Pooh. Apple can plead ignorance as it blindly scans for whatever the requesting government asks it to scan for.

Nobody really minds that this system is going to be used for CSAM. What everyone recognizes is how ripe this system is for abuse, how easily it can be leveraged by oppressive governments. And Apple can play the innocent.

mlindner 2021-08-18 14:11:40 +0000 UTC [ - ]

> Nobody really minds that this system is going to be used for CSAM.

I beg to differ. It doesn't matter how evil the content is: no scanning of my computers by outside parties, period. What's more, with scanning in place law enforcement can even plant actual child pornography on people's computers and get convictions all the more easily, because the system self-reports.

Grustaf 2021-08-18 14:18:21 +0000 UTC [ - ]

It's clearly not the scanning you are opposed to, it's the reporting. Apple apps, well a lot of apps, already scan all your content, for classifying images, doing OCR, extracting dates and addresses etc. That is nothing new, it's omnipresent.

jasamer 2021-08-18 16:34:07 +0000 UTC [ - ]

What about the human reviewers at Apple? Those need to confirm matches, and they obviously look at the photos to do so. Apple can’t claim ignorance.

Also, it has to be two requesting governments.

sandworm101 2021-08-18 16:49:13 +0000 UTC [ - ]

The current Apple policy is to use human reviewers. That might change at a moment's notice, and there is no technical reason why Apple couldn't bypass humans for certain requests. One must always judge a new system three ways: how it was meant to be implemented, how it is actually implemented, and how it might be abused by bad actors in the future.

Grustaf 2021-08-18 14:00:07 +0000 UTC [ - ]

Except that Apple will review the photos once you've matched 30 of them, so it's still not possible for the government to misuse it.

kps 2021-08-18 14:10:32 +0000 UTC [ - ]

In China, iCloud is run by the government.

Grustaf 2021-08-18 14:16:29 +0000 UTC [ - ]

It's actually not, but even if it were, that would be yet another reason CSAM scanning is completely irrelevant for government spying.

Any spying really. It's much easier to just look at the images themselves.

ttflee 2021-08-18 15:29:34 +0000 UTC [ - ]

> It's actually not

It's like Joe Wong's joke: it's like peeing in the snow on a dark winter night; there is a difference, but it's really hard to tell.

short_sells_poo 2021-08-18 14:47:29 +0000 UTC [ - ]

So some underpaid contractor reviewing "visual derivatives" that may or may not be CSAM completely prevents governments from misusing this in your mind?

Grustaf 2021-08-18 16:18:36 +0000 UTC [ - ]

Considering how high profile and incredibly sensitive this is, I doubt they will hire an underpaid contractor. It's the opposite of app review, where the volume is very high but the stakes are low.

MichaelGroves 2021-08-18 16:46:03 +0000 UTC [ - ]

Facebook hires underpaid and poorly treated contractors to moderate exactly this same kind of content. I see no reason to believe Apple will be any different, particularly a few months from now when the general public's attention has shifted to other matters (assuming they're even paying attention right now; I don't think the interests of HN are necessarily representative of the general public).

ethbr0 2021-08-18 13:49:40 +0000 UTC [ - ]

Why even keep Apple in the loop? Why not just allow government to submit scanning models directly?

Which Apple will dutifully install and run, because they're required by local laws.

jodrellblank 2021-08-18 14:18:28 +0000 UTC [ - ]

> "Which Apple will dutifully install and run, because they're required by local laws."

Which Apple have stated that they won't do, and have designed the system so they can't do that without it being found out: https://news.ycombinator.com/item?id=28221082

Can't you at least post accurate information about this system and support outrage based on facts instead of fantasy?

mschuster91 2021-08-18 15:10:23 +0000 UTC [ - ]

> Which Apple have stated that they won't do

For your random off-of-the-mill dictatorship, yes.

For the US? EU? China? India? No way they can refuse such a request from these markets. And if they could and get away with it, it would be a very worrying situation in itself regarding (democratic) control of government over global mega corporations.

jodrellblank 2021-08-18 15:28:04 +0000 UTC [ - ]

If you believe "the government can compel any company to do anything", then this system is no worse than any other.

Although how do you think Linus Torvalds "managed to get away with refusing" adding backdoors? https://www.techdirt.com/articles/20130919/07485524578/linus...

mschuster91 2021-08-18 16:58:29 +0000 UTC [ - ]

Simple reason: every line of code in the Linux kernel passes many eyeballs in the open. Even if someone were to compel Linus Torvalds to commit a backdoor and he would comply with that order, people across many jurisdictions would notice this immediately, rendering the backdoor worthless.

jodrellblank 2021-08-18 14:08:05 +0000 UTC [ - ]

> "and what is suspiciously lacking in Apple's response: what are they going to do when they are compelled to extend the CSAM scanner by law in the US, India, China and/or EU?"

from https://daringfireball.net/linked/2021/08/09/apple-csam-faq - "we will not accede to any government’s request to expand it."

From https://www.msn.com/en-us/news/technology/craig-federighi-sa... - "“We ship the same software in China with the same database we ship in America, as we ship in Europe. If someone were to come to Apple [with a request to scan for data beyond CSAM], Apple would say no. But let’s say you aren’t confident. You don’t want to just rely on Apple saying no. You want to be sure that Apple couldn’t get away with it if we said yes,” he told the Journal. “There are multiple levels of auditability, and so we’re making sure that you don’t have to trust any one entity, or even any one country, as far as what images are part of this process.”" - Craig Federighi

jkestner 2021-08-18 14:20:24 +0000 UTC [ - ]

A “government’s request” is different from a country’s law. “This process” is not the only process by which third parties can breach your privacy.

ksec 2021-08-18 14:55:09 +0000 UTC [ - ]

I actually want Apple to stand their ground and implement this feature. Like you said, the doubling down on PR and marketing was enough for me. I may not be dumping all of iOS and Mac for now, but it was "the" definitive signal and evidence that this is no longer the old Steve Jobs Apple. It is like watching Mark Zuckerberg talking about privacy when he doesn't understand anything about it. (Or more like he has a different understanding of privacy than most people.)

If they stand their ground, it makes the decision a lot easier for others. If they don't, they will continue to use boiling-frog tactics.

After all, they don't think they are wrong, and they still feel righteous. And it would be far more interesting to see how the world unfolds. In the whole computing market, Google, Apple and Microsoft are currently operating with a moat that is literally impenetrable. This will be a small push towards a new world of possibilities.

Yes. I really hope they stand their ground. I wish I'd won the lottery so I could put a few million into the social media echo chamber to support this. Or, you know, cough, a certain Apple competitor could do this.

danuker 2021-08-18 15:11:29 +0000 UTC [ - ]

Chances are by the next sales report it will be forgotten. Let's see how deep the memory hole goes.

mda 2021-08-18 17:43:34 +0000 UTC [ - ]

"Steve Jobs Apple"? I don't think Jobs would give a damn about people crying about Apple's decisions. I don't know why people thinks he would be a smidgen better than whoever managing apple after him.

ksec 2021-08-18 20:55:48 +0000 UTC [ - ]

Because he actually understood privacy better than 99.9% of people in Silicon Valley. He was also a product person who understood how users feel. Compare that to the current Apple, which is "still" trying to give me a technical explanation of what it is and what it's not.

floatingatoll 2021-08-18 15:23:07 +0000 UTC [ - ]

Steve Jobs would have implemented this in secret and never told us at all, same as they already did for iCloud photos so many years ago. That would have been a far better approach than today’s Apple is taking. Oh well.

dmix 2021-08-18 16:25:39 +0000 UTC [ - ]

There’s a significant leap from implementing something server-side to implementing it on the consumer handheld devices themselves. Even if it’s just similar software, it is regardless much more serious.

I personally thought the unencrypted backups were enough of a death knell, as they provide everything on your phone, but anything with real-time on-device access is always a gold mine for surveillance hawks.

floatingatoll 2021-08-18 17:17:30 +0000 UTC [ - ]

It's serious to you, but CSAM scanning is irrelevant to 99.99999% of Apple's customers, and Jobs would never have allowed an announcement about moving CSAM scanning from the cloud side of the upload to the device doing the uploading. That's an implementation detail that wouldn't be relevant to discuss with outsiders. Instead, I expect he would have presented it in a closed session to the FBI and/or Congress. Never to the press, not like this.

neolog 2021-08-18 17:46:29 +0000 UTC [ - ]

> CSAM scanning is irrelevant to 99.99999% of Apple's customers

How long until a group of governments tells Apple to add Tank Man to the list?

floatingatoll 2021-08-18 19:09:35 +0000 UTC [ - ]

Why would they bother? That's a terrible way to approach it.

Just pass legislation requiring in-country datacenters that can be decrypted by thoughtcrime enforcers, like Russia and China are doing. Trying to get this done via a CSAM list that's absurdly closely audited would be a huge waste of time and not provide any significant benefit, and if such a request were ever made public, would likely result in severe political and economic sanctions.

That's what everyone's missing in this argument. There's no need to be all underhanded and secretive when you can just pass laws and conduct military-backed demands upon companies using those laws. Trying to exploit the CSAM process would be a horrifically bad idea, and would result in public exposure and humiliation, rather than the much more useful outcome that simply passing a law would provide.

neolog 2021-08-18 21:12:18 +0000 UTC [ - ]

Without the technology deployed, Apple can (and did) say they don't have the ability to break into users' phones.

If Apple deploys on-phone scanning, governments can just tell Apple to support a new list. It won't be the NCMEC CSAM list. It will be a "public safety and security" list. I wouldn't rule out underhandedness either. [1]

[1] https://www.nytimes.com/2020/07/01/technology/china-uighurs-...

floatingatoll 2021-08-18 21:31:15 +0000 UTC [ - ]

Apple already has technology deployed to perform binary file scans of every file on macOS and iOS, and the ability to release signatures for those scans at any time, with updates that are very difficult for normal users to prevent. They've had that for years, maybe even a decade by now, and so far we have seen no abuse of that list.

How is Apple's new CSAM list somehow increasing the chances of Apple going rogue, given that we've all been living with that risk for the past X years?

neolog 2021-08-18 21:45:46 +0000 UTC [ - ]

What technology are you referring to as already deployed?

floatingatoll 2021-08-18 22:02:47 +0000 UTC [ - ]

For macOS, I'm talking about XProtect and MRT. I don't know the exact subsystem names on iOS, apologies.

https://support.apple.com/guide/security/protecting-against-...

Each system is closed source, provides a mechanism for checking content signatures against files on disk, and is thought to report telemetry to Apple when signatures are found.

How is CSAM scanning new and different from those existing closed-source systems?
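
For a sense of what "checking content signatures against files on disk" means in its simplest form, here is a hedged sketch of a hash-based signature scanner in Python. XProtect/MRT are closed source and reportedly use richer YARA-style rules rather than bare file hashes, so this is only the general shape of such a tool; the signature value is a placeholder.

    # Shape of a hash-based signature scan; the signature below is a placeholder,
    # and real tools (XProtect, AV engines) ship far richer rule sets than this.
    import hashlib
    import os

    KNOWN_BAD_SHA256 = {"0" * 64}   # placeholder signature, not a real one

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan(root):
        # yield files under `root` whose exact contents match a known signature
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if sha256_of(path) in KNOWN_BAD_SHA256:
                        yield path
                except OSError:
                    continue   # unreadable file, skip it

    for hit in scan("."):
        print("signature match:", hit)

The interesting differences are all in who supplies the signature list and what happens on a match, not in the scanning loop itself.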

neolog 2021-08-18 22:39:09 +0000 UTC [ - ]

I'd say the primary differences are that the CSAM scan is a perceptual hash rather than a regular file hash, and that the technical infrastructure of the CSAM system is designed from the ground up to be used against (rather than for) the user and report them individually to authorities for violation.
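
To make the perceptual-hash versus file-hash distinction concrete, here is a toy difference hash ("dHash") next to a plain SHA-256. NeuralHash is a learned embedding rather than dHash, so this only illustrates why a perceptual hash survives small pixel changes that completely change a cryptographic file hash; the synthetic image is invented for the example.

    # Toy perceptual hash (dHash) vs. a plain file hash; NeuralHash itself is a
    # neural-network embedding, this only illustrates the robustness difference.
    import hashlib

    def dhash(gray, hash_w=8, hash_h=8):
        # shrink by sampling, then record whether each sampled pixel is brighter
        # than its neighbour to the right in the sampled grid
        h, w = len(gray), len(gray[0])
        rows = [int(i * h / hash_h) for i in range(hash_h)]
        cols = [int(j * w / (hash_w + 1)) for j in range(hash_w + 1)]
        bits = 0
        for r in rows:
            for j in range(hash_w):
                bits = (bits << 1) | (gray[r][cols[j]] > gray[r][cols[j + 1]])
        return bits

    def file_hash(gray):
        return hashlib.sha256(bytes(v for row in gray for v in row)).hexdigest()

    # synthetic 64x64 "photo" and a copy with one pixel nudged slightly
    img = [[(x * x + y * y) % 256 for x in range(64)] for y in range(64)]
    tweaked = [row[:] for row in img]
    tweaked[10][10] = (tweaked[10][10] + 3) % 256

    print(dhash(img) == dhash(tweaked))          # True: perceptual hash unchanged
    print(file_hash(img) == file_hash(tweaked))  # False: file hash completely different

A single changed byte flips the SHA-256 entirely, while the perceptual hash is designed to tolerate exactly that kind of change, which is also why engineered collisions like the one in the article are possible at all.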

floatingatoll 2021-08-18 22:50:42 +0000 UTC [ - ]

Do you have an alternate design in mind that is both "used for the user", and is also effective at reporting CSAM content being uploaded from the device, without allowing CSAM abusers to opt-out of that reporting? I haven't been able to come up with anything myself, but maybe you've had better luck.

neolog 2021-08-19 00:27:38 +0000 UTC [ - ]

I can only point to other people who know more than me.

https://stratechery.com/2021/apples-mistake/ is a smart tech commentator

https://www.nytimes.com/2021/08/11/opinion/apple-iphones-pri... are two security/encryption experts

hughrr 2021-08-18 18:25:27 +0000 UTC [ - ]

Exactly this. I see lots of people saying that Apple are forced to implement something via legislation as if it’s an excuse for it.

If they want to do business in China they will be forced by their legislation.

My entire argument is don’t build the mechanism.

floatingatoll 2021-08-18 19:19:39 +0000 UTC [ - ]

China forced Apple by legislation to implement new iCloud algorithms for assigning China-region user data into China-hosted datacenters. Most countries, unlike the US, are not constrained by a requirement to only exercise previously-built mechanisms and not create new ones, in response to government demands. If China decides to require Apple to censor non-CSAM content on-device, they will do so whether or not CSAM content fingerprinting exists. That China has not done so is because they benefit greatly from Apple's manufacturing and sales and do not wish to create a diplomatic incident with Apple.

cyanite 2021-08-18 20:44:56 +0000 UTC [ - ]

> China-hosted China-decryptable datacenters.

China hosted, yes, but Apple denies China-decryptable, so that’s speculation unless you have a good source.

floatingatoll 2021-08-18 20:59:11 +0000 UTC [ - ]

Nope, that's just me remembering wrong. Deleted those two words, thanks for the correction :)

tandav 2021-08-18 17:26:42 +0000 UTC [ - ]

As long as the iOS source code is closed, all privacy claims are only backed by trust. They can easily do whatever they want if you're not compiling from source. There's no way to ensure your data is not leaving your iPhone/Mac in some "system" network requests.

cyanite 2021-08-18 20:45:33 +0000 UTC [ - ]

Security researchers and hackers routinely look at the code on the device, though.

ksec 2021-08-18 20:48:15 +0000 UTC [ - ]

Well, Steve Jobs was the one who resisted PRISM and gave the middle finger to the NSA.

floatingatoll 2021-08-18 20:56:13 +0000 UTC [ - ]

NSA PRISM was an illegal and warrantless text analysis and search system for most communications on the Internet; it collected all communications without bothering to filter at all, and had no protections in place to prevent people from randomly searching and reading content out of human curiosity.

Apple's CSAM implementation protects the user against algorithm defects, does not expose the user to legal trouble until they have at least 30 human-verified visual matches of CSAM content, and occurs on-device using a small set of confirmed and audited signatures to ensure that CSAM scanning requires a central system only for verifying the unverified, blurred, positive matches.

I would hesitate to compare Steve Jobs' views on PRISM to his likely views on something that is so clearly opposed to it in so many ways. So I do not yet understand your viewpoint that Apple CSAM scanning and PRISM would have been treated equally by Steve. Help me understand?

devwastaken 2021-08-18 14:26:13 +0000 UTC [ - ]

The collisions are supposedly reviewed, but my concern is that the process for PhotoDNA isn't always going to be the same, and we don't actually have any knowledge as to whether their claims are true. Eventually they will phase out human intervention and replace it with gameable AI. Three-letter agencies don't need to review the actual images; they can easily get federal warrants based on some numbers.

Apple is content with putting in backdoors. There's collusion behind the scenes here: they know it's bad, but hey, it's Apple, it's run by manipulators and liars and it forms its own cult.

atoav 2021-08-18 14:44:24 +0000 UTC [ - ]

One thing about this is that we are farther and farther away from "if there is evidence for a crime a jury can understand it and rationally decide what to do with it" and we are getting closer to "if this lightbulb is glowing the machine says they are guilty, so better trust us".

This kind of stuff should not be evidence; if anything it should be an indicator of where to look.

hannasanarion 2021-08-18 19:03:02 +0000 UTC [ - ]

In what case do you think the hash will be introduced as evidence, where the actual image being hashed, which is a grey blob, cannot?

vegetablepotpie 2021-08-18 13:10:27 +0000 UTC [ - ]

> I’ve dumped the entire iOS ecosystem in the last week.

Not to go off topic from your main point, but what did you move to?

ByteWelder 2021-08-18 13:43:47 +0000 UTC [ - ]

I'm not the one you responded to, but did something similar and feel like sharing:

- Desktop: Manjaro Gnome, for that amazing macOS-like desktop. It even does the 3 finger swipe up to see all your apps with Apple's Touchpad.

- Phone: OnePlus 8T with microG variant of LineageOS (alternatives: Pixel line-up), because that allows me to still receive push notifications. I have over 40 apps and only found 1 that didn't work so far (which is Uber Eats, because they seem to require Google Advertisement ID). I pushed a modified Google Camera app to it, so my camera is better supported. I think only 3 out of 4 cameras are working, but I don't care.

- Watch: Amazfit GTR 2e with the official app. Alternatively it should work with Gadgetbridge if you don't want to use the official app ("Zepp"). Amazfit GTR 2 is a better option if you want it to have WiFi and want to store music on it.

drvdevd 2021-08-18 17:31:51 +0000 UTC [ - ]

Thanks for sharing. Time to start compiling these lists.

hughrr 2021-08-18 13:13:21 +0000 UTC [ - ]

Linux, dumbphone, DSLR. This was the final straw on a planned exit to be honest.

rootsudo 2021-08-18 13:40:39 +0000 UTC [ - ]

That's interesting, I may just do the same thing as well - the camera is the biggest reason I want to be on a phone.

I may just go LineageOS.

Sanzig 2021-08-18 14:10:33 +0000 UTC [ - ]

If you have a Pixel, or are willing to buy one, CalyxOS [1] may be worth a look. It's a privacy-focused ROM that still integrates microG, so it's compatible with most apps (unlike the other popular privacy-focused ROM GrapheneOS [2] which doesn't support microG).

The big advantage to CalyxOS or GrapheneOS versus Lineage is they support re-locking the bootloader, which means that verified boot still works - important if your phone gets swiped.

[1] https://calyxos.org/ [2] https://grapheneos.org/

sphinxcdi 2021-08-19 14:17:08 +0000 UTC [ - ]

> It's a privacy-focused ROM that still integrates microG, so it's compatible with most apps (unlike the other popular privacy-focused ROM GrapheneOS [2] which doesn't support microG).

Many apps are also compatible with GrapheneOS and installing https://grapheneos.org/usage#sandboxed-play-services provides broader app compatibility.

rootsudo 2021-08-18 21:40:47 +0000 UTC [ - ]

Thank you for the info, I will go buy and experiment about this.

zepto 2021-08-18 14:02:37 +0000 UTC [ - ]

Vindicated?

This can’t be used for a targeted attack. It’s no different from the previous false posting making this claim.

zekrioca 2021-08-18 18:01:11 +0000 UTC [ - ]

Imagine if WhatsApp and other apps added received photos to iPhone's iCloud Photos gallery by default.. wait, they do.

zepto 2021-08-18 18:12:23 +0000 UTC [ - ]

Imagine if Apps had to ask permission before they could save photos…

Imagine if the local photo storage is not the same as iCloud Photo Library…

Imagine if anyone who cared could simply switch off iCloud Photo Library…

drvdevd 2021-08-18 17:12:28 +0000 UTC [ - ]

> Within a few years your entire frame buffer and camera will be working against you full time.

This. This is hard enough to explain but with all the PR around whether the feature is good or bad it's even more difficult. Putting this functionality on device is equivalent to modifying the hardware directly, because software is eating the world.

gjsman-1000 2021-08-18 14:16:32 +0000 UTC [ - ]

It's not really end-game, because the original hashes are, themselves, hashed and not available (so you don't have a hash to work towards). And second, even if you somehow managed to get over that huge leap, raw noise won't pass Apple's review, so you have to reverse-engineer a new image that looks like CSAM, to match a hash you don't have. Big leaps required.

cyanite 2021-08-18 20:50:31 +0000 UTC [ - ]

Technical nitpick: the hashes are not hashed (again), but rather cryptographically blinded. This is not reversible, so it behaves like a hash, but computations can be done on it whose result can, in some cases, be seen by the server. This is actually all very clever, and described in a paper by Apple, linked from the technical summary article.
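
A toy version of the blinding trick, for intuition only: raise the hash to a secret exponent in a finite group. The tiny modulus and the names below are stand-ins (Apple's construction uses elliptic curves with a full PSI protocol on top), but it shows both properties at once: the device can't invert the blinded values it is shipped, yet computation on them still composes in a way the secret holder can finish.

    # Toy exponent blinding in the multiplicative group mod a prime; real deployments
    # use elliptic curves and a full PSI protocol, this only shows the core trick.
    import hashlib
    import secrets

    P = 2**127 - 1   # toy modulus, far too small for real use

    def h(item):
        # map an item (e.g. a NeuralHash) to a nonzero group element
        return int.from_bytes(hashlib.sha256(item).digest(), "big") % P or 1

    server_secret = secrets.randbelow(P - 2) + 1   # known only to the blinding party

    def blind(item):
        # what the device is shipped: H(item)^s mod P; inverting this means
        # solving a discrete log, so the device cannot read the underlying list
        return pow(h(item), server_secret, P)

    # exponentiation commutes, so a party holding only blinded values can still
    # take part in further computation that the secret holder later completes
    device_secret = secrets.randbelow(P - 2) + 1
    fp = b"example fingerprint bytes"
    assert pow(blind(fp), device_secret, P) == pow(pow(h(fp), device_secret, P), server_secret, P)

In the real protocol the roles are arranged so that only the server learns whether a match occurred; the point here is just that blinding hides the database without destroying its usefulness.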

ballenf 2021-08-18 15:21:53 +0000 UTC [ - ]

Is it theoretically possible that a photo filter effect could raise the false positive rate enough to make the 1-in-a-trillion number off by several orders of magnitude?

Could be either malicious or accidental (less likely presumably).

Shorel 2021-08-18 15:23:09 +0000 UTC [ - ]

That's just a checkbox that indicates further action is required. All hashing algorithms have potential collisions.

Now, the database of hashes is not public; AFAIK it is in the hands of the FBI.

robertoandred 2021-08-18 14:59:34 +0000 UTC [ - ]

You feel vindicated because someone made one image look like another?

spullara 2021-08-18 17:47:20 +0000 UTC [ - ]

What exactly is the targeted attack? To get a bunch of someone's random looking photos reviewed by a human?

dgb99 2021-08-18 14:56:16 +0000 UTC [ - ]

can someone upvote me please? I need to add my account to my keybase profile.

elisbce 2021-08-18 10:22:34 +0000 UTC [ - ]

You are insane.

1. Dumping the iOS ecosystem and picking what? You think Android is better and won't have this? iOS is the strongest mobile system in terms of privacy protection available to date. Hell, the FBI doesn't even need to ask Google to decrypt an Android phone.

2. Theoretically you can target attacks against anyone; it is just a matter of effort. If you are a political target, they can already plant spyware around you to track and monitor you. They don't even need to break into your phone.

3. If you do not possess CSAM material and are not one of those targets, then you are not worth the effort to attack or monitor. They don't care. And to be honest, this is the best (and might be the only real) way to stay private.

user-the-name 2021-08-18 10:13:01 +0000 UTC [ - ]

It can't. No actions are taken on hashes alone. The procedure is, if an account uploads some number of images with matching hashes, those images are verified by a human.

This can attack that system itself, though, by overloading those humans with too much work looking at random noise, but that requires quite a large organised effort. It also requires getting a hold of actual blacklisted hashes, which I doubt anyone has, unless they have actual child pornography.

reuben_scratton 2021-08-18 10:17:53 +0000 UTC [ - ]

"Verified by a human" - and that human will be an overworked, underpaid, overseas subcontractor who may well have an incentive to mash the "Confirm match" button from time to time to improve his performance.

nomel 2021-08-18 16:48:16 +0000 UTC [ - ]

The "verified by a human" is only the the last step of the process for Apple. After that, it's given to NCMEC for review, then it's reported to authorities. The Apple subcontractor isn't the only one verifying things here. What you're saying is not realistic.

apetrovic 2021-08-18 12:50:37 +0000 UTC [ - ]

I find it very hard to believe that Apple will entrust the literal future of the iPhone (just imagine the amount of bad press if one false positive comes out of the review process and is reported to the authorities) to an "overworked, underpaid overseas subcontractor".

Apple is greedy, ruthless, tone-deaf machine, but I don't think they're stupid.

notriddle 2021-08-18 14:08:17 +0000 UTC [ - ]

That’s already how app reviews work.

apetrovic 2021-08-18 15:00:16 +0000 UTC [ - ]

I think rejecting one of a million apps and angering one developer is a bit different from reporting someone for child pornography/abuse, ruining their life and creating a global scandal that could put future iPhone sales in jeopardy.

read_if_gay_ 2021-08-18 19:00:43 +0000 UTC [ - ]

Barely anyone ever cared about privacy. Convenience, no matter how insignificant, always wins. I will be surprised if this is the last straw that finally gets the masses to care about privacy. In reality there might be a few scandals but at some point no one will care anymore and Apple will remain popular.

short_sells_poo 2021-08-18 12:09:53 +0000 UTC [ - ]

Beyond that, are we seriously going to ignore the fact that eventually this system will share the private photos of someone's naked kid with some random subcontractor? How is that even remotely OK? Or that it will find and share actual CSAM with said subcontractor?

FabHK 2021-08-18 12:18:36 +0000 UTC [ - ]

It only shares "visual derivatives" of images whose NeuralHash match the NeuralHash of known CSAM (either by being the same image ("perceptually") or a collision).

short_sells_poo 2021-08-18 14:32:16 +0000 UTC [ - ]

That to me just sounds like weasel words to avoid having to say that it shares images. Let's not beat about the bush, the "visual derivative" has to be good enough to identify what's going on in it for the manual confirmation.

Are you actually arguing in good faith here at all? Because I can't see how a "visual derivative" that's nevertheless good enough for manual confirmation is any better than the source image?

cyanite 2021-08-18 20:52:37 +0000 UTC [ - ]

> Are you actually arguing in good faith here at all?

Are you? Because you just seemed to claim that it could match against innocent pictures of your naked children, but this tells me that you don’t understand that this system looks for known pictures, not for something that looks like naked children.

Edit: if you do, apologies, but then I’d say that Apple has suggested that it’s a low resolution version of the picture. This should be contrasted with server side scanning, where the server accesses all pictures fully.

stetrain 2021-08-18 13:28:06 +0000 UTC [ - ]

This system does not use ML to find new CSAM images. It only checks for ones already in a known database. Your pictures of kids in the bathtub are not on the list.

What is shown to the reviewer is a "visual derivative", which hasn't been clearly defined. A thumbnail image? Something with a censored section? We don't really know.

short_sells_poo 2021-08-18 14:30:17 +0000 UTC [ - ]

Yes I'm aware that it checks against a known database but clearly there can be collisions. So eventually it will share someone's private images.

bengale 2021-08-18 15:25:55 +0000 UTC [ - ]

It would need to have 30 collisions before anything even took place, which realistically isn't going to happen.

brokenmachine 2021-08-19 02:12:00 +0000 UTC [ - ]

Most cameras nowadays can easily take a burst of 30 visually very similar images in a second.

Lots of people leave their cameras in burst mode.

robertoandred 2021-08-18 14:39:22 +0000 UTC [ - ]

Or rather, it will share some gray blob apparently.

short_sells_poo 2021-08-18 15:23:39 +0000 UTC [ - ]

What is the point of sharing a gray blob? How is that going to prove anything?

stetrain 2021-08-18 16:04:50 +0000 UTC [ - ]

I think the gray blob refers to engineered hash collisions, like the example in the article link.

csande17 2021-08-18 10:18:51 +0000 UTC [ - ]

> It also requires getting a hold of actual blacklisted hashes, which I doubt anyone has, unless they have actual child pornography.

I've never personally seen or looked for any images like this, but if they weren't already proliferating online and widely available to criminals, why would we need to build an elaborate client-side scanning system to detect and report people who have copies of them?

user-the-name 2021-08-18 10:22:44 +0000 UTC [ - ]

Do you have access to darknet child pornography trading networks? I don't.

tzs 2021-08-18 13:04:01 +0000 UTC [ - ]

OK, now I'm curious. How does darknet stuff actually work?

Is there some darknet search engine that you access over Tor and type whatever illegal thing you seek such as "child porn" or "stolen credit cards" or "heroin" or "money laundering" into and it points you to providers of those illegal goods and services reachable over some reasonably secure channel?

Or is it one of those things where someone already using a particular child porn or stolen card or money laundering or heroin selling site has to put you in contact with them?

quenix 2021-08-18 13:36:21 +0000 UTC [ - ]

Hmm. Tor onion addresses are generally super long and have a large random element to them, such as

facebookwkhpilnemxj7asaniu7vnjj.onion

where the "Facebook" part is there for convenience and may likely not be present for actual illegal addresses. As such, you probably won't be able to find a CP website unless you actively brute force the entire onion address space.

I don't know how the sharing actually happens, but I don't think a real search engine for the dark net exists. There are websites that try (Ahmia, I think?) but they are super limited in scope.

I suspect addresses for drugs and CP are shared person-to-person.
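
Rough arithmetic behind that "facebook..." prefix, assuming each candidate keypair yields an effectively random base32 address: matching an n-character prefix takes about 32^n keypairs on average, which is why short vanity prefixes are routine and long ones (let alone enumerating the whole space) are not.

    # Back-of-the-envelope only; the keygen rate is an assumption, not a benchmark.
    def expected_tries(prefix_len):
        # onion addresses are base32, so each fixed character is a 1-in-32 event
        return 32 ** prefix_len

    for prefix in ("hn", "face", "facebook"):
        tries = expected_tries(len(prefix))
        days = tries / 1_000_000 / 86_400   # at an assumed 1M candidate keys/second
        print(f"{prefix!r}: ~{tries:.3e} keypairs, ~{days:.3e} days at 1M keys/s")

Eight fixed characters is already around 10^12 keypairs, which is why such prefixes get bragged about and why enumerating full addresses is not a practical way to find hidden services.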

colinmhayes 2021-08-18 13:40:54 +0000 UTC [ - ]

dark.fail indexes darknet markets, but all the ones I've seen ban CSAM

nabakin 2021-08-18 10:21:17 +0000 UTC [ - ]

Agreed. This attack still has consequences for people but they will not run into any legal trouble as far as I'm aware.

atrus 2021-08-18 10:17:44 +0000 UTC [ - ]

I doubt someone who is able/wanting to attack such a system would be unwilling to own some of the material themselves.

hughrr 2021-08-18 10:18:21 +0000 UTC [ - ]

That assumes that the human review process is competent, your own images aren’t poisoned in some way (consider your own kids in the bath with some noise added) etc.

In the meantime they lock your account, which means your entire digital life stops dead until their review process is done.

No way do I accept any of this.

Also the hashes are on the device I understand and it’s not going to be that difficult to extract them.

FabHK 2021-08-18 10:27:45 +0000 UTC [ - ]

You:

> the hashes are on the device I understand and it’s not going to be that difficult to extract them.

----

A Review of the Cryptography Behind the Apple PSI System, Benny Pinkas, Dept. of Computer Science, Bar-Ilan University:

> Do users learn the CSAM database? No user receives any CSAM photo, not even in encrypted form. Users receive a data structure of blinded fingerprints of photos in the CSAM database. Users cannot recover these fingerprints and therefore cannot use them to identify which photos are in the CSAM database.

https://www.apple.com/child-safety/pdf/Technical_Assessment_...

----

The Apple PSI Protocol, Mihir Bellare, Department of Computer Science and Engineering University of California, San Diego:

> users do not learn the contents of the CSAM database.

https://www.apple.com/child-safety/pdf/Technical_Assessment_...

----

A Concrete-Security Analysis of the Apple PSI Protocol, Mihir Bellare, Department of Computer Science and Engineering University of California, San Diego:

> the database of CSAM photos should not be made public or become known to the user. Apple has found a way to detect and report CSAM offenders while respecting these privacy constraints.

https://www.apple.com/child-safety/pdf/Alternative_Security_...
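
----

To make the "blinded fingerprints" idea above concrete, here is a toy sketch of the textbook DH-style blinding trick used by PSI protocols in this family. This is emphatically not Apple's actual construction (which uses elliptic curves, cuckoo tables and threshold secret sharing on top); it only illustrates why holding the blinded values does not let a client test arbitrary images on its own.

    import hashlib
    import secrets

    P = 2**255 - 19                   # a prime modulus, for illustration only
    s = secrets.randbelow(P - 1) + 1  # the server's secret blinding exponent

    def h(data: bytes) -> int:
        # Hash a fingerprint into the group.
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

    def blind(fingerprint: bytes) -> int:
        # Only the party that knows s can compute this value.
        return pow(h(fingerprint), s, P)

    blinded_db = {blind(b"fingerprint-of-a-known-image")}   # what the client receives

    # The client cannot check b"some-other-image" against blinded_db by itself:
    # it would need s (or the server's cooperation) to blind its own fingerprints.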

hughrr 2021-08-18 10:42:04 +0000 UTC [ - ]

So fundamentally there’s a lot of mechanism around keeping secrets but the source material is available. Hmm.

https://youtu.be/eU2Or5rCN_Y

nyuszika7h 2021-08-18 10:46:49 +0000 UTC [ - ]

> In the meantime they lock your account, which means your entire digital life stops dead until their review process is done.

Do you have a source for this, or are you just making things up?

user-the-name 2021-08-18 10:21:57 +0000 UTC [ - ]

The human reviewer would be able to check against the exact image that generated the hash in the first place. Taking another completely unrelated image and perturbing it would be immediately obvious.

hughrr 2021-08-18 10:22:52 +0000 UTC [ - ]

So there’s an office somewhere with computers full of illegal child porn that people are staring at and comparing your photos to?

There’s some irony in that.

henrikeh 2021-08-18 10:30:00 +0000 UTC [ - ]

Yes. That is called NCMEC in the US and it is a core aspect of how this whole process works.

If you don't understand the details of this, I'd recommend this podcast episode, which sums it up and discusses the implications: https://atp.fm/443

hughrr 2021-08-18 10:39:11 +0000 UTC [ - ]

I am aware of the details. The Brass Eye episode is relevant here; the clip below shows exactly the competence issues in question.

https://youtu.be/_U-7L1tmBAo

et2o 2021-08-18 11:16:49 +0000 UTC [ - ]

I don't have much of an opinion here except that it is silly to write "I am aware of the details" when someone gives you a helpful explanation and you are obviously not aware of the details, as the question you had just asked betrays.

henrikeh 2021-08-18 10:49:29 +0000 UTC [ - ]

I’m sorry, what is your point? I legitimately don’t understand what you mean.

shmatt 2021-08-18 13:47:23 +0000 UTC [ - ]

The blacklisted hashes are openly given to thousands of companies by a foundation called the National Center for Missing & Exploited Children, in both PhotoDNA and MD5 form.

Yes, I can't give you an open link, but I imagine at least hundreds of thousands of people have access to download them, some companies with better security than others.

It only takes 1 guy to sell it to an NSO-type company, who would then use it to target attacks for $$$
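
For what it's worth, the MD5 variant of such a list is just a set of exact file hashes, so matching is a plain lookup and any re-encode of a file changes its hash. A minimal sketch (the hash value and file name below are placeholders):

    import hashlib

    def md5_of_file(path: str) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    blocklist = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder hash value
    print(md5_of_file("upload.jpg") in blocklist)      # exact-match check only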

user-the-name 2021-08-18 18:05:00 +0000 UTC [ - ]

Where would that $$$ come from? Like I said, the only attack possible is to overload the human reviewers, and even that is difficult to pull off except as a coordinated effort by lots of people.

Grustaf 2021-08-18 13:23:01 +0000 UTC [ - ]

Working against you?

This technology doesn't make it even an ounce easier for Apple, or some evil three letter agency, to spy on you. If they want to see if you're a Trump supporter, anti-vaxxer or whatever, they already have access to all your photos and emails, on device. They even have OCR and classification of your photos. An intern could add a spying function in an afternoon.

if OCR(photo[i]).contains("MAGA") { reportUser() }

argvargc 2021-08-18 14:48:08 +0000 UTC [ - ]

It didn't even last a week.

Introducing the people asking us to trust them with the responsibility of mass, automated crime accusations.

hannasanarion 2021-08-18 19:21:55 +0000 UTC [ - ]

Why do you think somebody will accuse you of a crime because you have a photo of a grey blob?

The real world isn't as stupid as the computer one. The justice system is not deterministic and automatic. Nobody is going to look at this grey blob and go "welp, we have no choice but to throw you in prison forever"

argvargc 2021-08-18 20:24:23 +0000 UTC [ - ]

The real world and the computer world intersect. This is precisely typified by what is being discussed, surely?

Apple is trying to automate and computerise a process that was not automated previously, apply it to a huge number of people, with disastrous potential consequences should their wonderful design be found lacking.

And within days, they have already utterly failed to provide one of their own self-stated and incredibly obvious requirements. What else could possibly go wrong?

Further, if you think that because the first preimage failure found (in days...) was a grey blob, all future cultivated collisions will only ever be grey blobs, well, good luck with that.

hannasanarion 2021-08-18 20:47:20 +0000 UTC [ - ]

The process of making accusations and arrests is not automatic.

The Apple system sends flagged photos to reviewers, and if the reviewers find them suspicious, they send them to the poor souls at NCMEC, who will compare the flagged photo with the original illegal photo that's supposedly a match, and inform law enforcement if they are in fact a match.

Nobody will get cops at their door because somebody sent them a grey blob image.

The process that's being "automated" is merely an automatic flag that initiates a several-step process of human review. Apple isn't rolling out robocops.

Seriously, what are the "disastrous consequences" that you envision? What is the sequence of events where a hash collision leads to any inconvenience whatsoever for a user?

argvargc 2021-08-19 08:06:01 +0000 UTC [ - ]

What part of "they already totally fucked up how their own process is supposed to work" don't you understand?

hannasanarion 2021-08-19 15:47:59 +0000 UTC [ - ]

What part of it is fucked up? They never promised that their hashing algorithm was uncollidable. The process is specifically designed to be tolerant of hash collisions (and in fact wouldn't work otherwise, because the system needs to ignore small differences between copies of the illegal images, like one-pixel edits or color temperature differences or photocopies).

There are multiple stages of human review and a high threshold for triggering the process, because collisions are inevitable. The fact that collisions exist doesn't impact the safety of the product whatsoever.
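
To illustrate that tolerance point: classic perceptual hashes are built so that tiny edits do not change the output. Here is a minimal sketch using the well-known difference-hash (dHash) technique, which is not Apple's NeuralHash but shows the same property, assuming Pillow is installed:

    from PIL import Image

    def dhash(path, hash_size=8):
        # Shrink to a (hash_size+1) x hash_size greyscale thumbnail; one-pixel
        # edits and slight colour shifts get averaged away by the resize.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
        px = list(img.getdata())
        bits = []
        for row in range(hash_size):
            for col in range(hash_size):
                left = px[row * (hash_size + 1) + col]
                right = px[row * (hash_size + 1) + col + 1]
                bits.append(1 if left > right else 0)
        return sum(b << i for i, b in enumerate(bits))

    # A copy with a one-pixel edit typically hashes identically, while two
    # unrelated photos differ in many bits:
    # print(bin(dhash("original.jpg") ^ dhash("one_pixel_edit.jpg")).count("1"))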

I ask again, what is the sequence of events where a hash collision from a benign image can lead to any consequence at all for the user?

smlss_sftwr 2021-08-18 16:50:51 +0000 UTC [ - ]

"it's ok, there's only a 1 in a trillion chance of this happening"

th0ma5 2021-08-18 17:05:40 +0000 UTC [ - ]

Just visiting this page would trigger a hit, right?

spoonjim 2021-08-18 20:24:17 +0000 UTC [ - ]

Any idea why Apple had to be too cute by half and not just scan these files server-side? Or why they don't change their plan and do just that, given the backlash?

cyanite 2021-08-18 20:32:51 +0000 UTC [ - ]

I hope not, since doing the scans like this offers much more privacy for the user, even when the user is too ignorant (sorry) to realize it.

This becomes evident from reading and understanding the technical description.

spoonjim 2021-08-18 21:31:10 +0000 UTC [ - ]

I'm not concerned with this system -- I'm concerned with the things that governments will now start to force Apple (and any computer vendor) to start scanning for on the client side. Once the client side scanning Rubicon is crossed, countries will want vendors to scan everything (not just iCloud folders) for everything (not just CSAM).

shadowgovt 2021-08-18 11:36:39 +0000 UTC [ - ]

Does this algorithm work for the reverse goal (i.e. can content that would trip the CSAM hash be perturbed enough to avoid it without compromising quality of the underlying image)?

To my mind, that's far more disquieting than the risk of someone staging an elaborate attack on an enemy's device.

zug_zug 2021-08-18 13:25:56 +0000 UTC [ - ]

Why is this downvoted? This checks out: a problem with local scanning is that the model ships on the device, so users can inspect it and learn exactly which perturbations trick the system.
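
For context, the model in question has already been exported to ONNX by the community, so anyone can run it locally and see exactly what the scanner sees. A rough sketch of what evaluating it looks like; the file names, the 128-byte header skip and the 96x128 projection matrix are assumptions taken from that community tooling, not from official Apple documentation:

    import numpy as np
    import onnxruntime
    from PIL import Image

    def neuralhash(image_path, model_path="model.onnx",
                   seed_path="neuralhash_128x96_seed1.dat"):
        session = onnxruntime.InferenceSession(model_path)
        # The seed file reportedly holds a 96x128 projection matrix after a short header.
        seed = np.frombuffer(open(seed_path, "rb").read()[128:], dtype=np.float32)
        seed = seed.reshape(96, 128)

        img = Image.open(image_path).convert("RGB").resize((360, 360))
        arr = np.asarray(img).astype(np.float32) / 255.0 * 2.0 - 1.0   # scale to [-1, 1]
        arr = arr.transpose(2, 0, 1).reshape(1, 3, 360, 360)           # NCHW layout

        inputs = {session.get_inputs()[0].name: arr}
        embedding = session.run(None, inputs)[0].reshape(128)          # 128-d descriptor
        bits = (seed @ embedding) >= 0                                 # binarized projection
        return "".join("1" if b else "0" for b in bits)

    # Perturbing an image while keeping this output constant is exactly the
    # preimage game the linked collision demonstrates.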

ptidhomme 2021-08-18 09:56:28 +0000 UTC [ - ]

Yes, just like rape accusations. It doesn't matter that you prove it was false afterwards.

Edit: well, that was a nod to Assange, of course. It's probably not true in general. So yes, I mean false accusations.

dang 2021-08-18 18:45:00 +0000 UTC [ - ]

We detached this subthread from https://news.ycombinator.com/item?id=28219243.

reayn 2021-08-18 10:35:37 +0000 UTC [ - ]

Why is this getting downvoted? It's very true: just the accusation of committing such a crime (regardless of whether the person was acquitted or not) can easily ruin many facets of a person's life.

hosteur 2021-08-18 11:34:38 +0000 UTC [ - ]

Julian Assange and Jake Appelbaum being prime examples of this.

d33 2021-08-18 11:49:00 +0000 UTC [ - ]

In the case of Appelbaum, are there any solid reasons to believe that the accusations are untrue? For Assange, I think the victim admitted that the accusation was fabricated; isn't that the case?

coldtea 2021-08-18 12:17:26 +0000 UTC [ - ]

Accusations must be proven true; they don't have to be proven untrue by the accused.

And besides the "victim" in the latter case there was a whole lot of diplomatic pressure and political commotion to set him up, with carrots and sticks and the aid of friendly satellite states.

_jal 2021-08-18 12:36:33 +0000 UTC [ - ]

They must be proven true beyond a reasonable doubt to get a person in a funny robe to put them in jail.

Normal humans are not required to prove anything in order to think them.

coldtea 2021-08-18 12:45:21 +0000 UTC [ - ]

>Normal humans are not required to prove anything in order to think them.

No, but decent human beings are required to not think them true just because they're out there...

whimsicalism 2021-08-18 13:29:39 +0000 UTC [ - ]

Decent humans can have a lower standard for thinking someone did a crime than the criminal justice system does.

This is true whether you believe or disbelieve a rape claim, because if you disbelieve it you typically must believe the corollary that the victim was committing the crime of lying to the police.

JohnWhigham 2021-08-18 14:34:24 +0000 UTC [ - ]

Which is why false rape allegations are so fucking dangerous, and are not punished nearly as harshly as they should be.

Griffinsauce 2021-08-18 15:24:23 +0000 UTC [ - ]

*in the US

coldtea 2021-08-18 15:40:20 +0000 UTC [ - ]

I'm not in the US.

And even if it didn't hold in my country, it's still a tenet I think is fair.

So I didn't intend to invoke the legal requirements, but what should be the "right" thing (and what I think should also be legally mandated, even if it isn't).

xpe 2021-08-18 14:28:38 +0000 UTC [ - ]

> Accusations must be proven true, not untrue by the accused.

I respect this ethical claim; however, there is considerable variation in how legal systems operate in this domain.

enriquto 2021-08-18 12:12:52 +0000 UTC [ - ]

> solid reasons to believe that the accusations are untrue

this is not how the concept of accusation works

brigandish 2021-08-18 10:38:12 +0000 UTC [ - ]

I think you may have attracted fewer downvotes if the phrasing had been "Yes, just like false accusations of rape, it doesn't matter that you prove it was false afterwards."

I also think that those downvoting you might've applied the principle of charity and taken the best interpretation of what you've written, or at least asked.

sneak 2021-08-18 10:39:22 +0000 UTC [ - ]

All criminal accusations, including true ones, should be treated as false until the accused is proven guilty.

This is a fundamental tenet of human rights in western, small-l liberal free societies.

The fact that this is controversial these days is literally insane to me.

The consequences of throwing this fundamental system out the window is that you get the sort of nonsense that happened with Assange, where he was literally never even charged yet completely and thoroughly discredited due to headlines containing the word "rape" when no such thing ever happened.

(If you have been misled to believe otherwise, I encourage you to read the direct statements of the women involved.)

DebtDeflation 2021-08-18 13:09:04 +0000 UTC [ - ]

The real problem occurs long before a verdict is rendered in a court of law. Someone gets arrested for a misdemeanor, they spend a couple of nights in jail until bail can be arranged, in the meantime they got fired for missing work, the car they were driving was towed/impounded and will cost them hundreds of dollars to retrieve, they missed their rent payment and their landlord has begun eviction proceedings, etc. The fact that they are found not guilty when their trial happens a year later is irrelevant. Not to mention every future potential employer Googles their name and the first link that comes up is their arrest record. The "presumption of innocence" is meaningless when so much damage is done long before a trial even starts.

Ensorceled 2021-08-18 12:10:30 +0000 UTC [ - ]

> All criminal accusations, including true ones, should be treated as false until the accused is proven guilty.

No, they need to be treated as unproven, a very critical difference.

Just to be clear, witness testimony, including testimony FROM THE VICTIM, is evidence of the crime. Just for some reason, in rape cases, we go all wonky with this principle.

technothrasher 2021-08-18 12:24:31 +0000 UTC [ - ]

> No, they need to be treated as unproven, a very critical difference.

Right. I have a jar of gumballs and tell you I think there is an even number of gumballs in it. Do you believe me? If you don't, does that mean you believe there are in fact an odd number of gumballs in it? No, you believe neither that there is an odd nor an even number, because you just don't know. For a criminal accusation, your initial belief should not be guilty or innocent; it should be "I just don't know".

Ensorceled 2021-08-18 12:44:59 +0000 UTC [ - ]

Yeah, "it is unverified or untested or unproven" seems to be a very hard concept for "black and white" brains to process. People keep following up "it is unproven" with "well then it must be false".

EDIT: judging by the downvotes, they are also making the leap that the original gumball counter must be lying ...

ralfn 2021-08-18 12:28:06 +0000 UTC [ - ]

1. By the public at large it should not be treated in any regard, false or not.

2. The state is the only authorized monopoly of violence and they should treat unproven and untrue as identical, and the only place where that decision is made is in a courtroom.

3. The 'believe the victims' activists however are rightfully (IMHO) suggesting to break principle #2 because there is institutional and systemic supression of the rule of law being applied properly. I would consider this civil disobedience.

4. The state needs to get its act together and administer the law properly. Only through reform can it regain the legitimacy that is required for #2.

5. Those reforms should focus on racism, sexism and classism. It should focus on what part of the law is too ambiguous for either law enforcement and the judiciary branch.

6. This might include actually blinding the court, i.e. offering only verifiable facts that are admissible and cannot be used as proxies for race, gender or socioeconomic position.

If you don't think the legal system has a problem, ask yourself why every lawyer recommends the defendant wear a suit to court. How could it matter if the process were truly without bias?

bscphil 2021-08-18 13:40:29 +0000 UTC [ - ]

> The state is the only authorized monopoly of violence and they should treat unproven and untrue as identical, and the only place where that decision is made is in a courtroom.

This is completely wrong. If the state treated all accusations as untrue prior to conviction, it would not send armed men to haul you to prison, bar you from release unless you can bail yourself out (or sometimes not at all), or have a prosecutor charge you with a crime.

This is no petty distinction. In order to function in its judiciary role, the state in fact must distinguish between plausibly true accusations and not plausible accusations, and must treat certain plausibly true accusations as "unproven" but not untrue, and take steps to make sure the accused does not flee into another country or commit further crimes.

ralfn 2021-08-18 18:59:15 +0000 UTC [ - ]

You are right. There is a lot more to it.

Although arresting and jailing someone is not justice being administered, it facilitates justice and fact finding, and it is itself indeed an act of violence.

brigandish 2021-08-19 03:09:59 +0000 UTC [ - ]

The legal system may have problems, but those are matters of implementation. The fundamental principles (presumption of innocence, that no harm should come to the innocent, that all are equal before the law, and that truth comes before all other concerns) are fine principles, and I would say that problems with the legal system mostly stem from straying from them. Perhaps you disagree?

whimsicalism 2021-08-18 13:32:26 +0000 UTC [ - ]

Sorry, I don't see how point 3 in your "syllogism" is remotely true. If 10 employees of yours come to you and accuse another employee of sexual harassment and you fire that employee, at no point were you acting as the state or using violence.

Sebb767 2021-08-18 12:18:47 +0000 UTC [ - ]

Testimony is seen as wonky in all kinds of cases. The problem with testimony about rape, however, is that, in those cases, supporting evidence is rarer than usual and even when it exists, proving it was non-consensual at the time is even harder (especially when relationships or affairs are involved).

Ensorceled 2021-08-18 12:20:53 +0000 UTC [ - ]

Yes, rape is more difficult to prove than a number of other crimes. That is no reason to jump to a default "assume the accusation is false".

ziml77 2021-08-18 12:44:43 +0000 UTC [ - ]

The reason to default to that is because a principle of our legal system is that people are innocent until proven guilty. On top of that proving someone guilty requires going beyond reasonable doubt.

Ensorceled 2021-08-18 12:50:22 +0000 UTC [ - ]

The principle is the defendant is innocent until proven guilty, NOT that the accuser is making a false accusation.

There is a reason the verdict is "not guilty" instead of "innocent". After a not guilty verdict the legal system still does not assume the accuser was making a false accusation.

Sebb767 2021-08-18 12:56:03 +0000 UTC [ - ]

> There is a reason the verdict is "not guilty" instead of "innocent".

Both verdicts exist, actually; the latter one is just far rarer (since it is both harder to prove and usually not what is argued about).

Ensorceled 2021-08-18 16:38:18 +0000 UTC [ - ]

Yes, and it's almost always either a case of mistaken identity or emerging malfeasance by witnesses or the prosecution.

Ironically, a provable case of false rape accusation might result in this verdict.

Sebb767 2021-08-18 12:54:54 +0000 UTC [ - ]

> That is no reason to jump to a default "assume the accusation is false".

That's also not what I was trying to say. I only explained why testimony is far more doubted in rape cases than in most other cases. In general, I agree with you that accusations should be treated as unproven, so neither false nor true; we have a justice system to give a final verdict.

deelowe 2021-08-18 12:52:16 +0000 UTC [ - ]

Depends. Are certain groups disproportionally more likely to benefit from said accusations?

Sorry, it needs to be said. It's easier to take one side of an argument when it's less likely to affect you personally.

whimsicalism 2021-08-18 13:34:11 +0000 UTC [ - ]

Yes, generally powerful men disproportionately benefit from accusations that a rape claim is false.

No doubt some are false, but I have not seen any evidence suggesting the majority are false.

Ensorceled 2021-08-18 12:54:46 +0000 UTC [ - ]

What are you actually claiming here? Your stance is unclear.

deelowe 2021-08-18 14:54:42 +0000 UTC [ - ]

That this is a contentious topic with biases on both sides. One group clearly benefits from any accusations being squashed before ever being investigated. Another benefits from never having to prove their accusations hold merit.

This is why due process is important.

Ensorceled 2021-08-18 16:39:44 +0000 UTC [ - ]

I see this a lot in these discussions. What do you think Trump or Cosby's accusers gained by their accusations holding merit?

brigandish 2021-08-19 03:03:09 +0000 UTC [ - ]

All criminal accusations need to be treated as unproven, of course, could not agree more. Also, all those accused of a crime (and those not!) need to be treated as innocent until they are proven guilty.

kergonath 2021-08-18 14:14:34 +0000 UTC [ - ]

This is bonkers. We are innocent until proven guilty, not "unproven".

Witness testimony on its own is circumstantial evidence in general. Witnesses are very unreliable.

Ensorceled 2021-08-18 18:56:26 +0000 UTC [ - ]

We are innocent until proven guilty, I never disputed that.

I'm saying the accusation, the witness testimony, is NOT assumed false, the accusation is simply unproven.

If the jury/judge believes the testimony, they may convict on that testimony. Then the accused is proven guilty.

That the accusation started off being false and then magically became true when the jury believed it is the bonkers belief here.

The fact that witness testimony is unreliable is a big part of WHY the accused is presumed innocent.

weimerica 2021-08-18 12:26:28 +0000 UTC [ - ]

> Just to be clear, witness testimony, including testimony FROM THE VICTIM, is evidence of the crime. Just for some reason, in rape cases, we go all wonky with this principle.

People lie, people have memory issues. Complicating things is the lovely issue modernity has brought, of two adults getting intoxicated and fornicating followed by regret and rape accusations sometime later. Then we have weaponized accusations - just like we have keyboard warriors making false police reports to have their opposition receive a SWAT team visit, we have people that will make false accusations to get revenge after a perceived slight.

What is wonky to me is the people who hear an accusation and treat it like the Gospel, destroying the accused and depriving them of any possible justice. This is what is truly, absolutely, criminally insane.

skhr0680 2021-08-18 12:24:36 +0000 UTC [ - ]

> including testimony FROM THE VICTIM, is evidence of the crime. Just for some reason, in rape cases, we go all wonky with this principle.

In a criminal trial:

Victim: This person did it

Defendant: No I didn't

Not guilty

Ensorceled 2021-08-18 12:47:21 +0000 UTC [ - ]

That is not how it works; a large number of people are convicted on eyewitness testimony from a single accuser.

elzbardico 2021-08-18 12:55:11 +0000 UTC [ - ]

Yes, they shouldn't.

chitowneats 2021-08-18 13:39:28 +0000 UTC [ - ]

The West has largely abandoned "small-l" liberalism. Not yet formally in many cases, but de facto, in terms of government actions and the views of the people.

cma 2021-08-18 11:00:17 +0000 UTC [ - ]

That's the burden for incarceration, not for making a personal best guess about guilt. Even far below a best guess, would you send your kid with a camp counselor you were 20% sure was guilty of something like that?

Sebb767 2021-08-18 12:20:36 +0000 UTC [ - ]

I probably would be doubtful even at 1%, sure. But there's no denying that this ruins the lives of 99 innocent people for every actual criminal.

Ensorceled 2021-08-18 12:05:35 +0000 UTC [ - ]

> I also think that those downvoting you might've applied the principle of charity and taken the best interpretation of what you've written or at least ask.

There are far too many people who assume/assert false accusations of rape are the norm for the principle of charity to apply here.

brigandish 2021-08-19 04:36:14 +0000 UTC [ - ]

Is it the norm here? If not, then I suggest the principle should apply here.

OJFord 2021-08-18 11:06:31 +0000 UTC [ - ]

Vouched, because as I understand the comment, people must not be reading past 'rape' and just gut-flagging with completely the wrong impression.

account42 2021-08-18 11:36:46 +0000 UTC [ - ]

> people must not be reading past 'rape' and just gut-flagging with completely the wrong impression.

Which is pretty ironic, considering that kind of reaction is exactly what the comment is about.

craftinator 2021-08-18 12:38:10 +0000 UTC [ - ]

Build it and they will come.

da_big_ghey 2021-08-18 19:45:27 +0000 UTC [ - ]

Indeed yes; for instance, a statement like "President denies allegations of beating his wife" is damning all by itself.

thulle 2021-08-18 11:52:10 +0000 UTC [ - ]

Huh? In what way have the accusations against Assange been disproven? Is this one of those "if he didn't jump someone at random it isn't rape" arguments?

pthrowaway9000 2021-08-18 14:03:31 +0000 UTC [ - ]

Told ya. Sent from my S5, running LineageOS since 2015, now with microG.

sandworm101 2021-08-18 13:12:53 +0000 UTC [ - ]

Every bad day for Apple is another great day for Linux.

Smug mode activated.

farmerstan 2021-08-18 14:06:58 +0000 UTC [ - ]

Apple should be embarrassed. The "one in a trillion" hash was cracked in a week. Everyone associated with that should be fired.

rootsudo 2021-08-18 13:21:30 +0000 UTC [ - ]

And everyone was saying how impossible this was.

Not an expert, but I do enjoy how quickly this was picked apart and disproven.

Ajedi32 2021-08-18 13:30:23 +0000 UTC [ - ]

Impossible (or nearly so) for a cryptographically secure hash function, not a perceptual hash. When the stories on this first broke it wasn't entirely clear which type of hash Apple was using for their CSAM detection. (I assume now we know they're using NeuralHash?)
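
The difference is easy to see in one line: with a cryptographic hash, any change at all produces an unrelated digest, which is exactly what a perceptual hash is designed to avoid. A quick sketch (the byte strings are stand-ins for image data):

    import hashlib

    print(hashlib.sha256(b"cat photo bytes").hexdigest())
    print(hashlib.sha256(b"cat photo bytez").hexdigest())  # one byte changed, digest completely different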

guywhocodes 2021-08-18 13:26:43 +0000 UTC [ - ]

I was on the fence on this one because of Alex Stamos's statements on it. I did expect to be able to trust him, and by extension the Stanford Internet Observatory, more.

Traubenfuchs 2021-08-18 13:41:43 +0000 UTC [ - ]

Nothing about this ever made any sense. Any pedophile with an IQ over room temperature has already deleted their iCloud at this point, with all the media buzz about it.

wisienkas 2021-08-18 11:26:08 +0000 UTC [ - ]

I don't know too much about the NeuralHash model, but why are they using an MD5 hash? MD5 is known to have many collisions. We don't use MD5 with private/public keys for the same reason.

enedil 2021-08-18 11:43:02 +0000 UTC [ - ]

We don't use MD5 for private/public keys because MD5 is a hashing algorithm, completely unrelated to encryption. Also, what are your reasons to believe that MD5 has been used there?

Grollicus 2021-08-18 11:58:21 +0000 UTC [ - ]

Hashes are generally a part of the signature generation used with certificates. See for example "What role do hashes play in TLS/SSL certificate validation?" -> https://security.stackexchange.com/questions/67512/what-role...

In certificates, MD5 (and SHA-1) was used for quite some time after it was known to be weak; I suspect OP was thinking of that.

This article seems to give a good summary what happened with sha1, mentions md5 in passing and links the related chromium issue: https://konklone.com/post/why-google-is-hurrying-the-web-to-...
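
As a small aside, the signature hash in question is easy to inspect yourself. A sketch assuming the third-party Python `cryptography` package and a local PEM file (the file name is a placeholder):

    from cryptography import x509

    with open("example_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # 'md5' or 'sha1' here would be the weakness the articles above describe.
    print(cert.signature_hash_algorithm.name)   # e.g. 'sha256'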

Retr0id 2021-08-18 12:04:05 +0000 UTC [ - ]

What makes you think they're using MD5 anywhere?

Even if they were, it wouldn't matter, because NeuralHash is non-cryptographic by design.

wisienkas 2021-08-19 08:34:22 +0000 UTC [ - ]

I looked at the link and at the output from the algorithm for the two images, which looked like an MD5 hash to me. So, from that :)

Retr0id 2021-08-19 14:30:27 +0000 UTC [ - ]

But it isn't MD5. It's not even the same length as an MD5 hash. I am confused by your reasoning.

zaptheimpaler 2021-08-18 10:27:52 +0000 UTC [ - ]

This is so overblown. Scanning images for CSAM seems to be a requirement that Facebook, Google, Instagram and Snap already follow [1]:

> To put this in perspective, in 2019 Facebook reported 65 million instances of CSAM on its platform, according to The New York Times. Google reported 3.5 million photos and videos, while Twitter and Snap reported “more than 100,000,” Apple, on the other hand, reported 3,000 photos.

ALL of those services are already scanning all your photos server side, which implies complete access to the photos for other purposes.

Apple went above and beyond what anyone else does and moved the scanning client side! I can't believe they are getting shit for this. They are doing MORE than anyone else to protect your privacy while still complying with federal laws. If you object to this, you should object to all cloud-based photo storage services, and to the federal laws. Apple not only made it more private, but also transparently announced what they do, and they are getting shit for it.

[1] https://www.engadget.com/apple-child-safety-csam-detection-e...

kmbfjr 2021-08-18 11:08:34 +0000 UTC [ - ]

It is not overblown. I refuse to be a perpetual suspect in the possession and transfer of child pornography. Having these checks on my phone is just a short hop to checking everything the camera sees.

I worked with a guy who was convicted of this very crime. Do you know what the FBI installs on his computer and phone? This kind of monitoring.

I am not going to be treated like I am on parole.

raxxorrax 2021-08-18 15:23:36 +0000 UTC [ - ]

To form a collision here you basically reuse part of the original image. What it can do is reveal something about the workings of their black-box algorithm, since the collision image strips out the parts that presumably have no influence on the resulting hash. There will likely be more, though. As long as you don't have the original images, it will be hard to create "fake" ones.

WA 2021-08-18 10:40:35 +0000 UTC [ - ]

Is it so hard to understand? Some people don’t use cloud storage for precisely the reason that the photos are not encrypted.

Now they can’t even use their phone for storing photos.

The thing with "only when iCloud is enabled" is only for now. It's trivial to make scanning all photos the default in a future version.

nyuszika7h 2021-08-18 10:53:28 +0000 UTC [ - ]

That would require a software update and would definitely not go unnoticed. Would you rather they implement scanning on server side and never be able to enable end-to-end encryption for iCloud Photos? I imagine that might be the end goal, otherwise I don't see why they wouldn't have just done it on server side.

Sure, this system still has the potential to be abused, but if I had to choose between "end-to-end encrypted with local scanning before upload, which can maybe be abused but has a higher chance of being noticed if it is, and can be disabled with a jailbreak if you're really paranoid" and "not end-to-end encrypted, Apple can inspect my whole photo library whenever they desire, which can go completely unnoticed due to gag orders forcing them to hand data over to governments, not even requiring a software update, and you are completely powerless to do anything about it except for not using iCloud Photos at all", I would definitely choose the former.

sodality2 2021-08-18 12:07:40 +0000 UTC [ - ]

False dichotomy. You should UNEQUIVOCALLY not be FORCED to choose one of those. You shouldn't choose one of those at all. I do not need to be treated like a criminal.

Facebook et al scan server side because they have liability. Tell me why Apple thinks they need to scan locally? They don't have liability there, which is even worse: it means they're doing it for other reasons. Plenty of people read that as "wow, Apple is so kindhearted they did this without the business incentive." And you know what? They didn't. They might even have good intentions. But, as the saying goes, the road to hell is paved with good intentions.

WA 2021-08-18 11:15:53 +0000 UTC [ - ]

Strawman.

I choose no cloud storage plus a device that doesn't scan my data.

You argue for the former, which isn’t implemented, vs the latter, which I assume is reality anyways for all cloud storage providers.

We can have this discussion if Apple implements the former. If this was the goal, they would’ve announced it like you suggested.

Hackbraten 2021-08-18 11:23:38 +0000 UTC [ - ]

> and can be disabled with a jailbreak if you're really paranoid

Except that if I’m really paranoid, I’m not going to skip security updates. And those updates will probably render known jailbreak exploits useless.

CrimsonRain 2021-08-18 12:59:25 +0000 UTC [ - ]

You're wrong. Scanning is NOT required by US law. The law says that IF [1] you know about CSAM, then you MUST report it. If you don't know, you don't have to report. And it is not your legal duty to scan; the law even has privacy sections [2]. But if you DO scan, then you must report. So this whole "CSAM scanning is mandatory" framing is bullshit. Companies could stop scanning if they wanted to.

[1] 18 US Code 2258A

[2] 18 US Code 2258A -> F.1, F.2, F.3