Hash collision in Apple NeuralHash model
topynate 2021-08-18 12:58:51 +0000 UTC [ - ]
This is useful for two purposes I can think of. One, you can randomize all the vectors on all of your images. Two, you can make problems for others by giving them harmless-looking images that have been cooked to give particular hashes. I'm not sure how bad those problems would be – at some point a police officer does have to look at the image in order to get probable cause. Perhaps it could lead to your Apple account being suspended, however.
Kalium 2021-08-18 13:08:16 +0000 UTC [ - ]
Of course, this is assuming everything works as intended and they don't find anything else they can use to charge you with something as they search your home. If you smoke cannabis and happen to live in the wrong state, you're now in several more kinds of trouble.
topynate 2021-08-18 13:16:51 +0000 UTC [ - ]
falcolas 2021-08-18 14:48:26 +0000 UTC [ - ]
They will instead contact the police and say "Person X has Y images that are on list Z," and let the police get a warrant based off that information and execute it to check for actual CSAM.
topynate 2021-08-18 15:01:23 +0000 UTC [ - ]
gowld 2021-08-18 15:27:43 +0000 UTC [ - ]
iCloud is encrypted, so that warrant is useless.
They need to unlock and search the device.
ggreer 2021-08-18 20:43:18 +0000 UTC [ - ]
> iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent.
1. https://www.apple.com/legal/privacy/law-enforcement-guidelin...
topynate 2021-08-18 15:34:29 +0000 UTC [ - ]
Kalium 2021-08-18 16:17:53 +0000 UTC [ - ]
heavyset_go 2021-08-18 20:45:10 +0000 UTC [ - ]
They regularly are, and they regularly give up customer data in order to comply with subpoenas[1]. They give up customer data in response to government requests for 150,000 users/accounts a year[1].
bbatsell 2021-08-18 19:02:33 +0000 UTC [ - ]
sudosysgen 2021-08-18 19:33:34 +0000 UTC [ - ]
HALtheWise 2021-08-18 19:40:06 +0000 UTC [ - ]
heavyset_go 2021-08-18 20:42:47 +0000 UTC [ - ]
alfiedotwtf 2021-08-18 19:23:55 +0000 UTC [ - ]
__blockcipher__ 2021-08-18 21:05:00 +0000 UTC [ - ]
Man that seems horrible. So you just have to trust the description is accurate? You’d think there’d at least be a “private viewing room” type thing (I get the obvious concern of not giving them a file to take home)
Kalium 2021-08-18 13:27:12 +0000 UTC [ - ]
That said, I'm not willing to say it won't happen. There are too many law enforcement entities of wildly varying levels of professionalism, staffing, and technical sophistication. Someone innocent, somewhere, is likely to have a multi-year legal drama because their local PD got an email from Apple.
And we haven't even gotten to subjects like how some LEOs will happily plant evidence once they decide you're guilty.
cirrus3 2021-08-18 23:50:06 +0000 UTC [ - ]
Just stop.
teachrdan 2021-08-18 23:52:46 +0000 UTC [ - ]
hda2 2021-08-19 02:57:57 +0000 UTC [ - ]
Operyl 2021-08-19 10:53:07 +0000 UTC [ - ]
e_proxus 2021-08-18 18:59:43 +0000 UTC [ - ]
You could take actual CSAM, check if it matches the hashes and keep modifying the material until it doesn’t (adding borders, watermarking, changing dimensions etc.). Then just save it as usual without any risk.
cyanite 2021-08-18 19:51:05 +0000 UTC [ - ]
tambourine_man 2021-08-18 13:18:25 +0000 UTC [ - ]
Whatever you say about Apple, they are an extremely well oiled communication machine. Every C-level phrase has a well thought out message to deliver.
This interview was a train wreck. Joanna kept asking Craig to please explain it in simple terms, and he stayed hesitant and inarticulate. It was so bad that she had to produce infographics to fill the communication void left by Apple.
They usually do their best to “take control” of the narrative. They were clearly caught way off guard here. And that's revealing.
cwizou 2021-08-18 13:59:09 +0000 UTC [ - ]
And because of this they calibrated their communication completely wrong, focusing on the on-device part as being more private, using the same line of thinking they use for putting Siri on device.
And the follow up was an uncoordinated mess that didn't help either (as you rightly pointed out with Craig's interview). In the Neuenschwander interview [1], he stated this:
> The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.
This still has me confused, here's my understanding so far (please feel free to correct me):
- Apple is shipping a neural network trained on the dataset that generates NeuralHashes
- Apple also ships (where?) a "blinded" (by an elliptic curve algo) lookup table that maps (all possible?!) NeuralHashes to a key
- This key is used to encrypt the NeuralHash and the derivative image (that would be used by the manual review) and this bundle is called the voucher
- A final check is done on the server, using the secret behind the elliptic curve blinding to unblind the NeuralHash and check it server side against the known database
- If 30 or more are detected, decrypt all vouchers and send the derivative images to manual review.
I think I'm missing something regarding the blinded table as I don't see what it brings to the table in that scenario, apart from adding a complex key generation for the vouchers. If that table only contained the NeuralHashes of known CSAM images as keys, that would be as good as giving the list to people knowing the model is easily extracted. And if it's not a table lookup but just a cryptographic function, I don't see where the blinded table is coming from in Apple's documentation [2].
Assuming the above is correct, I'm paradoxically feeling a tiny bit better about the system on a technical level (I still think doing anything client side is a very bad precedent), but what a mess they put themselves into.
Had they done this purely server side (and to be frank there's not much difference, the significant part seems to be done server side) this would have been a complete non-event.
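To check my own reading, here's a toy model of the blinded-table step as I picture it. This is just my guess based on [2]: plain modular exponentiation stands in for the real elliptic curve blinding, all names are invented, and it only illustrates the matching logic, not the actual security properties.

    # Toy model of the blinded-table matching (my reading of [2]; NOT Apple's real code).
    import hashlib, secrets

    P = 2**127 - 1                      # toy prime (stand-in for the EC group)

    def hash_to_group(h: bytes) -> int:
        return pow(3, int.from_bytes(hashlib.sha256(h).digest(), "big"), P)

    def bucket(h: bytes, n: int) -> int:
        return int.from_bytes(h, "big") % n

    # Server setup: blind every known hash with secret s, ship only the blinded table.
    s = secrets.randbelow(P - 2) + 1
    known = [hashlib.sha256(f"known-image-{i}".encode()).digest()[:12] for i in range(1000)]
    n = len(known)
    blinded_table = {}
    for h in known:
        blinded_table.setdefault(bucket(h, n), pow(hash_to_group(h), s, P))  # toy: ignore bucket collisions

    # Client: for image hash h0, derive a voucher key from the blinded entry, plus a
    # blinded "query" the server can work with. The client can't tell if it matched.
    def client_side(h0: bytes):
        r = secrets.randbelow(P - 2) + 1
        entry = blinded_table.get(bucket(h0, n), 1)   # toy fallback for empty buckets
        key = hashlib.sha256(pow(entry, r, P).to_bytes(16, "big")).digest()
        query = pow(hash_to_group(h0), r, P)
        return key, query          # key encrypts the voucher; query is uploaded with it

    # Server: re-derive the key from the query using s. It only equals the client's
    # key when h0 really was the hash stored in that bucket; otherwise decryption fails.
    def server_side(query: int) -> bytes:
        return hashlib.sha256(pow(query, s, P).to_bytes(16, "big")).digest()

    k, q = client_side(known[0])   # a "matching" image
    assert k == server_side(q)

If that's roughly right, the point of the blinded table is that the device can derive a voucher key without ever learning whether its image matched, since only the server knows the blinding secret s.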
[1] : https://daringfireball.net/linked/2021/08/11/panzarino-neuen...
[2] This is my understanding based on the repository and what's written on pages 6-7 : https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
brokenmachine 2021-08-19 00:45:47 +0000 UTC [ - ]
That's a *huge* amount of crypto mumbo-jumbo for a system to scan your data on your own device and send it to the authorities.
They must really care about children!!
If only this system was in place while Trump, Jeffrey Epstein, and Prince Andrew were raping children, surely none of that would have happened!! /s
HatchedLake721 2021-08-18 15:09:18 +0000 UTC [ - ]
brandon272 2021-08-18 15:36:34 +0000 UTC [ - ]
This was painful to watch.
FabHK 2021-08-18 10:20:30 +0000 UTC [ - ]
This is what would need to happen:
1. Attacker generates images that collide with known CSAM material in the database (the NeuralHashes of which, unless I'm mistaken, are not available)
2. Attacker sends that to innocent person
3. Innocent person accepts and stores the picture
4. Actually, need to run step 1-3 at least 30 times
5. Innocent person has iCloud syncing enabled
6. Apple's CSAM detection then flags these, and they're manually reviewed
7. Apple reviewer confuses a featureless blob of gray with CSAM material, several times
Note that other cloud providers have been scanning uploaded photos for years. What has changed wrt targeted attacks against innocent people?
fsloth 2021-08-18 11:01:10 +0000 UTC [ - ]
Just insert a known CSAM image on target's device. Done.
I presume this could be used against a rival political party to ruin their reputation - insert a bunch of CSAM images on their devices. "Party X is revealed as an abuse ring". This goes oh-so-very-nicely with Qanon conspiracy theories, which don't even require any evidence to propagate widely.
Wait for Apple to find the images. When police investigation is opened, make it very public. Start a social media campaign at the same time.
It's enough to fabricate evidence only for a while - the public perception of the individual or the group will be perpetually altered, even if it surfaces later that the CSAM material was inserted by a hostile third party.
You have to think about what nation state entities that are now clients of Pegasus and so on could do with this. Not how safe the individual component is.
kreitje 2021-08-18 11:28:50 +0000 UTC [ - ]
They are even in the same political party.
https://www.reddit.com/r/321/comments/jt32rs/fdle_report_bet...
JimBard2 2021-08-18 12:58:59 +0000 UTC [ - ]
JKCalhoun 2021-08-18 14:39:51 +0000 UTC [ - ]
Or maybe thirty. You have to surpass the threshold.
Also, if Twitter, Google, Microsoft are already deploying CSAM scanning in their services .... why are we not hearing about all the "swatting"?
faeriechangling 2021-08-19 01:34:48 +0000 UTC [ - ]
Now Apple is in this crappy situation where they can't claim their software is secure because it's open source and auditable (it isn't), and they also can't claim it's secure because it's closed source and they fixed the problems in some later version, because this entire debacle has likely destroyed all faith in their competence. If Apple is reduced to boasting "Trust us bro, your iPhone won't be exploited to get you SWATTED over CSAM anymore, we patched it", the big question is why Apple is voluntarily adding something to their devices where the failure mode is violent imprisonment and severe loss of reputation, when they are not completely competent?
This entire debacle reminds me of this video: https://www.youtube.com/watch?v=tVq1wgIN62E
romwell 2021-08-18 18:25:31 +0000 UTC [ - ]
>their services
>T H E I R S E R V I C E S
Because it's on their SERVICES, not on their user's DEVICES, for one.
Also, regardless of swatting, that's why we have an issue with Apple.
stephenr 2021-08-18 19:11:25 +0000 UTC [ - ]
How is that a meaningful difference for the stated end goals, one that can explain the lack of precedent?
croutonwagon 2021-08-18 21:26:25 +0000 UTC [ - ]
In this specific case yes. That is what is supposed to happen.
But Apple also sets the standard that this is just the beginning, not the end. They say as much on page 3 in bold, differentiated color ink
https://www.apple.com/child-safety/pdf/Expanded_Protections_...
And there’s nothing to stop them from scanning all images on a device. Or scanning all content for keywords or whatever. iCloud being used as a qualifier is a red herring to what this change is capable of.
Maybe someone shooting guns is now unacceptable; kids have been kicked out of schools for posting guns on Facebook or having them in their rooms on Zoom. What if it's kids shooting guns? There are so many possibilities for how this could be misused, abused, or even just an oopsie: sorry, I upended your life to solve a problem that is so very rare.
Add to that their messaging has been muddy at best. And it incited a flame war. A big part of that is iCloud is not a single thing. It's a service: it can sync and hold iMessages, it can sync backups, or in my case we have shared iCloud albums that we use to share images with family. Others are free to upload and share. In fact that's our only use of iCloud other than Find My. They say "iCloud Photos" as if that's just a single thing, but it's easy to extrapolate that to images in iMessages, backups, etc.
And the non profit that hosts this database is not publicly accountable. They have public employees on their payroll but really they can put whatever they want in that database. They have no accountability or public disclosure requirements.
So even I, back when their main page was like 3 articles, was a bit perturbed and put off. I'm not going to ditch my iPhone, mainly because it's work assigned, but I have been keeping a keen eye on what's happening and how it's happening, and will keep an eye out for the changes they are promising. I'm also going to guess those won't be nearly as high profile in the future.
cyanite 2021-08-18 19:58:15 +0000 UTC [ - ]
Effectively the same for Apple. It’s only when uploading the photo. Doing it on device means the server side gets less information.
short_sells_poo 2021-08-18 14:51:49 +0000 UTC [ - ]
sweetheart 2021-08-18 14:58:57 +0000 UTC [ - ]
short_sells_poo 2021-08-18 15:10:39 +0000 UTC [ - ]
sweetheart 2021-08-18 18:22:07 +0000 UTC [ - ]
Yeah, basically. It doesn't seem like people actually use CSAM to screw over innocent folks, so I don't think we need to worry about it. What Apple is doing doesn't really make that any easier, so it's either already a problem, or not a problem.
> And that making an existing situation even more widespread is also completely OK?
I don't know if I'd say any of this is "completely OK", as I don't think I've fully formed my opinion on this whole Apple CSAM debate, but I at least agree with OP that I don't think we need to suddenly worry about people weaponizing CSAM all of a sudden when it's been an option for years now with no real stories of anyone actually being victimized.
colejohnson66 2021-08-18 15:57:29 +0000 UTC [ - ]
hannasanarion 2021-08-18 18:44:03 +0000 UTC [ - ]
Okay, and then what? You think people will just look at this cute picture of a dog and be like "welp, the computer says it's a photo of child abuse, so we're taking you to jail anyway"?
russdpale 2021-08-18 17:29:06 +0000 UTC [ - ]
FabHK 2021-08-18 20:48:19 +0000 UTC [ - ]
Yes, but then the hash collision (topic of this article) is irrelevant.
sam0x17 2021-08-18 19:48:07 +0000 UTC [ - ]
You don't even need to go that far. You just need to generate 31 false positive images and send them to an innocent user.
cyanite 2021-08-18 19:55:58 +0000 UTC [ - ]
Also, “sending” them to a user isn’t enough; they need to be stored in the photo library, and iCloud Photo Library needs to be enabled.
cyanite 2021-08-18 19:54:28 +0000 UTC [ - ]
What do you mean “just”? That’s not usually very simple. It needs to go into the actual photo library. Also, you need like 30 of them inserted.
> I presume this could be used against a rival political party
Yes, but it’s not much different from now, since most cloud photo providers scan for this cloud-side. So that’s more an argument against scanning all together.
lowkey_ 2021-08-18 20:23:13 +0000 UTC [ - ]
The one failsafe would be Apple's manual reviewers, but we haven't heard much about that process yet.
tick_tock_tick 2021-08-18 20:18:33 +0000 UTC [ - ]
iMessage photos received are automatically synced so no. Finding 30 photos takes zero time at all on Tor. Hell, finding a .onion site that doesn't have CP randomly spammed is harder.....
bsql 2021-08-18 21:24:28 +0000 UTC [ - ]
y7 2021-08-18 10:53:09 +0000 UTC [ - ]
1. Obtain known CSAM that is likely in the database and generate its NeuralHash.
2. Use an image-scaling attack [2] together with adversarial collisions to generate a perturbed image such that its NeuralHash is in the database and its image derivative looks like CSAM.
A difference compared to server-side CSAM detection could be that they verify the entire image, and not just the image derivative, before notifying the authorities.
[1] https://news.ycombinator.com/item?id=28218922
[2] https://bdtechtalks.com/2020/08/03/machine-learning-adversar...
FabHK 2021-08-18 11:11:56 +0000 UTC [ - ]
But a conceivable novel avenue of attack would be to find an image that:
1. Does not look like CSAM to the innocent victim in the original
2. Does match known CSAM by NeuralHash
3. Does look like CSAM in the "visual derivative" reviewed by Apple, as you highlight.
avianlyric 2021-08-18 11:46:02 +0000 UTC [ - ]
1. Looks like an innocuous image, indeed even an image the victim is expecting to receive.
2. Downscales in such a way to produce a CSAM match.
3. Downscales for the derivative image to create actual CSAM for the review process.
Which is a pretty scary attack vector.
zepto 2021-08-18 15:11:30 +0000 UTC [ - ]
FabHK 2021-08-18 11:53:28 +0000 UTC [ - ]
avianlyric 2021-08-18 11:59:35 +0000 UTC [ - ]
At this point you’re already inside the guts of the justice system, and have been accused of distributing CSAM. Indeed depending on how diligent the prosecutor is, you might need to wait till trial before you can defend yourself.
At that point you’re life as you know is already fucked. The only thing proving your innocence (and the need to do so is itself a complete miscarriage of justice) will save you from is a prison sentence.
ta988 2021-08-18 13:04:25 +0000 UTC [ - ]
zepto 2021-08-18 15:13:39 +0000 UTC [ - ]
If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.
There are plenty of lawyers and money waiting to establish this.
IncRnd 2021-08-19 02:11:57 +0000 UTC [ - ]
> If the creation of fakes is as easy as claimed, Neuralhash evidence alone will become inadmissible.
GeckoEidechse 2021-08-18 14:05:13 +0000 UTC [ - ]
What if they are placed on the iDevice covertly? Say you want to remove politician X from office. If you have the money or influence you could use a tool like Pegasus (or whatever else there is out there that we don't know of) to place actual CSAM images on their iDevice. Preferably with an older timestamp so that it doesn't appear as the newest image on their timeline. iCloud notices unsynced images and syncs them while performing the CSAM check, it comes back positive with human review (because it was actual CSAM) and voilà, X gets the FBI knocking on their door. Even if X can somehow later prove innocence, by this time they'll likely have been removed from office over the allegations.
Thinking about it now it's probably even easier: Messaging apps like WhatsApp allow you to save received images directly to camera roll which then auto-syncs with iCloud (if enabled). So you can just blast 30+ (or whatever the requirement was) CSAM images to your victim while they are asleep and by the time they check their phone in the morning the images will already have been processed and an investigation started.
zepto 2021-08-18 15:14:36 +0000 UTC [ - ]
Sebb767 2021-08-18 12:28:35 +0000 UTC [ - ]
I doubt deleting them (assuming the victim sees them) works once the image has been scanned. And, given that this probably comes with a sufficient smear campaign, deleting them will be portrayed as evidence of guilt.
bostonsre 2021-08-18 13:53:52 +0000 UTC [ - ]
notriddle 2021-08-18 14:00:21 +0000 UTC [ - ]
Bjartr 2021-08-18 14:26:31 +0000 UTC [ - ]
mmcwilliams 2021-08-18 15:44:05 +0000 UTC [ - ]
st_goliath 2021-08-18 13:02:29 +0000 UTC [ - ]
This part IMO makes Apple itself the most likely "target", but for a different kind of attack.
Just wait until someone who wasn't supposed to, somewhere, somehow gets their hands on some of the actual hashes (IMO bound to happen eventually). Also remember that with Apple, we now have an oracle that can tell us whether an image matches. And with all the media attention around the issue, this might further incentivize people to try.
From that I can picture a chain of events something like this:
1. Somebody writes a script that generates pre-image collisions like in the post, but for actual hashes Apple uses.
2. The script ends up on the Internet. News reporting picks it up and it spreads around a little. This also means trolls get their hands on it.
3. Tons of colliding images are created by people all over the planet and sent around to even more people. Not for targeted attacks, but simply for the lulz.
4. Newer scripts show up eventually, e.g. for perturbing existing images or similar stunts. More news reporting follows, accelerating the effect and possibly also spreading perturbed images around themselves. Perturbed images (cat pictures, animated gifs, etc...) get uploaded to places like 9gag, reaching large audiences.
5. Repeat steps 1-4 until the Internet and the news grow bored with it.
During that entire process, potentially each of those images that ends up on an iDevice will have to be manually reviewed...
Grustaf 2021-08-18 13:15:51 +0000 UTC [ - ]
dylan604 2021-08-18 13:32:27 +0000 UTC [ - ]
Can anyone else think of times where Apple has admitted to something bad on their end and then reversed/walked away from whatever it was?
jtmarl1n 2021-08-18 14:50:34 +0000 UTC [ - ]
The next Macbook refresh will be interesting as there are rumors they are bringing back several I/O ports that were removed when switching to all USB-C.
I agree with your overall point, just some things that came to mind when reading your question.
dylan604 2021-08-18 15:46:41 +0000 UTC [ - ]
The trashcan MacPro is still the only mea culpa I am aware of them actually owning the mistake.
The Airpower whatever was never really released as a product though, so it is a strange category. New question: is the Airpower whatever the only product officially announced on the big stage to never be released?
Grustaf 2021-08-18 13:34:39 +0000 UTC [ - ]
dylan604 2021-08-18 13:58:17 +0000 UTC [ - ]
With Apple, nothing is "obvious".
Grustaf 2021-08-18 14:06:22 +0000 UTC [ - ]
But I am certain they will not want all the bad publicity that would come if the system was widely abused, if you worry about that. That much is actually "obvious", they are not stupid.
the8472 2021-08-18 10:25:16 +0000 UTC [ - ]
A better collision won't be a grey blob, it'll take some photoshopped and downscaled picture of a kid and massage the least significant bits until it is a collision.
sitharus 2021-08-18 10:42:55 +0000 UTC [ - ]
formerly_proven 2021-08-18 10:59:04 +0000 UTC [ - ]
saynay 2021-08-18 11:23:05 +0000 UTC [ - ]
Sebb767 2021-08-18 12:32:03 +0000 UTC [ - ]
hdjjhhvvhga 2021-08-18 11:07:18 +0000 UTC [ - ]
tpush 2021-08-18 13:39:14 +0000 UTC [ - ]
Just to clarify, Apple doesn't report anyone to the police. They report to NCMEC, who presumably contacts law enforcement.
dannyw 2021-08-18 17:13:49 +0000 UTC [ - ]
zimpenfish 2021-08-18 11:25:04 +0000 UTC [ - ]
No, the images are only decryptable after a threshold (which appears to be about 30) is breached. If you've received 30 pieces of CSAM from WhatsApp contacts without blocking them and/or stopping WhatsApp from automatically saving to iCloud, I gotta say, it's on you at that point.
basilgohar 2021-08-18 11:55:52 +0000 UTC [ - ]
zimpenfish 2021-08-18 12:05:11 +0000 UTC [ - ]
A fair point, yes, and one that somewhat scuppers my argument.
avianlyric 2021-08-18 11:55:09 +0000 UTC [ - ]
BrianOnHN 2021-08-18 11:43:51 +0000 UTC [ - ]
kleene_op 2021-08-18 13:07:08 +0000 UTC [ - ]
If you know the method used by Apple to scale down flagged images before they are sent for review, you can make it so the scaled down version of the image shows a different, potentially misleading one instead:
https://thume.ca/projects/2012/11/14/magic-png-files/
At the end of the day:
- You can trick the user into saving an innocent looking image
- You can trick Apple's NN hashing function with a purposely generated collision
- You can trick the reviewer with an explicit thumbnail
There is no limit to how devilish one can be.
avianlyric 2021-08-18 11:54:19 +0000 UTC [ - ]
In this scenario you could create an image that looks like anything, but where it’s visual derivative is CSAM material.
Currently iCloud isn’t encrypted, so Apple could just look at the original image. But in future is iCloud becomes encrypted, then the reporting will be don’t entirely based on the visual derivative.
Although Apple could change this by include a unique crypto key for each uploaded images within their inner safety voucher, allowing them to decrypt images that match for the review process.
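A rough sketch of what I mean by that, in case it's unclear. This is pure speculation on my part (standard envelope encryption, nothing from Apple's docs), using the Python `cryptography` package, with all names invented:

    # Sketch of the "per-image key inside the voucher" idea (my speculation only).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def make_voucher_payload(image_bytes: bytes, voucher_key: bytes) -> dict:
        # Encrypt the full-resolution image under a fresh per-image key.
        image_key = AESGCM.generate_key(bit_length=256)
        img_nonce = os.urandom(12)
        encrypted_image = AESGCM(image_key).encrypt(img_nonce, image_bytes, None)

        # Tuck that per-image key inside the inner voucher, which itself only
        # becomes decryptable server-side once the match threshold is crossed.
        inner_nonce = os.urandom(12)
        encrypted_inner = AESGCM(voucher_key).encrypt(inner_nonce, image_key, None)

        return {"encrypted_image": (img_nonce, encrypted_image),
                "inner_voucher": (inner_nonce, encrypted_inner)}

    # Review side (only reachable after enough vouchers decrypt): recover the
    # per-image key from the inner voucher, then decrypt the original image.
    def open_for_review(payload: dict, voucher_key: bytes) -> bytes:
        inner_nonce, encrypted_inner = payload["inner_voucher"]
        image_key = AESGCM(voucher_key).decrypt(inner_nonce, encrypted_inner, None)
        img_nonce, encrypted_image = payload["encrypted_image"]
        return AESGCM(image_key).decrypt(img_nonce, encrypted_image, None)

Whether Apple would actually want to retain that capability for encrypted iCloud is a separate question, of course.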
the8472 2021-08-18 10:49:59 +0000 UTC [ - ]
labcomputer 2021-08-18 15:42:22 +0000 UTC [ - ]
It occurs to me that compromising an already-hired reviewer (either through blackmail or bribery) or even just planting your own insider on the review team might not be that difficult.
In fact, if your threat model includes nation-state adversaries, it seems crazy not to consider compromised reviewers. How hard would it really be for the CIA or NSA to get a few of their (under cover) people on the review team?
Grustaf 2021-08-18 13:17:01 +0000 UTC [ - ]
spoonjim 2021-08-18 20:22:31 +0000 UTC [ - ]
brokenmachine 2021-08-19 01:12:58 +0000 UTC [ - ]
goohle 2021-08-18 12:02:56 +0000 UTC [ - ]
FabHK 2021-08-18 10:29:28 +0000 UTC [ - ]
michaelt 2021-08-18 10:38:18 +0000 UTC [ - ]
I'd be astonished if it wasn't possible to do the same thing here.
FabHK 2021-08-18 10:48:21 +0000 UTC [ - ]
But in the case of Apple's CSAM detection, the collision would first have to fool the victim into seeing an innocent picture and storing it (presumably, they would not accept and store actual CSAM [^]), then fool the NeuralHash into thinking it was CSAM (ok, maybe possible, though classifiers <> perceptual hash), then fool the human reviewer into also seeing CSAM (unlike the innocent victim).
[^] If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.
marcus_holmes 2021-08-18 12:22:49 +0000 UTC [ - ]
step 1 - As others have pointed out, there are plenty of ways of getting an image onto someone's phone without their explicit permission. WhatsApp (and I believe Messenger) do this by default; if someone sends you an image, it goes onto your phone and gets uploaded to iCloud.
step 2 - TFA proves that hash collision works, and fooling perceptual algorithms is already a known thing. This whole automatic screening process is known to be vulnerable already.
step 3 - Humans are harder to fool, but tech giants are not great at scaling human intervention; their tendency is to only use humans for exceptions because humans are expensive and unreliable. This is going to be a lowest-cost-bidder developing-country thing where the screeners are targeted on screening X images per hour, for a value of X that allows very little diligence. And the consequences of a false positive are probably going to be minimal - the screeners will be monitored for individual positive/negative rates, but that's about it. We've seen how this plays out for YouTube copyright claims, Google account cancellations, App store delistings, etc.
People's lives are going to be ruined because of this tech. I understand that children's lives are already being ruined because of abuse, but I don't see that this tech is going to reduce that problem. If anything it will increase it (because new pictures of child abuse won't be on the hash database).
labcomputer 2021-08-18 15:48:24 +0000 UTC [ - ]
Or just blackmail and/or bribe the reviewers. Presumably you could add some sort of 'watermark' that would be obvious to compromised reviewers. "There's $1000 in it for you if you click 'yes' any time you see this watermark. Be a shame if something happened to your mum."
faeriechangling 2021-08-19 01:45:42 +0000 UTC [ - ]
>If the premise is that the "innocent victim" would accept CSAM, then you might as well just send CSAM as an unscrupulous attacker.
This adds trojan horses embedded in .jpg files as an attack vector, which, while maybe not overly practical, means I could certainly imagine some malicious troll uploading "CSAM" to some porn site.
UncleMeat 2021-08-18 11:55:28 +0000 UTC [ - ]
nabakin 2021-08-18 10:38:10 +0000 UTC [ - ]
thinkingemote 2021-08-18 10:35:19 +0000 UTC [ - ]
Every single tech company is getting rid of manual human review and moving towards an AI-based approach. Human-ops they call it - they don't want their employees to be doing this harmful work, plus computers are cheaper and better at it.
We hear about failures of inhuman ops all the time on HN: people being banned, falsely accused, cancelled, accounts locked, credit denied. All because decisions which were once made by humans are now made by machine. This will happen eventually here too.
It's the very reason why they have the neuralhash model. To remove the human reviewer.
mannerheim 2021-08-18 10:30:22 +0000 UTC [ - ]
Just because the PoC used a meaningless blob doesn't mean that collisions have to be those. Plenty of examples of adversarial attacks on image recognition perturb real images to get the network to misidentify them, but to a human eye the image is unchanged.
nyuszika7h 2021-08-18 10:44:43 +0000 UTC [ - ]
mannerheim 2021-08-18 11:25:36 +0000 UTC [ - ]
Anyway, as for human reviewers, it depends on what the image being perturbed is. Computer repair employees have called the police on people who've had pictures of their children in the bath. My understanding is that Apple does not have the source images, only NCMEC, so Apple's employees wouldn't necessarily see that such a case is a false positive. One would hope that when it gets sent to NCMEC, their employees would compare it to the source image and see that it is a false positive, though.
int_19h 2021-08-19 00:54:52 +0000 UTC [ - ]
brokenmachine 2021-08-19 01:20:12 +0000 UTC [ - ]
jsdalton 2021-08-18 10:33:35 +0000 UTC [ - ]
gambiting 2021-08-18 11:12:15 +0000 UTC [ - ]
>> What has changed wrt targeted attacks against innocent people?
Anecdote: every single iphone user I know has iCloud sync enabled by default. Every single Android user I know doesn't have google photos sync enabled by default.
Nextgrid 2021-08-18 10:36:01 +0000 UTC [ - ]
Given the scanning is client-side wouldn't the client need a list of those hashes to check against? If so it's just a matter of time before those are extracted and used in these attacks.
brokenmachine 2021-08-19 01:22:16 +0000 UTC [ - ]
visarga 2021-08-18 20:01:21 +0000 UTC [ - ]
When reports come in the images would not match, so they need to intercept them before they are discarded by Apple, maybe by having a mole in the team. But it's so much easier than other ways to have an iOS platform scanner for any purpose. Just let them find the doctored images and add them to the database and recruit a person in the Apple team.
anonymousab 2021-08-18 13:28:17 +0000 UTC [ - ]
I find it hard to believe that anyone has faith in any purported manual review by a modern tech giant. Assume the worst and you'll still probably not go far enough.
cm2187 2021-08-18 10:33:22 +0000 UTC [ - ]
vermilingua 2021-08-18 10:41:29 +0000 UTC [ - ]
varajelle 2021-08-18 11:07:24 +0000 UTC [ - ]
philote 2021-08-18 12:58:53 +0000 UTC [ - ]
GeckoEidechse 2021-08-18 14:11:05 +0000 UTC [ - ]
johnla 2021-08-18 16:54:47 +0000 UTC [ - ]
If anything, this gives people a weapon against the scanner, as we can now bomb the system with false positives, rendering it impossible to use. I don't know enough about cryptography but I wonder if there are any ramifications of the hash being broken.
beiller 2021-08-18 19:29:49 +0000 UTC [ - ]
cirrus3 2021-08-18 23:52:33 +0000 UTC [ - ]
30 times.
30 times a human confused a blob with CSAM?
Johnny555 2021-08-18 15:31:42 +0000 UTC [ - ]
You can do steps 2-3 all in one step "Hey Bob, here's a zip file of those funny cat pictures I was telling you about. Some of the files got corrupted and are grayed out for some reason".
f3d46600-b66e 2021-08-18 14:26:08 +0000 UTC [ - ]
It's my understanding that many tech companies (Microsoft? Dropbox? Google? Apple? Other?) (and many people in those companies) have access to the CSAM database, which essentially makes it public.
cyanite 2021-08-18 20:09:50 +0000 UTC [ - ]
spicybright 2021-08-18 10:27:46 +0000 UTC [ - ]
FabHK 2021-08-18 10:32:01 +0000 UTC [ - ]
> If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.
mrits 2021-08-18 15:14:28 +0000 UTC [ - ]
If you didn't have Apple scanning your drive trying to find a new way for you to go to prison then it wouldn't be a problem.
FabHK 2021-08-18 20:41:09 +0000 UTC [ - ]
cyanite 2021-08-18 20:05:10 +0000 UTC [ - ]
Also, most other major cloud photo providers scan images server side, leading to the same effect (but with them accessing more data).
incrudible 2021-08-18 11:16:01 +0000 UTC [ - ]
People are getting nerd-sniped about hash collisions. It's completely irrelevant.
The real-world vector is that an attacker sends CSAM through one of the channels that will trigger a scan. Through iMessage, this should be possible in an unsolicited fashion (correct me if I'm wrong). Otherwise, it's possible through a hacked device. Of course there's plausible deniability here, but like with swatting, it's not a situation you want to be in.
UseStrict 2021-08-18 13:51:13 +0000 UTC [ - ]
deelowe 2021-08-18 13:28:38 +0000 UTC [ - ]
cyanite 2021-08-18 20:06:24 +0000 UTC [ - ]
Sure, but those don’t go into your photo library, so it won’t trigger any scanning. Presumably people wouldn’t actively save CSAM into their library.
aardvarkr 2021-08-18 12:41:04 +0000 UTC [ - ]
ziml77 2021-08-18 12:54:45 +0000 UTC [ - ]
spicybright 2021-08-18 11:19:39 +0000 UTC [ - ]
If you don't like someone (which happens very often in this line of work) you could potentially screw them over with this.
Jcowell 2021-08-18 20:49:59 +0000 UTC [ - ]
xucheng 2021-08-18 11:29:05 +0000 UTC [ - ]
Depending on how the secret sharing is used in Apple PSI, it may be possible that duplicating the same image 30 times would be enough.
dang 2021-08-18 18:47:31 +0000 UTC [ - ]
eptcyka 2021-08-18 11:23:42 +0000 UTC [ - ]
shadowgovt 2021-08-18 11:30:18 +0000 UTC [ - ]
eptcyka 2021-08-18 15:02:27 +0000 UTC [ - ]
soziawa 2021-08-18 10:29:42 +0000 UTC [ - ]
Is the process actually documented anywhere? Afaik they are just saying that they are verifying a match. This could of course just be a person looking at the hash itself.
FabHK 2021-08-18 10:35:44 +0000 UTC [ - ]
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
hughrr 2021-08-18 10:44:10 +0000 UTC [ - ]
halflings 2021-08-18 10:08:26 +0000 UTC [ - ]
The fact that you can randomly manipulate random noise until it matches the hash of an arbitrary image is not surprising. The real challenge is generating a real image that could be mistaken for CSAM at low res + is actually benign (or else just send CSAM directly) + matches the hash of real CSAM.
This is why SHAttered [1] was such a big deal, but daily random SHA collisions aren't.
lifthrasiir 2021-08-18 10:14:05 +0000 UTC [ - ]
(Added later:) I should note that the DoS attack is only possible with the preimage attack and not the second preimage attack as the issue seemingly suggests, because you need the original CSAM to perform the second preimage attack. But given the second preimage attack is this easy, I don't have any hope for the preimage resistance anyway.
(Added much later:) And I realized that Apple did think of this possibility and only stores blinded hashes in the device, so the preimage attack doesn't really work as is. But it seems that the hash output is only 96 bits long according to the repository, so this attack might still be possible albeit with much higher computational cost.
[1] To be fair, I don't think that Apple's claim of 1/1,000,000,000,000 false positive rate refers to that of the algorithm. Apple probably tweaked the threshold for manual checking to match that target rate, knowing NeuralHash's false positive rate under the normal circumstances. Of course we know that there is no such thing like the normal circumstances.
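To put a rough number on that footnote: with a per-image false positive rate p and a threshold t, the per-account false alarm rate is just a binomial tail. All inputs below are made up, since Apple hasn't published NeuralHash's real per-image rate:

    # Back-of-the-envelope for how a match threshold turns a so-so per-image false
    # positive rate into a tiny per-account one. Inputs are made up.
    from math import lgamma, log, exp

    def account_false_alarm(p_image: float, n_photos: int, threshold: int) -> float:
        """P(at least `threshold` false matches among n_photos independent photos).

        Binomial tail, summed in log space; truncated, since the tail is dominated
        by the first few terms when n_photos * p_image << threshold.
        """
        total = 0.0
        for k in range(threshold, min(threshold + 200, n_photos + 1)):
            log_term = (lgamma(n_photos + 1) - lgamma(k + 1) - lgamma(n_photos - k + 1)
                        + k * log(p_image) + (n_photos - k) * log(1 - p_image))
            total += exp(log_term)
        return total

    # e.g. a 1-in-a-million per-image rate, a 10,000-photo library, threshold 30:
    print(account_false_alarm(1e-6, 10_000, 30))   # vanishingly small

The point being that even a mediocre per-image rate becomes negligible per account once you require ~30 independent matches - assuming the matches really are independent, which adversarial images like the one in this post obviously break.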
shapefrog 2021-08-18 12:00:15 +0000 UTC [ - ]
Sounds pretty stupid to me to fill your phone with kiddie porn in protest, but you do you internet people.
mannerheim 2021-08-18 12:58:39 +0000 UTC [ - ]
robertoandred 2021-08-18 14:43:39 +0000 UTC [ - ]
mannerheim 2021-08-18 15:17:35 +0000 UTC [ - ]
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
robertoandred 2021-08-18 16:11:01 +0000 UTC [ - ]
mannerheim 2021-08-18 16:17:48 +0000 UTC [ - ]
robertoandred 2021-08-18 16:18:52 +0000 UTC [ - ]
fastball 2021-08-19 04:19:05 +0000 UTC [ - ]
So at some point you generate an image which triggers a subroutine and you know that image collides.
cyanite 2021-08-18 20:11:30 +0000 UTC [ - ]
HALtheWise 2021-08-18 19:53:46 +0000 UTC [ - ]
falcolas 2021-08-18 14:52:45 +0000 UTC [ - ]
robertoandred 2021-08-18 14:54:38 +0000 UTC [ - ]
lifthrasiir 2021-08-18 16:00:36 +0000 UTC [ - ]
- The device generates a secret X and divides it into X[1]...X[m] with the secret sharing algorithm. m is some large number and any k (but no less) copies out of X[i] are enough to reconstruct X.
- The device stores blinded hashes f(H[1])...f(H[n]). The function f itself is not known to the client.
- The image hash H0 is compressed with another function g to the range between 1 and n.
- The downscaled image data (for the human check) is encrypted with X and appended with (probably) random X[i].
- The result is then encrypted again with a key derived from f(H0) and sent to the server with an associated data g(H0).
- The server tries to decrypt it with a key derived from f(H[g(H0)]). This is only possible when H[g(H0)] = H0, i.e. H0 represents some known CSAM.
- You can only decrypt the second layer with at least k copies of X[i] then.
At this stage Apple can still learn the number of CSAM images even when it is less than k. The fix is described in an overly technical document and I can't exactly follow it, but supposedly the client can inject an appropriate amount of synthetic data where only the first layer can always be decrypted and the second layer is bogus (including the presumed X[i]).
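(For anyone unfamiliar with that threshold trick, here's a minimal Shamir-style sketch of the "any k of m shares reconstruct X" behaviour. Purely illustrative; Apple's actual construction certainly differs in its details.)

    # Minimal Shamir-style threshold secret sharing (illustration only).
    import secrets

    PRIME = 2**127 - 1   # field size; the real secret X would be an encryption key

    def make_shares(secret: int, k: int, m: int):
        """Split `secret` into m shares, any k of which reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def f(x):  # degree k-1 polynomial with f(0) = secret
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, m + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0; needs at least k distinct shares."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    X = secrets.randbelow(PRIME)
    shares = make_shares(X, k=30, m=1000)   # one share rides in each voucher
    assert reconstruct(shares[:30]) == X    # 30 matching vouchers: X recovered
    assert reconstruct(shares[:29]) != X    # 29 are (almost surely) not enough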
---
Assuming this scheme is correctly implemented, the only attack I can imagine is a timing attack. As I understand it, a malicious client can choose not to send the false data. This will affect the number of items that pass the first layer of encryption, so the client can possibly learn the number of actual matches by adjusting the amount of synthetic data, since the server can only proceed to the next step with at least k such items.
This attack seems technically possible, but is probably infeasible to perform (remember that we already need 2^95 oracle operations, which is only vaguely possible even on the local device). Maybe the technical report actually has a solution for this, but for now I can only guess.
falcolas 2021-08-18 16:41:59 +0000 UTC [ - ]
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
falcolas 2021-08-18 14:58:25 +0000 UTC [ - ]
robertoandred 2021-08-18 15:17:05 +0000 UTC [ - ]
In addition, the payload is protected at another layer by your user key. Only with enough hash matches can Apple put together the user decryption key and open the very innards of your image's payload containing the full hash and visual derivative.
falcolas 2021-08-18 15:27:00 +0000 UTC [ - ]
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
mannerheim 2021-08-18 15:02:40 +0000 UTC [ - ]
falcolas 2021-08-18 15:04:51 +0000 UTC [ - ]
Otherwise, they'd just keep doing it on the material that's actually uploaded.
mannerheim 2021-08-18 15:16:08 +0000 UTC [ - ]
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
cyanite 2021-08-18 20:14:07 +0000 UTC [ - ]
cyanite 2021-08-18 20:13:27 +0000 UTC [ - ]
Yes but as stated in the technical description, this match is against a blinded table, so the device doesn’t learn if it’s a match or not.
floatingatoll 2021-08-18 14:16:57 +0000 UTC [ - ]
falcolas 2021-08-18 14:53:49 +0000 UTC [ - ]
Granted, they most likely won't care, but it's a legitimate attack vector.
floatingatoll 2021-08-18 15:11:48 +0000 UTC [ - ]
Attempted entrapment and abuse of computing systems, which is an uncomfortable way to phrase the WhatsApp scenario, would be quite sufficient cause for a discovery warrant to have WhatsApp reveal the sender’s identity to Apple. Doesn’t mean they’d be found guilty, but WhatsApp will fold a lot sooner than Apple, especially if the warrant is sealed by the court to prevent the sender from deleting any CSAM in their possession.
A hacker would say that’s all contrived nonsense and anyways it’s just SWATting, that’s no big deal. A judge would say that’s a reasonable balance of protecting the sender from being dragged through the mud in the press before being indicted and permitting the abused party (Apple) to pursue a conviction and damages.
I am not your lawyer, this is not legal advice, etc.
Majromax 2021-08-18 12:45:59 +0000 UTC [ - ]
It is, actually. Remember that hashes are supposed to be many-bit digests of the original; it should take O(2^256) work to find a message with a chosen 256-bit hash and O(2^128) work to find a "birthday attack" collision. Finding any collision at all with NeuralHash so soon after its release is very surprising, suggesting the algorithm is not very strong.
SHAttered is a big deal because it is a fully working attack model, but the writing was on the wall for SHA-1 after the collisions were found in reduced-round variations of the hash. Attacks against an algorithm only get better with time, never worse.
Moreover, the break of NeuralHash may be even stronger than the SHAttered attack. The latter modifies two documents to produce a collision, but the NeuralHash collision here may be a preimage attack. It's not clear if the attacker crafted both images to produce the collision or just the second one.
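For scale, here are the textbook work factors for an ideal d-bit hash, applied to the 96-bit output size reported elsewhere in the thread (per the extracted model). None of this applies once you can run gradient descent against the model, which is exactly the point:

    # Ideal-hash work factors; the 96-bit figure is the output size reported upthread.
    from math import sqrt, pi, log10

    def work(bits: int):
        preimage = 2.0 ** bits                       # expected tries for a chosen hash
        birthday = sqrt(pi / 2) * 2.0 ** (bits / 2)  # expected tries for *any* collision
        return preimage, birthday

    for d in (96, 256):
        pre, bday = work(d)
        print(f"{d}-bit hash: ~10^{log10(pre):.0f} for a preimage, "
              f"~10^{log10(bday):.0f} for a random birthday collision")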
Ajedi32 2021-08-18 13:50:19 +0000 UTC [ - ]
It's not particularly surprising to me that a perceptual hash might also have collisions that don't look similar to the human eye, though if Apple ever claimed otherwise this counterexample is solid proof that they're wrong.
cyanite 2021-08-18 20:15:06 +0000 UTC [ - ]
rdli 2021-08-18 10:15:22 +0000 UTC [ - ]
rootusrootus 2021-08-18 14:31:11 +0000 UTC [ - ]
Not that it's going to happen, since it would also require NCMEC to think the images match, but whatever. Attack me! Attack me! I want to retire.
hda2 2021-08-19 02:41:08 +0000 UTC [ - ]
For now, sure. What happens when their money runs short? What about the other tech companies that will inevitably be forced to deploy this shit? Will they also have Apple's pretty deep pockets?
Blind faith in this system will not magically fix how flawed it is nor the abuse and harm it will allow. This is going to hurt a lot of innocent people.
brokenmachine 2021-08-19 03:49:31 +0000 UTC [ - ]
If you post your whatsapp address, I'm sure someone will oblige.
icelancer 2021-08-18 10:45:26 +0000 UTC [ - ]
Never? You sure that one or more human operators will never make this mistake, dooming someone's life / causing them immense pain?
shapefrog 2021-08-18 12:03:39 +0000 UTC [ - ]
falcolas 2021-08-18 14:56:59 +0000 UTC [ - ]
One example that sticks out in my mind is a pair of grandparents who photographed their grandchildren playing in the back yard. A photo tech flagged their photo, they were arrested, and it took their lawyer going through the hoops to get a review of the photo for the charges to be dropped.
shapefrog 2021-08-18 15:38:44 +0000 UTC [ - ]
If the photo was a grey blob and they had to go through a judicial review for someone to look at the photo and confirm 'yes that is a grey blob' then color me wrong.
falcolas 2021-08-18 15:43:05 +0000 UTC [ - ]
shapefrog 2021-08-18 15:54:11 +0000 UTC [ - ]
falcolas 2021-08-18 16:36:54 +0000 UTC [ - ]
They'll view visual hashes, look at descriptions, and so forth, but nobody from Apple will actually be looking at them, because then they are guilty of viewing and transmitting CSAM.
I noted in another comment, even the prosecutors and defense lawyers in the case typically only get a description of the content, they don't see it themselves.
shapefrog 2021-08-18 17:15:51 +0000 UTC [ - ]
Over the years there have been countless articles etc about how fucked up being a reviewer of content flagged at all the tech companies is. https://www.vice.com/en/article/a35xk5/facebook-moderators-a...
hannasanarion 2021-08-18 18:58:16 +0000 UTC [ - ]
It is not illegal to be an unwilling recipient of illegal material. If a package shows up at your door with a bomb, you're not gonna be thrown in jail for having a bomb.
hnfong 2021-08-18 19:11:55 +0000 UTC [ - ]
At the very least, you'd be one of the primary suspects, and if you somehow got a bad lawyer, all bets are off.
https://hongkongfp.com/2021/08/12/judges-criticise-hong-kong...
hannasanarion 2021-08-18 19:25:51 +0000 UTC [ - ]
What is the scenario where a grey blob gets on your phone that sets off CSAM alerts, an investigator looks at it and sees only a grey blob, and then still decides to alert the authorities even though it's just a grey blob, and the authorities still decide to arrest you even though it's just a grey blob, and the DA still decides to prosecute you even though it's just a grey blob, and a jury still decides to convict you, even though it's still just a grey blob?
You're the one who's off in theory-land imagining that every person in the entire justice system is just as stupid as this algorithm is.
int_19h 2021-08-19 00:56:44 +0000 UTC [ - ]
hannasanarion 2021-08-19 16:13:53 +0000 UTC [ - ]
On this topic, the Supreme Court has ruled in Dickerson v US that, in all cases, to avoid First Amendment conflicts, all child pornography statutes must be interpreted with at least a "reckless disregard" standard.
Here is a typical criminal definition, from Minnesota, where a defendant recently tried to argue that the statute was strict liability and therefore unconstitutional, and that argument was rejected by the courts because it is clearly written to require knowledge and intent:
> Subd. 4. Possession prohibited. (a) A person who possesses a pornographic work or a computer disk or computer or other electronic, magnetic, or optical storage system ․ containing a pornographic work, knowing or with reason to know its content and character, is guilty of a felony․
notRobot 2021-08-18 12:45:51 +0000 UTC [ - ]
bscphil 2021-08-18 13:55:31 +0000 UTC [ - ]
It only becomes public knowledge if law enforcement then chooses to charge you - and if all that happens on the basis of an obvious adversarial net image, the result is a publicity shitshow for Apple and you become a civil rights hero after your lawyer (even an underpaid overworked public defender should be able to handle this one) demonstrates this.
As others have stated in this thread, I think the real failure case is not someone's life getting ruined by claims of CSAM possession somehow resulting from a bad hash match, but the fact that planted material (or sent via message) can now easily ruin your life because it gets automatically reported; you can't simply delete it and move on any more.
rootusrootus 2021-08-18 14:34:33 +0000 UTC [ - ]
But not really comparable, IMO. You won't even know you got investigated until after the original images have been shipped off to NCMEC for verification.
shapefrog 2021-08-18 13:18:47 +0000 UTC [ - ]
dannyw 2021-08-18 16:13:50 +0000 UTC [ - ]
shapefrog 2021-08-18 16:23:52 +0000 UTC [ - ]
What if the legal porn of a 21 year old that triggered the collision match looked really really really close? So close that a human can not distinguish between the image of a 12 year old being raped that they have in their database and your image? Well then you might have a problem, legal and otherwise.
> defence lawyers are not allowed to look at alleged CSAM material in court right
I know this is not true in many countries, but cant speak for your country.
dannyw 2021-08-18 16:42:19 +0000 UTC [ - ]
I'm not talking about images of rape here. I'm taking about images that you'd see on a regular porn site, of adults and their body parts.
You are also aware that CSAM covers anywhere from 0 to 17.99 years of age, and the legal obligation to report exists equally for the whole spectrum?
So let's say I download a close up pussy collection of 31 images of what I believe to be consenting 20 year olds, and what are consenting 20 year olds.
But they are actually planted by an attacker (let's say an oppressive regime who doesn't like me) and perturbed to match CSAM, that is, pussy close ups of 17 year olds. They are all just pussy pics. They will look the same.
Should I go to jail?
Do I have a non zero chance of going to jail? Yes.
shapefrog 2021-08-18 17:27:08 +0000 UTC [ - ]
Without getting into the metaphysics of what is an image, at that point, you basically have a large collection of child porn.
Your hypothetical oppressive regime has gone to a lot of trouble planting evidence that isn't illegal on your device. It would be much more effective to just put actual child porn on your device, which you would need to have to conduct the attack in the first place.
cyanite 2021-08-18 20:16:54 +0000 UTC [ - ]
I doubt images that look quite generic will make it into those hash sets, though.
icelancer 2021-08-18 21:46:14 +0000 UTC [ - ]
This wasn't the question I asked.
shapefrog 2021-08-18 23:08:11 +0000 UTC [ - ]
Yes, I can say 100% that no human operator will ever classify a grey image as a child being raped. Happy to put money on it.
meowster 2021-08-18 23:58:20 +0000 UTC [ - ]
When dealing with a monotonous task that the operator is probably getting PTSD from, I think the chance is greater than 0%.
Articles about content moderators and PTSD:
https://www.businessinsider.com/youtube-content-moderators-a...
https://www.bbc.com/news/technology-52642633
https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...
SXX 2021-08-18 14:35:56 +0000 UTC [ - ]
Why do you have the idea that the image has to be benign? Almost everyone watches porn, and it will be so much easier to find collisions by manipulating actual porn images which are not CSAM.
Also this way you're more likely to trigger a false positive from Apple staff, since they aren't supposed to know what actual CSAM looks like.
bo1024 2021-08-18 12:26:04 +0000 UTC [ - ]
Strongly disagree. (1) The primary feature of any decent hash function is that this should not happen. (2) Any preimage attack opens the way for further manipulations like you describe.
brokensegue 2021-08-18 14:37:02 +0000 UTC [ - ]
bo1024 2021-08-18 14:47:04 +0000 UTC [ - ]
But hashing is used in many places that could be vulnerable to an attack, so I think the distinction is blurry. People used MD5 for lots of things but are moving away for this reason, even though they're not in cryptographic settings.
nabakin 2021-08-18 10:27:58 +0000 UTC [ - ]
Also, generating images that look the same as the original and yet produce a different hash.
varispeed 2021-08-18 11:29:57 +0000 UTC [ - ]
The reviewer, likely on a minimum wage, will report images just in case. Nobody would like to be dragged through the mud because they didn't report something they thought was innocent.
Edd314159 2021-08-18 11:46:48 +0000 UTC [ - ]
* Is this a problem with Apple's CSAM discriminator engine or with the fact that it's happening on-device?
* Would this attack not be possible if scanning was instead happening in the cloud, using the same model?
* Are other services (Google Photos, Facebook, etc.) that store photos in the cloud not doing something similar to uploaded photos, with models that may be similarly vulnerable to this attack?
I know that an argument against on-device scanning is that people don't like to feel like the device that they own is acting against them - like it's snitching on them. I can understand and actually sympathise with that argument, it feels wrong.
But we have known for a long time that computer vision can be fooled with adversarial images. What is special about this particular example? Is it only because it's specifically tricking the Apple CSAM system, which is currently a hotly-debated topic, or is there something particularly bad here, something that is not true with other CSAM "detectors"?
I genuinely don't know enough about this subject to comment with anything other than questions.
bo1024 2021-08-18 12:32:18 +0000 UTC [ - ]
A devastating scenario for such a system is if an attacker knows how to look at a hash and generate some image that matches the hash, allowing them to trigger false positives any time. That appears to be what we are witnessing.
Edd314159 2021-08-18 12:56:35 +0000 UTC [ - ]
This is my understanding too. But is this not also true for other (cloud-based) CSAM scanning systems? Why is Apple's special in this regard?
nonbirithm 2021-08-18 16:14:56 +0000 UTC [ - ]
Apple could have saved themselves so much backlash and not have caused the outrage to be focused exclusively on them if they hadn't tried to be novel with their method of hashing, and had just announced that they were about to do exactly what all the other tech companies had already been doing for years - server side scanning.
Apple would still be accused of walking back on its claims of protecting users' privacy, but for a different reason - by trying to conform. Instead of wasting all the debate on how Apple and only Apple is violating everyone's privacy with its on-device scanning mechanism, which was without precedent, this could have been an educational experience for many people about how little privacy is valued in the cloud in general, no matter who you choose to give your data to, because there is precedent for such privacy violations that take place on the server.
Apple could have been just one of the companies in a long line of others whose data management policies would have received significant renewed attention as a result of this. Instead, everyone is focused on criticizing Apple.
There is a significant problem with people's perception of "privacy" in tech if merely moving the scan on-device causes this much backlash while those same people stayed silent during the times that Google and Facebook and the rest adopted the very same technique on the server in the past decade. Maybe if Apple had done the same, they would have been able to get away with it.
cyanite 2021-08-18 20:20:05 +0000 UTC [ - ]
floatingatoll 2021-08-18 14:35:25 +0000 UTC [ - ]
rollinggoron 2021-08-18 14:17:27 +0000 UTC [ - ]
cyanite 2021-08-18 20:18:28 +0000 UTC [ - ]
fortenforge 2021-08-18 18:18:31 +0000 UTC [ - ]
magpi3 2021-08-18 12:39:01 +0000 UTC [ - ]
bo1024 2021-08-18 14:50:57 +0000 UTC [ - ]
Koffiepoeder 2021-08-18 12:45:58 +0000 UTC [ - ]
endisneigh 2021-08-18 13:24:24 +0000 UTC [ - ]
kemayo 2021-08-18 13:58:16 +0000 UTC [ - ]
This is a smart thing to disable, even outside this recent discussion of CSAM.
https://faq.whatsapp.com/android/how-to-stop-saving-whatsapp...
endisneigh 2021-08-18 14:23:05 +0000 UTC [ - ]
kemayo 2021-08-18 14:25:49 +0000 UTC [ - ]
Edit: Though, to be fair, the specific hash-collision scenario would be that someone could send you something that doesn't look like CSAM and so you wouldn't reflexively delete it.
endisneigh 2021-08-18 14:28:13 +0000 UTC [ - ]
Personally I don’t really see the issue.
MisterSandman 2021-08-18 14:57:39 +0000 UTC [ - ]
All you have to be is accused of CP for your life to be destroyed. It doesn't matter if you did it or not.
endisneigh 2021-08-18 17:20:35 +0000 UTC [ - ]
If the government wants to get you they don’t need this Apple scanning tech, or anything at all really
kemayo 2021-08-18 14:31:11 +0000 UTC [ - ]
Kliment 2021-08-18 20:35:56 +0000 UTC [ - ]
gitgud 2021-08-19 00:59:45 +0000 UTC [ - ]
I think this is blowing up because it's cathartic to see a technology you disagree with get undermined and basically broken by the community...
vesinisa 2021-08-18 10:16:48 +0000 UTC [ - ]
notquitehuman 2021-08-18 13:56:41 +0000 UTC [ - ]
nonbirithm 2021-08-18 19:39:07 +0000 UTC [ - ]
Even if it is flawed, in a few years Apple will just release a new version or a different technique that works more effectively for its stated purpose. Other companies will silently improve the server-side scanning techniques they already use after the public outcry started by Apple blows over.
So long as companies feel the need to protect themselves from liability for hosting certain kinds of data, be it criminal or political or moral or anything else, there are no societal checks in place to stop them from doing so.
This is not a technological issue like so many people are trying to frame it as; it is a policy issue.
rootusrootus 2021-08-18 14:38:31 +0000 UTC [ - ]
Croftengea 2021-08-18 13:20:42 +0000 UTC [ - ]
SXX 2021-08-18 14:59:52 +0000 UTC [ - ]
Do people who believe in a benevolent Apple understand that CSAM doesn't all have some big red "CHILD PORN" sign on it? No demonic feel included. Like any porn, we can suppose that many such images might not even have any faces, or literally anything that makes them different from images of an 18-year-old.
When you think about it, brute-forcing collisions out of actual porn or some jailbait images doesn't sound that impossible. All you need is a lot of totally legal porn and some compute power. We have both in abundance.
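A sketch of what that brute-force search might look like, assuming `neural_hash(path)` is a hypothetical wrapper around whatever extracted hash model you have and the corpus is ordinary lawful imagery (both are assumptions here, not anything from the thread):

    # Scan a large image corpus for near-collisions with a set of target hashes.
    from pathlib import Path

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def find_near_collisions(corpus_dir, target_hashes_hex, neural_hash, max_distance=8):
        """Yield (path, target_hex, distance) for images within `max_distance` bits
        of any target (96-bit hashes assumed)."""
        targets = [int(t, 16) for t in target_hashes_hex]
        for path in Path(corpus_dir).rglob("*.jpg"):
            h = int(neural_hash(path), 16)
            for t in targets:
                d = hamming(h, t)
                if d <= max_distance:
                    yield path, format(t, "024x"), d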
dannyw 2021-08-18 16:16:42 +0000 UTC [ - ]
SXX 2021-08-18 18:51:32 +0000 UTC [ - ]
int_19h 2021-08-19 00:59:14 +0000 UTC [ - ]
UncleMeat 2021-08-18 11:52:10 +0000 UTC [ - ]
If you can get rooting malware on the target device then you could
1. Produce actual CSAM rather than a hash collision
2. Produce lots of it
3. Sync it with Google Photos
This attack has been available for many years and does not need convoluted steps like hash collisions if you have the means to control somebody's phone with a RAT.
UseStrict 2021-08-18 13:56:24 +0000 UTC [ - ]
On their cloud services (Apple, Google, MS, etc) they can only get to content that a user has implicitly agreed to share (whether or not they understand the ramifications is another conversation). On your device, there's no more separation.
UncleMeat 2021-08-18 14:33:38 +0000 UTC [ - ]
cyanite 2021-08-18 20:23:18 +0000 UTC [ - ]
If you don’t trust Apple to not lie, then the entire discussion becomes a bit moot, I think, since then they could pretty much do anything, with or without this system.
grishka 2021-08-18 14:17:11 +0000 UTC [ - ]
UncleMeat 2021-08-18 14:35:14 +0000 UTC [ - ]
hannasanarion 2021-08-18 19:02:15 +0000 UTC [ - ]
cyanite 2021-08-18 20:24:05 +0000 UTC [ - ]
dang 2021-08-18 18:48:55 +0000 UTC [ - ]
short_sells_poo 2021-08-18 12:08:22 +0000 UTC [ - ]
Ignoring the whataboutism in your question, we know that privacy, once lost, is practically impossible to get back. Once the genie is out of the bottle and Apple is doing on-device scanning, what's to stop 3-letter agencies and governments around the world from demanding ever more access? Because that's exactly what's going to happen. "Oh it's not a big deal, we just need slightly more access than we have now."
10 years down the line, we'd have people's phones datamining every piece of info they have and silently reporting to an unknowable set of entities. All in the name of fighting crime.
There needs to be pushback on this crap. Every single one of these attempts must be met with absolute and unconditional refusal. Mere inaction means things will inevitably and invariably get worse over time.
UncleMeat 2021-08-18 12:41:18 +0000 UTC [ - ]
1. There can be false positives or other mechanisms for innocent people to get flagged.
2. It is bad to do this sort of check on the local disk.
The discussion at hand started as entirely #1. But now you've swapped to #2, talking about government spying on local files. It makes it very difficult to have a conversation because any pushback against arguments made for one point is assumed to be pushback against arguments made for the other point.
stetrain 2021-08-18 13:52:15 +0000 UTC [ - ]
I am mostly convinced based on the technical details that have (slowly) come out from Apple that they have made this system sufficiently inconvenient to use as a direct surveillance system by a malicious government.
Yes a government could secretly order Apple to make changes to the system, but they could also order them to just give them all of your iCloud photos and backups, or send data directly from the Camera or Messages app, or any number of things that would be easier and more useful for the government. If you don't trust your government don't use a mobile device on a public cell network or store your data on servers.
But all of that said there is a still a line being crossed on principle and precedent for scanning images locally on device and then reporting out when something "bad" is found.
Apple thinks this is more private than just directly rifling through your photos in iCloud, but I can draw a line between what is on my device and what I send to Apple's servers and be comfortable with that.
short_sells_poo 2021-08-18 14:40:14 +0000 UTC [ - ]
And please do not bring in whataboutist arguments like "but other cloud providers are already doing it".
I'm honestly taken aback by how many people on HN are completely OK with what Apple is pulling here.
In my mind, it's irrelevant that the current system is "sufficiently inconvenient to use as a dragnet surveillance system", because it's the first step towards one that is convenient to use and if we extrapolate all the other similar efforts, we know full well what is going to happen.
stetrain 2021-08-18 14:46:16 +0000 UTC [ - ]
And my point is that if the government wanted a dragnet they could just legislate or secretly order one. Just like they have done in various forms over the last 20 years. And Apple might not even be allowed to tell us it is happening.
short_sells_poo 2021-08-18 15:22:57 +0000 UTC [ - ]
More to the point, anyone remotely sophisticated can just encrypt CSAM into a binary blob and plaster it all over the cloud providers' servers.
Ie, this system will possibly catch some small time perverts at the cost of even more potential of government misuse of the current technical landscape.
I consider it a similar situation to governments trying to legislate backdoors into encryption. A remotely sophisticated baddie will just ignore the laws, and all it does is add risks for innocents.
> And my point is that if the government wanted a dragnet they could just legislate or secretly order one. Just like they have done in various forms over the last 20 years. And Apple might not even be allowed to tell us it is happening.
I don't understand how this is an argument against the pushback at all. So just because it could be worse, we should just throw our hands in the air and say "Oh it's all fine, it could be worse, so the (so far) mild intrusion into privacy is nothing to worry about."
The thing is, these things seem to happen step by step so the outrage is minimized. Insert your favorite "frog being boiled slowly" anecdote here. You don't push back on the mild stuff, and before you realize things have gotten so much worse.
stetrain 2021-08-18 15:36:24 +0000 UTC [ - ]
I’m saying if the concern is that a government orders Apple to change it and do something different, then that’s a government problem and maybe we should try fixing that.
short_sells_poo 2021-08-18 16:15:55 +0000 UTC [ - ]
But why even give governments hints that people are generally OK with their devices being scanned?
We can argue about the technicalities of how abusable or resilient the current implementation is. But we can agree that it's a step towards losing privacy, yes? We didn't have scanning of iDevices before, now we do.
Because in my mind it's not a long shot to argue that once it becomes normalized that Apple can scan people's phones for CSAM when uploading to iCloud, it's just a small extra step to scan all pictures. The capability is basically in place already, it's literally removing a filter.
And then the next small step is not just CSAM but any fingerprint submitted by LE. And so it goes.
Governments can't legally compel Apple to implement this capability. But if the capability is already there, Apple can be compelled to turn over the information collected. Again, they can't do that if the capability and information doesn't exist.
E.g. if Apple can't decrypt data on your phone because they designed it such, then they can't be forced, even with a warrant, to backdoor your phone. They can legally refuse to add such capabilities.
stetrain 2021-08-18 16:55:56 +0000 UTC [ - ]
Yes, that was the intention of my original comment. The “slippery slope” is one of principle and precedent, more than this specific technical implementation.
short_sells_poo 2021-08-18 14:35:36 +0000 UTC [ - ]
#2 is bad on its own, and in my mind it shouldn't even get to the merits of discussing #1, because at that point the debate is already lost in favor of surveillance.
But beyond that, #1 is highly problematic, doubly so given the fact that government surveillance is basically a non-decreasing function.
cyanite 2021-08-18 20:26:09 +0000 UTC [ - ]
HumblyTossed 2021-08-18 14:03:48 +0000 UTC [ - ]
floatingatoll 2021-08-18 14:24:52 +0000 UTC [ - ]
“You tried to exploit our production systems and we’re ending our customer relationship with you over it” is a classic Apple move and no one outside of the Hackintosh community realizes that their devices include crypto-signed attestations of their serial number during Apple service sign-ins.
robertoandred 2021-08-18 14:45:23 +0000 UTC [ - ]
jug 2021-08-18 16:01:26 +0000 UTC [ - ]
robertoandred 2021-08-18 16:18:15 +0000 UTC [ - ]
dannyw 2021-08-18 16:25:02 +0000 UTC [ - ]
The low-res derivative will match, perhaps even closely, because pussy closeups look similar to an Apple employee when it's a grayscale 64-by-64-pixel image (remember: it's illegal for Apple to transmit CSAM, so it must be visually degraded to the point where it's arguably not visual).
The victim will get raided, be considered a paedophile by their workplace, media, and family, and perhaps even go to jail.
The attacker in this case can be users of Pegasus unhappy with a journalist.
robertoandred 2021-08-18 17:51:25 +0000 UTC [ - ]
sierpinsky 2021-08-18 17:53:44 +0000 UTC [ - ]
So I guess the question is what exactly "others" are doing, 'only when someone shares them, not when they are uploaded'. The whole discussion seems to center around what Apple intends to do on-device, ignoring what others are already doing in the cloud. Isn't this strange?
[1] https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/6593...
eightysixfour 2021-08-18 23:07:52 +0000 UTC [ - ]
* Will not be compelled to change the list of targeted material by government coercion.
* Will not upload the "vouchers" unless the material is uploaded to iCloud.
* Will implement these things in such a way that hackers cannot maliciously cause someone to be wrongly implicated in a crime.
* Will implement these things in such a way that hackers cannot use the tools Apple has created to seek other information (such as state sponsored hacking groups looking for political dissidents).
And for many of us, we do not believe that these things are a given. Here's a fictional scenario to help bring it home:
Let's say a device is created that can, with decent accuracy, detect drugs in the air. Storage unit rental companies start to install them in all of their storage units to reduce the risk that they are storing illegal substances, and they notify the police if the sensors go off.
One storage unit rental company feels like this is an invasion of privacy, so they install the device in your house but promise to only check the results if you move your things into the storage unit.
This sounds crazy, right?
cyanite 2021-08-18 20:59:20 +0000 UTC [ - ]
Very strange. Especially when this on-device technique means that Apple needs to access far less data than when doing it on the cloud.
magpi3 2021-08-18 12:44:49 +0000 UTC [ - ]
bitneuker 2021-08-18 13:03:28 +0000 UTC [ - ]
This, combined with human error during the manual review process, might result in someone getting reported. Seeing as Twitter (and other social media sites) jump on the bandwagon whenever someone gets accused of being a pedophile, this might destroy someone's life.
The entire story might seem a bit too far-fetched, but based on past events, you never know how bad something as 'simple' as a hash collision can be.
dannyw 2021-08-18 16:28:57 +0000 UTC [ - ]
The low-res derivative will match, perhaps even closely, because pussy closeups look similar to an Apple employee when it's a grayscale 64-by-64-pixel image (remember: it's illegal for Apple to transmit CSAM, so it must be visually degraded to the point where it's arguably not visual).
The victim will get raided, be considered a paedophile by their workplace, media, and family, and perhaps even go to jail.
The attacker in this case can be users of Pegasus unhappy with a journalist.
FabHK 2021-08-18 21:29:07 +0000 UTC [ - ]
1. accepted by innocent user,
2. flagged as known CSAM by NeuralHash,
2b. also flagged by the second algorithm Apple will run over flagged images server side as known CSAM,
3. apparently CSAM in the "visual derivative".
That strikes me as a rather remote scenario, but worth investigating. Having said that, if it's a 3-letter adversary using Pegasus unhappy with a journalist, couldn't they just put actual CSAM onto the journalist's phone? And couldn't they have done that for many years?
cyanite 2021-08-18 20:27:17 +0000 UTC [ - ]
rootusrootus 2021-08-18 14:39:51 +0000 UTC [ - ]
You can end the conversation right there. If you are up against a state actor, you have already lost.
dannyw 2021-08-18 16:33:07 +0000 UTC [ - ]
rootusrootus 2021-08-18 20:12:22 +0000 UTC [ - ]
bitneuker 2021-08-18 16:23:52 +0000 UTC [ - ]
dannyw 2021-08-18 16:34:10 +0000 UTC [ - ]
It was always possible before, but client-side CSAM detection and alerting has weaponised this.
Previously, you always had to somehow alert an unfriendly jurisdiction. Now, you just use malware like Pegasus to drop CSAM, whether real or perturbed from legal porn, and watch as Apple tips off the Feds on your enemies.
floatingatoll 2021-08-18 14:43:46 +0000 UTC [ - ]
dannyw 2021-08-18 16:31:45 +0000 UTC [ - ]
But now China can send some legal pornography (e.g. closeup pussy pictures), perturbed to match a CSAM hash, to a journalist they don't like and get them in jail.
Why couldn't China do this before? Because previously, they'd still need to tip off authorities, which has an attribution trail and a credibility barrier. Now, they can just use Pegasus to plant these images and then watch as Apple turns them in to the Feds. Zero links to the attacker.
floatingatoll 2021-08-18 17:27:48 +0000 UTC [ - ]
I know of zero instances of this attack being executed on anyone, so apparently even though it's been possible for years, it isn't a material threat to any Apple customers today. If you have information to the contrary, please present it.
What new attacks are possible upon device owners when the CSAM scanning of iCloud uploads is shifted to the device, that were not already a viable attack at any time in the past decade?
vnchr 2021-08-18 12:55:30 +0000 UTC [ - ]
manmal 2021-08-18 13:15:18 +0000 UTC [ - ]
zug_zug 2021-08-18 13:21:53 +0000 UTC [ - ]
There's a pretty good chance that it was inevitably going to get expanded to handle pictures arriving at the phone through other means.
Invictus0 2021-08-18 15:34:36 +0000 UTC [ - ]
varispeed 2021-08-18 12:55:01 +0000 UTC [ - ]
For one, you can't know if that's true, as the image could have been manipulated to appear as such. For example, you wouldn't know if a kind of steganography has been used to hide an image in an image, and that NeuralHash picked up on the hidden image.
> but I don't understand how they can get it on someone's phone
There are many vectors. For example, you can leave your phone unattended and someone can snap a picture of an image; or, since a collision may look innocent to you, you could overlook it in an email, etc...
mukesh610 2021-08-18 13:23:37 +0000 UTC [ - ]
How would NeuralHash pick a "hidden image"? It only uses the pixels of the image to get the hash. Any hidden image in the metadata would not even be picked up and no amount of steganography can fool NeuralHash.
> There is many vectors. For example you can leave phone unattended and someone can snap a picture of an image or since a collision may look innocent to you, you would overlook it in an email etc...
As iterated elsewhere in this thread, random gibberish pixels colliding with CSAM would definitely not be useful in incriminating anyone. The manual process would catch that. Also, if the manual process is overloaded, I'm pretty sure basic object recognition can filter out most of the colliding gibberish.
varispeed 2021-08-18 15:16:04 +0000 UTC [ - ]
The NeuralHash would "see" the planted image, but for the viewer it would appear innocent.
I am trying to say that a person reviewing the image manually, without special tools, will not be able to tell if the image is a false positive and would have to report everything.
reacharavindh 2021-08-18 12:11:44 +0000 UTC [ - ]
Almost all tech people know that iCloud (or its backups) are not end-to-end encrypted, so Apple can decrypt it as they wish... and those among us that are privacy conscious can comfortably turn iCloud OFF (or not sign up for one) and use the iPhone as a simple private device?
helen___keller 2021-08-18 13:19:38 +0000 UTC [ - ]
Hence why this approach was stated as a privacy win by Apple. They catch the CSAM and they don't have to look at your photos stored online.
cyanite 2021-08-18 20:29:44 +0000 UTC [ - ]
jefftk 2021-08-18 12:19:27 +0000 UTC [ - ]
bitneuker 2021-08-18 13:09:22 +0000 UTC [ - ]
reacharavindh 2021-08-19 09:44:15 +0000 UTC [ - ]
I for one am aware of this with iCloud and am still using iCloud for convenience (and by choice). If I were in need of better privacy, I can always use the iPhone without iCloud, and it will work. After this "in phone" watchdog implementation, that will not be the case. I will assume that Apple is constantly watching all my unencrypted content in the worst case, on behalf of state actors and intelligence agencies.
We have seen Apple give in to the Chinese government spying because it is the law there. US government could easily ask for this and also apply a gag order preventing Apple from being able to tell the user. The best situation is to lack such a tool.
davidcbc 2021-08-18 13:42:08 +0000 UTC [ - ]
zug_zug 2021-08-18 13:18:09 +0000 UTC [ - ]
If they are e2e encrypted they can't be distributed/shared without giving a key of some sort. Why not just scan them at time of distribution, and treat undistributed files the same as any local hard-drive (i.e. not their problem).
cyanite 2021-08-18 20:30:28 +0000 UTC [ - ]
neximo64 2021-08-18 10:26:59 +0000 UTC [ - ]
cyanite 2021-08-18 21:09:41 +0000 UTC [ - ]
jakear 2021-08-18 15:32:18 +0000 UTC [ - ]
To this end, I would not be at all surprised to see that in some not-too-distant future Apple issues a big ol’ public apology and removes this feature. The operation will of course be long complete by then.
Just writing this out here so I have a “I told you so” link for if/when that time comes :)
azinman2 2021-08-18 16:38:33 +0000 UTC [ - ]
avsteele 2021-08-18 12:08:31 +0000 UTC [ - ]
manmal 2021-08-18 13:17:34 +0000 UTC [ - ]
cyanite 2021-08-18 20:34:10 +0000 UTC [ - ]
newArray 2021-08-18 15:15:05 +0000 UTC [ - ]
Something I find interesting is the necessary consequences of the property of small edits resulting in the same hash. We can show that this is impossible to absolutely achieve, or in other words there must exist an image such that changing a single pixel will change the hash.
Proof: Start with 2 images, A and B, of equal dimension and with different perceptual hashes h(A) and h(B). Transform one pixel of A into the corresponding pixel of B and recompute h(A). At some point, after a single pixel change, h(A) = h(B); this is guaranteed to happen before or at A = B. Now A and the previous version of A are 1 pixel apart, but have different hashes. QED
We can also ATTEMPT to create an image A with a specified hash matching h(A_initial) but which is visually similar to a target image B. Again start with A and B, different images with same dimensions. Transform a random pixel of A towards a pixel of B, but discard the change if h(A) changes from h(A_initial). Since we have so many degrees of freedom for our edit at any point (each channel of each pixel) and the perceptual hash invariant is in our favor, it may be possible to maneuver A close enough to B to fool a person, and keep h(A) = h(A_initial).
If this is possible one could transform a given CSAM image into a harmless meme while not changing the hash, spread the crafted image, and get tons of iCloud accounts flagged.
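A sketch of that greedy pixel-walk, assuming `h` is any image-to-hash wrapper (an assumption, not Apple's code); it is illustrative only and would be extremely slow, since every candidate change costs one hash evaluation:

    # Nudge A toward B one channel value at a time, keeping only changes that
    # leave the perceptual hash unchanged.
    import random
    import numpy as np

    def walk_toward(a: np.ndarray, b: np.ndarray, h, iters=1_000_000, step=8):
        """a, b: uint8 arrays of identical shape. Tries to return an image close
        to b whose hash still equals h(a)."""
        assert a.shape == b.shape
        target_hash = h(a)
        cur = a.copy()
        for _ in range(iters):
            idx = tuple(random.randrange(s) for s in cur.shape)
            if cur[idx] == b[idx]:
                continue
            old = int(cur[idx])
            # Move this channel value a little toward B's value.
            cur[idx] = old + int(np.clip(int(b[idx]) - old, -step, step))
            if h(cur) != target_hash:
                cur[idx] = old              # revert any change that breaks the hash
        return cur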
mrits 2021-08-18 15:18:04 +0000 UTC [ - ]
deadalus 2021-08-18 09:52:59 +0000 UTC [ - ]
Any news about "c-ild porn" being found on someone's phone is suspect now. This has been done before :
1) https://www.deccanchronicle.com/technology/in-other-news/120...
2) https://www.independent.co.uk/news/uk/crime/handyman-planted...
3) https://www.theatlantic.com/notes/2015/09/how-easily-can-hac...
4) https://www.nytimes.com/2016/12/09/world/europe/vladimir-put...
5) https://www.cnet.com/tech/services-and-software/a-child-porn...
bko 2021-08-18 10:21:08 +0000 UTC [ - ]
dijit 2021-08-18 10:31:51 +0000 UTC [ - ]
Those people are not political enemies, they're allies for whatever regime and their support is important.
Better to keep someone powerful in your pocket than to oust them and let someone incorruptible or uncompromising fill the power vacuum.
LeoNatan25 2021-08-18 10:50:14 +0000 UTC [ - ]
dijit 2021-08-18 10:55:11 +0000 UTC [ - ]
Maybe "not sympathetic" would prevent me from being nerd sniped. :)
raxxorrax 2021-08-18 12:47:05 +0000 UTC [ - ]
Sorry for the rant.
Rd6n6 2021-08-18 12:23:33 +0000 UTC [ - ]
robertoandred 2021-08-18 15:04:00 +0000 UTC [ - ]
Why would you save tons of gray blobs to your photo library? Why would gray blobs look like child porn? Why would Apple reviewers think gray blobs are child porn? Why would the NCMEC think gray blobs are child porn? Why would law enforcement spend time arresting someone for gray blobs?
nullc 2021-08-18 16:40:20 +0000 UTC [ - ]
Since the grey blob exists, I believe it is fully possible to construct natural(-ish) images that have a selected hash.
So, you should perhaps instead imagine attackers that modify lawful nude images to have matching hashes with child porn images.
With that in mind, most of your questions are answered, except perhaps for "Why would the NCMEC think"-- and the answer would be "because it's porn and the computer says it's child porn".
Of course, an attacker could just as well use REAL child porn. But there are logistic advantages in having to do less handling of unlawful material, and it's less likely that the target will notice. Imagine: if the target already has nude images on their host the attacker might use those as the templates. I doubt most people would notice if their nude image collection was replaced with noisier versions of the same stuff. And even if they did notice, "someone is trying to frame me for child porn" wouldn't be their first guess. :)
hannasanarion 2021-08-18 18:48:45 +0000 UTC [ - ]
The same questions still apply. Why would you save tons of pictures of dogs to your photo library? Why would pictures of dogs look like child porn? Why would Apple reviewers think pictures of dogs are child porn? Why would the NCMEC think pictures of dogs are child porn? Why would law enforcement spend time arresting someone for pictures of dogs?
What is the scenario where someone can be harmed reputationally or criminally because of a hash collision attack, where the same attack could not be performed more easily and causing more damage by actually using real CSAM images?
nullc 2021-08-18 20:54:50 +0000 UTC [ - ]
You see that they can do this with pictures of dogs. What makes you think they can't do exactly the same with pictures of crotches?
Presumably you don't believe there is some kind of inherent dog-nature that makes dog images more likely to undermine the hash. :) People are using pictures of dogs because they are a tasteful, safe-for-work example, not because anyone actually imagines using images of dogs.
An actual attack would use ordinary nude images, probably ones selected so that if you were primed to expect child porn you'd believe it if you didn't spend a while studying the image.
> where the same attack could not be performed more easily and causing more damage by actually using real CSAM images?
Actual child porn images are more likely to get deleted by the target and/or reported by the target. The attacker also takes on some additional risk that their possession of the images is a strict liability crime. This means that if the planting party gets found with the real child porn they'll be in trouble vs with the hash-matching-legal-images they'll only be in trouble if they get caught planting it (and potentially only exposed to a civil lawsuit, rather than a felony, depending on how they were planting it).
Personally, I agree that the second-preimage-images are not the most interesting attack! But they are a weakness that makes the system even more dangerous. We could debate how much more dangerous they make it.
hannasanarion 2021-08-18 21:54:01 +0000 UTC [ - ]
nullc 2021-08-19 00:17:51 +0000 UTC [ - ]
As far as what kind of image would be believed to be child porn without a side-by-side with the supposed match: ordinary porn. Without context plenty would be hard to distinguish, especially with the popularity of waxed smooth bodies and explicit close up shots.
Prosecution over "child porn" that isn't is already a proved thing, with instances where the government was happily trotting out 'experts' to claim that the images were clearly a child based on physical characteristics only to be rebuffed by the actual actress taking the stand. ( https://www.crimeandfederalism.com/page/61/ )
hannasanarion 2021-08-19 15:55:27 +0000 UTC [ - ]
The entire point of this process is to catch only images from a known catalogue of existing images, of course there will be opportunity for side-by-side comparison. And if the side-by-side comparison can convince a jury that the images are the same, then the result is literally identical to if you had just sent the original image, the hash checking service hasn't impacted the attack whatsoever.
The idea that new images of nudity could be caught in it is because of a mixup by the press after it was announced, because people confused it with a different parental control feature that detects nudity in images taken by your kids' phones. This is not the same thing.
fouric 2021-08-18 17:50:32 +0000 UTC [ - ]
At the very least, there are large classes of images that, while not realistic photographs, people will willingly download, that should be very easy to craft collisions for:
Memes. In particular, given the distortions of surreal and deep-fried memes, it should be pretty easy to craft collisions there.
nullc 2021-08-18 20:56:00 +0000 UTC [ - ]
robertoandred 2021-08-18 17:49:27 +0000 UTC [ - ]
nullc 2021-08-18 20:46:08 +0000 UTC [ - ]
So someone need only find a collection of likely included images and compute their hashes and publish it.
The attack works just as well even if the attacker isn't sure that every hash is in the database, they just need to know enough likely-matches such that they get enough hits.
Consider it this way, some attacker spends a couple hours searching for child porn and finds some stuff. ... what kind of failure would the NCMEC database be if it didn't include that material?
Crontab 2021-08-18 11:56:03 +0000 UTC [ - ]
snarf21 2021-08-18 12:54:53 +0000 UTC [ - ]
cyanite 2021-08-18 20:35:28 +0000 UTC [ - ]
coldtea 2021-08-18 12:15:08 +0000 UTC [ - ]
hanklazard 2021-08-18 11:40:17 +0000 UTC [ - ]
pkulak 2021-08-18 16:26:36 +0000 UTC [ - ]
https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...
fogof 2021-08-18 19:25:21 +0000 UTC [ - ]
fortenforge 2021-08-18 18:26:37 +0000 UTC [ - ]
yayr 2021-08-18 11:54:01 +0000 UTC [ - ]
Apple announcement of neural hashing: 5.8.2021.
Generic algorithm to generate a different matching image: 8.8.2021.
one script was already released 10 days ago here https://gist.github.com/unrealwill/c480371c3a4bf3abb29856c29...
cyanite 2021-08-18 21:11:52 +0000 UTC [ - ]
yayr 2021-08-18 21:29:48 +0000 UTC [ - ]
If someone with even more motivation and the means puts those images onto your device via social engineering, exploits, or maybe even features, and you become the target of a criminal investigation in whatever jurisdiction you happen to be in at the moment, that makes the algorithm harmful.
FabHK 2021-08-19 00:08:15 +0000 UTC [ - ]
Apple can (and will) run a second algorithm server side to filter out further false positives.
> If someone with even more motivation and the means to put those images onto your device via social engineering, exploits or maybe even features
Such an attacker could just plant CSAM directly. The hash collision has no bearing on it. If, however, it is hash collision you're worried about, they'd be caught during the manual review.
yeldarb 2021-08-18 16:02:35 +0000 UTC [ - ]
They don't; I think it'd be much harder to create an image that both matches the NeuralHash of a CSAM image and also fools a generic model like CLIP as a sanity check for being a CSAM image.
I wrote up my findings here: https://blog.roboflow.com/apples-csam-neuralhash-collision/
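A sketch of that kind of independent "does this look like the same content?" check, using the open-source openai/CLIP package (the file names and the rough threshold in the comment are assumptions):

    # Compare a flagged image against a reference image with CLIP embeddings.
    # A NeuralHash collision crafted without regard to CLIP should score low.
    import torch
    import clip
    from PIL import Image

    model, preprocess = clip.load("ViT-B/32", device="cpu")

    def clip_similarity(path_a: str, path_b: str) -> float:
        imgs = torch.stack([preprocess(Image.open(p)) for p in (path_a, path_b)])
        with torch.no_grad():
            emb = model.encode_image(imgs)
            emb = emb / emb.norm(dim=-1, keepdim=True)
        return float(emb[0] @ emb[1])       # cosine similarity in [-1, 1]

    # e.g. clip_similarity("reference.png", "collision.png") well below ~0.9
    # suggests the collision is not actually the same content.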
dannyw 2021-08-18 16:23:05 +0000 UTC [ - ]
Remember, anything under 18 is treated the exact same way from a legal perspective, and once discovered Apple must report.
So there is an adversarial attack in which you find legal porn of, say, 18-year-old pussy closeups, perturb it to match CSAM under NeuralHash, and send it to your enemies.
The Apple employee will verify a match. The Feds will raid your target and imprison them, and reputationally tarnish them for life.
Now think about how Pegasus can remotely modify your photos. And think about Julian Assange's sexual assault allegations that were admitted to be fabrications by the accuser.
proactivesvcs 2021-08-18 14:48:14 +0000 UTC [ - ]
When you're promised that this will never happen to you because there are humans in the process remember that they're one or more of underpaid, poorly-trained, poorly-treated, under-resourced or just flat-out terrible at their job.
This may well happen to you and you'll have no way back.
nine_k 2021-08-18 14:57:07 +0000 UTC [ - ]
AFAICT the hash function / perceptual model is openly available, or, at least, easy to find online.
bengale 2021-08-18 15:31:00 +0000 UTC [ - ]
teitoklien 2021-08-18 15:38:57 +0000 UTC [ - ]
I'd trust the algorithm more than the human reviewing it; considering the algorithm already shows flaws and loopholes, the human part of it does not inspire any confidence.
This isn't Facebook, these are your private photos; it shouldn't have reached this stage in the first place.
bengale 2021-08-18 16:02:40 +0000 UTC [ - ]
> How society went from keeping personal photos private
Also, didn't local photo development shops use to call the police quite frequently when dodgy images were sent in to be developed? I remember it happening once at our local supermarket.
teitoklien 2021-08-18 16:12:38 +0000 UTC [ - ]
So someone might forward you your personal pics from your meeting with them, and the messenger app you use might edit the pixels when saving the picture into your photos, specifically to collide its hash with a CSAM image, while to you the image will look perfectly normal.
This is one example. Let's say you 100% trust every developer of every app that you download on Apple's phone.
Fine, but that's already a lot of trust in a lot of people over something that can ruin your life.
Do you now trust every single image which might get auto-downloaded to your gallery by a malicious actor? What if a random anon person messages you with 100 such colliding images, enough to cross the threshold to get the authorities knocking on your door?
Is this "feature" really worth it, to have the inconvenience of authorities knocking on your door and going through legal trouble?
For an iPhone?
bengale 2021-08-18 16:16:31 +0000 UTC [ - ]
Ok, but then that image is already not private, if I've been sending it to someone that could do that.
> This is one example , lets say you 100% trust every developer of every app that you download on apple’s phone.
I don't have to, they have to ask me before storing images in my photo library. I'm not going to give some fart generator app access to my library.
> Do you now trust every single image which might get auto downloaded to your gallery by a malicious actor , what if a random anon person messages you with 100 such colliding images , enough to cross threshold to get the authorities knocking your door
Beyond the fact that nothing writes images to my library automatically: if they sent me 100 images that are grey blobs or slightly manipulated normal images, they don't get past the check anyway, so no police. If they send me 100 CSAM images I'll be on the phone to the police anyway.
teitoklien 2021-08-18 16:24:52 +0000 UTC [ - ]
You're right indeed, you can protect yourself from this, but all the measures you mentioned are settings which are non-default.
A lot of apps annoyingly turn it on by default. While HN users can turn it off, I doubt my grandparents will go through the same effort or understand it.
The question is, why should they? Their phone was supposed to help them, not be a snitch, and a snitch with flaws at that.
Why should my non-tech friends train to protect themselves from "their" phones?
Human society has always tried to place coexistence and trust at the forefront, as its ideal wish for society.
This makes us and our devices act as snitches to each other.
Snitch for whom? Apple?
But yes, you're right that if it's a normal image it will stop at the review level (although that's on the condition you trust the Apple employees who are reviewing them).
I just still don't think it's a good idea to even reach this step.
This just makes our computers feel more hostile.
bengale 2021-08-18 16:32:39 +0000 UTC [ - ]
If this was such a major problem why has no-one been targeted like this in the last decade when everyone else was scanning for it?
teitoklien 2021-08-18 16:37:16 +0000 UTC [ - ]
bengale 2021-08-18 16:53:13 +0000 UTC [ - ]
teitoklien 2021-08-18 17:00:46 +0000 UTC [ - ]
Everyone else is a cloud service scanning for images. Cloud or on-device won't change the math; the math stays the same, and here the math makes mistakes.
> A picture of someone’s children wouldn’t trigger these systems at all
The main post under which we are conversing , is about someone having generated a collision to trigger this system.
bengale 2021-08-18 17:27:53 +0000 UTC [ - ]
The post we're commenting under has created hash collisions with grey blob images, if you get 100 of these they won't get past the review step anyway. It'd be a pointless attack.
Like most of the outrage around this system it mostly seems to boil down to FUD.
borplk 2021-08-18 11:27:25 +0000 UTC [ - ]
Good luck!
newsclues 2021-08-18 11:50:51 +0000 UTC [ - ]
Then leaked or unauthorized nudes will get filtered.
Filters for terrorists, and terrorist imagery or symbols.
Start scanning for guns and drugs.
Drug dealers and criminals get added to the list.
How long before it's dissidents, political opponents, and minorities in dictatorships?
How long before Tim Cook's CP filters are used against LGBT groups abroad?
zug_zug 2021-08-18 13:31:37 +0000 UTC [ - ]
Build surveillance to catch those people and I'll listen.
personlurking 2021-08-18 17:39:53 +0000 UTC [ - ]
Change the name on the box to something else, but same message.
newsclues 2021-08-18 15:33:34 +0000 UTC [ - ]
Crontab 2021-08-18 12:23:45 +0000 UTC [ - ]
Siri: "You seem to be having unpatriotic thoughts. We are dispatching someone to help you."
jnsie 2021-08-18 13:35:26 +0000 UTC [ - ]
robertoandred 2021-08-18 14:46:19 +0000 UTC [ - ]
newsclues 2021-08-18 15:31:55 +0000 UTC [ - ]
My Instagram account is mainly weed (legal in Canada) and Instagram censorship hits my content occasionally (because illegal cannabis selling on the platform is rampant).
robertoandred 2021-08-18 16:17:26 +0000 UTC [ - ]
newsclues 2021-08-18 16:22:20 +0000 UTC [ - ]
robertoandred 2021-08-18 16:23:51 +0000 UTC [ - ]
cyanite 2021-08-18 20:37:35 +0000 UTC [ - ]
plutonorm 2021-08-18 13:34:38 +0000 UTC [ - ]
newsclues 2021-08-18 15:30:29 +0000 UTC [ - ]
rogers18445 2021-08-18 13:46:09 +0000 UTC [ - ]
orange_puff 2021-08-18 13:14:41 +0000 UTC [ - ]
kuu 2021-08-18 09:51:58 +0000 UTC [ - ]
scandinavian 2021-08-18 10:16:06 +0000 UTC [ - ]
cyanite 2021-08-18 20:39:16 +0000 UTC [ - ]
jhugo 2021-08-18 10:22:16 +0000 UTC [ - ]
nulld3v 2021-08-18 09:55:56 +0000 UTC [ - ]
eptcyka 2021-08-18 10:02:57 +0000 UTC [ - ]
jdlshore 2021-08-18 18:57:00 +0000 UTC [ - ]
The database of CSAM hashes is blinded and no one has the hashes. Without the hashes, this attack is useless.
It's also mitigated by a LOT of checks and balances. First they have to know 30 hashes to target (they're secret). They have to get 30 colliding images on your phone. The images have to be unnoticed by you (why not just infiltrate CSAM, then?) or sufficiently compelling that you don't just delete them. Thirty images have to pass human review at Apple. At least one has to pass human review by law enforcement. Then, and only then, will you be arrested and face a threat.
Short version: If somebody wants to frame you for possessing CSAM, there are much easier ways. There is no new threat here. https://xkcd.com/538/
cyanite 2021-08-18 20:40:23 +0000 UTC [ - ]
True it’s not perfect here, but you should see Reddit, then :p. People there hardly even know what they are mad about.
visarga 2021-08-18 19:32:45 +0000 UTC [ - ]
raspasov 2021-08-18 13:48:47 +0000 UTC [ - ]
But, as a side note...
I get the feeling that a lot of people assume that the CSAM hashes are going to be stored directly on everyone's phone so it's easy to get a hold of them and create images that match those hashes.
That does not seem to be the case. The actual CSAM hashes go through a "blinding" server-side step.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
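As a rough illustration of why the blinding step matters (a conceptual stand-in only; Apple's actual protocol uses elliptic-curve blinding inside a private set intersection scheme, not HMAC):

    # The table shipped to devices is derived from the raw CSAM hashes with a
    # server-held secret, so holding the on-device database does not reveal
    # (or let you target) the raw hashes.
    import hashlib
    import hmac
    import secrets

    server_secret = secrets.token_bytes(32)         # never leaves the server

    def blind(neural_hash_hex: str) -> bytes:
        return hmac.new(server_secret, bytes.fromhex(neural_hash_hex),
                        hashlib.sha256).digest()

    # What ships to devices is {blind(h) for h in csam_hashes}; without the secret,
    # an attacker cannot recover or enumerate the underlying hashes from it.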
cyanite 2021-08-18 20:38:28 +0000 UTC [ - ]
maz1b 2021-08-18 16:56:03 +0000 UTC [ - ]
daralthus 2021-08-18 17:37:27 +0000 UTC [ - ]
nojito 2021-08-18 12:56:02 +0000 UTC [ - ]
>Once Apple's iCloud Photos servers decrypt a set of positive match vouchers for an account that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.
They also fuzz this process by sending false positives I think?
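To make the quoted flow concrete, here is a rough sketch of the described server-side pipeline (function names and the exact threshold are assumptions, not Apple's code): vouchers only become readable past the threshold, the visual derivatives are re-checked with a second, independent perceptual hash, and only confirmed matches reach human reviewers.

    MATCH_THRESHOLD = 30   # Apple has suggested roughly 30; treat as an assumption

    def review_account(vouchers, decrypt, second_hash, known_csam_hashes, human_review):
        if len(vouchers) < MATCH_THRESHOLD:
            return None                      # below the threshold nothing is decryptable
        derivatives = [decrypt(v) for v in vouchers]
        confirmed = [d for d in derivatives if second_hash(d) in known_csam_hashes]
        if not confirmed:
            return None                      # NeuralHash-only collisions stop here
        return human_review(confirmed)       # manual confirmation before any report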
notquitehuman 2021-08-18 14:12:20 +0000 UTC [ - ]
floatingatoll 2021-08-18 14:39:01 +0000 UTC [ - ]
1) This is invasive.
2) Slippery slope.
3) Gets innocents arrested.
Their solution mitigates #3 with review and mitigates #1 with blurring, and those mitigations occur at the point in the process where you claim it doesn't matter what they do.
Please don’t reductively dismiss facts that don’t support your narrative. Saying that it doesn’t matter is wrong and serves only to boost your particular viewpoint on #2.
nojito 2021-08-18 15:45:31 +0000 UTC [ - ]
bugmen0t 2021-08-18 12:57:36 +0000 UTC [ - ]
rollinggoron 2021-08-18 14:19:31 +0000 UTC [ - ]
steeleduncan 2021-08-18 14:32:57 +0000 UTC [ - ]
You can whatsapp someone an innocent image doctored to have a hash collision with known CSAM. If they have default settings it will be saved to their photo reel, scanned by iOS and the police will be called.
Until the arresting police officer explains to them they are being arrested on suspicion of being a paedophile, they won't even know this has happened.
ec109685 2021-08-18 16:13:46 +0000 UTC [ - ]
cyanite 2021-08-18 20:41:35 +0000 UTC [ - ]
dragonwriter 2021-08-18 20:45:21 +0000 UTC [ - ]
floatingatoll 2021-08-18 14:20:58 +0000 UTC [ - ]
dgb99 2021-08-18 14:56:40 +0000 UTC [ - ]
marcob95 2021-08-18 15:24:59 +0000 UTC [ - ]
cyanite 2021-08-18 20:43:28 +0000 UTC [ - ]
But what you should do is read the documents yourself.
a-dub 2021-08-18 15:16:22 +0000 UTC [ - ]
in other news, my google phone just helpfully scanned all client side images, rounded up everything it thought was a meme and suggested i delete them to save space. i wonder if that feature has telemetry built in...
hughrr 2021-08-18 10:04:38 +0000 UTC [ - ]
I feel vindicated now. There are a lot of people saying that I’m insane as I’ve dumped the entire iOS ecosystem in the last week. But Craig was busy steamrolling out the marketing still only a couple of days back about how this is good for the world. No way am I going back.
Edit: I’m going to reply to some child comments here because I can’t be bothered to reply to each one. This is step one down the stairs to hell. If even one person is caught with this it will be leverage to move to step 2. Within a few years your entire frame buffer and camera will be working against you full time.
nbzso 2021-08-18 11:48:46 +0000 UTC [ - ]
Billionaires at Apple don't give a flying f*ck about users' privacy. It is all vertical integration in the name of world domination. How removed from reality they are. This is a week after Pegasus/NSO, and there is no law that requires them to "scan" on device. So the logical explanation is "growth". Way to go Apple.
sandworm101 2021-08-18 13:14:27 +0000 UTC [ - ]
There is. Apple must comply with warrant requests. If they have a system for scanning files on customer devices they must, if presented with a warrant, allow police access to that system. We can quibble about jurisdictions and constitutional protections, but if the FBI shows up with a federal warrant demanding that Apple remotely scan Sandworm101's phone for a particular image, Sandworm101's phone is going to be scanned.
ethbr0 2021-08-18 13:44:25 +0000 UTC [ - ]
Currently, they must comply with warrant requests by scanning if they have the ability to scan.
If they have no such ability (say, because they designed their phones from a privacy-first perspective), the law makes no requirement that they create such a capability.
And that's what pisses people off about this.
snowwrestler 2021-08-18 14:50:25 +0000 UTC [ - ]
The people getting pissed off about this have not, so far as I’ve seen, demonstrated why the law would require Apple to add the capability for targeted scans of arbitrary hashes.
Police can use a warrant to ask you for a video file they suspect you have. They can’t use a warrant to force you to videotape someone—even if you already own a video camera. The same principle applies to Apple.
sandworm101 2021-08-18 15:01:15 +0000 UTC [ - ]
Maybe in the narrow context of US domestic child pornography investigations. In the wider world it is very possible for police to force such things. Even in the US, CALEA demands that certain companies develop abilities that they would not normally want (interception). The principle that US companies need not actively participate in police investigations disappeared decades ago. On the international level, all bets are off.
https://en.wikipedia.org/wiki/Communications_Assistance_for_...
"by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in capabilities for targeted surveillance [...] USA telecommunications providers must install new hardware or software, as well as modify old equipment, so that it doesn't interfere with the ability of a law enforcement agency (LEA) to perform real-time surveillance of any telephone or Internet traffic."
snowwrestler 2021-08-18 15:13:13 +0000 UTC [ - ]
And why such application would have had to wait until Apple announced this? In other words, if the law can force Apple to do things in general, why does the law need to wait for Apple to announce certain capabilities first?
sandworm101 2021-08-18 15:19:20 +0000 UTC [ - ]
Read what I stated. I said no such thing. I said that CALEA shows that companies can sometimes be forced to be do things they do not want to do, to take actions at the behest of law enforcement that they would not normally do. CALEA would clearly not be applied to apple in this case. It would be some other law/warrant/NSL that would force apple to do something it normally would not do. CALEA stands as proof that such things have long been acceptable under US law.
snowwrestler 2021-08-18 15:35:22 +0000 UTC [ - ]
I thought this particular conversation was about what Apple can be forced to do by a warrant issued under current law.
HelloNurse 2021-08-18 15:13:48 +0000 UTC [ - ]
This isn't the sort of thing that is mandated by law, but rather requested behind the scenes by espionage, er, "law enforcement" agencies. They might be more or less friendly deals; nice monopoly you have here, it would be a shame if something happened to it...
tantalor 2021-08-18 14:14:35 +0000 UTC [ - ]
simcop2387 2021-08-18 14:22:38 +0000 UTC [ - ]
sandworm101 2021-08-18 14:32:48 +0000 UTC [ - ]
jamesdwilson 2021-08-18 14:45:23 +0000 UTC [ - ]
sandworm101 2021-08-18 15:13:34 +0000 UTC [ - ]
roenxi 2021-08-18 14:44:50 +0000 UTC [ - ]
Just because someone could force you to do something you don't want to do is not really a major concern if you choose, preemptively, to do the thing. The concerning part here is Apple signalling that they have executives that don't see phone scanning as a problem. That is a major black eye for their branding.
lumost 2021-08-18 14:50:25 +0000 UTC [ - ]
If they just sold the device to someone, then the police would have to issue a warrant to that individual. My understanding is that as an individual your options are either to comply with the warrant or commit a separate crime destroying the evidence or refusing the warrant.
snowwrestler 2021-08-18 15:00:21 +0000 UTC [ - ]
The legal principle is the same, actually. The law says you can install whatever software you want on an iPhone and Apple cannot stop you (jailbreaking is legal). But the law cannot force Apple to make it easy for you to do so.
ethbr0 2021-08-18 17:48:23 +0000 UTC [ - ]
I.e. If I use an app, on a system in which app updates are possible, which encrypts my data locally and stores it on their cloud, then the owner/responsible party for the app is...?
jodrellblank 2021-08-18 14:14:51 +0000 UTC [ - ]
and they have not created such a capability. This is built into the image upload library of iCloud. This has less ability to "scan the device" than the iOS updater, which can run arbitrary code supplied by Apple and read/write anywhere on the system. This is less able to be extended to a general-purpose scanner than that is.
treis 2021-08-18 14:50:18 +0000 UTC [ - ]
Today they aren't liable because there is a law that shields them. But that law comes with strings attached around assisting law enforcement. By doing an end run around those strings, Apple is not holding up its end of the bargain and runs the risk of lawmakers removing the liability shield.
If that happens then it's a whole new ball game.
brandon272 2021-08-18 15:30:03 +0000 UTC [ - ]
If someone sends CSAM using Federal Express, is FedEx legally liable for "possessing and distributing" that material? Does FedEx need to start opening up packages and scanning hard drives, DVDs, USB sticks, etc. to ensure that they don't contain any CSAM or other illegal data?
I really struggle with the lengths that people are going to to justify these moves. If we can justify this, it's pretty simple to justify a lot more surveillance as well. CSAM is not the only scourge in our society.
Reminder: Apple tells us that they consider privacy a "fundamental human right"[1]. That simply does not square with their recent announcement of on-device scanning, and some would argue that it does not square with scanning or content analysis anywhere, especially on behalf of government.
treis 2021-08-18 16:30:38 +0000 UTC [ - ]
Yes:
https://www.dea.gov/press-releases/2014/07/18/fedex-indicted...
That was for drugs but conceptually the same for CSAM.
brandon272 2021-08-18 16:55:28 +0000 UTC [ - ]
FedEx was not held liable simply because their service was used to mail illegal drugs. They were held liable because not only did they know about the specific instances in which it was mailed, they allegedly conspired with the shippers to facilitate the mailings. They were knowingly mailing packages to parking lots where drug dealers would wait to pick them up:
> According to the indictment, as early as 2004, FedEx knew that it was delivering drugs to dealers and addicts. FedEx’s couriers in Kentucky, Tennessee, and Virginia expressed safety concerns that were circulated to FedEx Senior management, including that FedEx trucks were stopped on the road by online pharmacy customers demanding packages of pills; that the delivery address was a parking lot, school, or vacant home where several car loads of people were waiting for the FedEx driver to arrive with their drugs; that customers were jumping on the FedEx trucks and demanding online pharmacy packages; and that FedEx drivers were threatened if they insisted on delivering packages to the addresses instead of giving the packages to customers who demanded them.
They had drug addicts literally stopping FedEx trucks on the street to intercept packages from online pharmacies. I don't see how this specific situation is analogous to Apple at all.
The fact remains that FedEx is not scanning hard drives, DVD or other storage devices going through their network and they don't appear to be breaking any laws by not doing that scanning.
treis 2021-08-18 18:56:17 +0000 UTC [ - ]
Apple knows that CSAM is being sent and conspires to do so (i.e. transmits the image). Conceptually they are the same.
The rest of your post details the practical differences between sending physical packages and digital images.
brandon272 2021-08-18 22:20:59 +0000 UTC [ - ]
FedEx knows that CSAM is sent using its services in the same way that Apple does. That is to say that both companies know that CSAM has been sent historically and both know that is possible to send that type of material using its services, and one could argue that they should assume that their services are used to transfer that data.
Why does one of these companies have to go to such great depths to search out and report these materials and the other does not? If we suggest that Apple "knows" about CSAM on their network, we must also accept that Apple "knows" about many other crimes in which the devices they sell are used in the planning and execution of those crimes. Why are they only focused on CSAM if they'd also have legal exposure in these other crimes simply for being a hardware and service provider?
Lastly, to suggest that Apple "conspires" to transmit this content is wholly inaccurate. Conspiracy implies two or more parties are in explicit agreement to commit a crime.
treis 2021-08-18 22:56:58 +0000 UTC [ - ]
It's not knowledge of a crime. Apple is committing a crime by possessing and distributing CSAM.
>Lastly, to suggest that Apple "conspires" to transmit this content is wholly inaccurate. Conspiracy implies two or more parties are in explicit agreement to commit a crime.
But that is exactly what they're doing. They are agreeing to possess and deliver images of which they know a portion are CSAM.
brandon272 2021-08-18 23:58:42 +0000 UTC [ - ]
Apple's liability for content is limited by Section 230. If they find out about specific instances of CSAM on their servers they are required to remove it.
They are not required to hunt for it or otherwise search it out. They are not required to scan for it. Anything currently being done in that regard is voluntary.
> But that is exactly what they're doing. They are agreeing to possess and deliver images of which they know a portion are CSAM.
When you use Apple's cloud services, you agree to multiple legally binding agreements that include provisions about not using their cloud services to store or transmit illegal materials including CSAM.
Someone using Apple's services to do that in contravention of those agreements does not constitute them "agreeing to possess and deliver" that content.
What WOULD constitute them conspiracy and them agreeing to possess and deliver that content would be if they were informed of the specific instance of content being transmitted and delivered and did nothing about it or otherwise enabled its continued storage and distribution.
khuey 2021-08-18 14:02:12 +0000 UTC [ - ]
dheera 2021-08-18 16:49:42 +0000 UTC [ - ]
(b) The world is not the United States
snowwrestler 2021-08-18 14:37:49 +0000 UTC [ - ]
1) Even if Apple does roll out this system, it’s not a system for scanning files on customer devices. It’s a system for comparing photos being uploaded to iCloud to a single standardized (everyone has the same one) list of hashes.
2) Apple likely has the legal power to refuse to alter that list of hashes, by the same argument they used against the FBI’s request to bypass the unlock code on specific iPhones.
3) Any argument about what Apple will and will not need to do with this system needs to explain how Microsoft Defender (or other AV products) interact with law enforcement, since those are software systems that scan client devices (ALL files, typically) for signature matches.
Grustaf 2021-08-18 13:14:19 +0000 UTC [ - ]
I'm guessing you need at least 5 images, perhaps much more, to trigger it. In any case, 1 image is definitely not enough.
sandworm101 2021-08-18 13:22:07 +0000 UTC [ - ]
It is akin to cops wanting to search a car. They don't need a warrant. They only need to follow the car until it breaks any number of traffic laws. Then the resulting traffic stop lets them talk to the driver, who seems evasive, which gives them probable cause, which gets them a warrant to perform a full search. The accidental collision, weak evidence of an offending image on the phone, opens the door to whatever other investigation they want to do.
merpnderp 2021-08-18 14:22:39 +0000 UTC [ - ]
Why would people pay a company to falsely accuse them of horrific crimes?
JKCalhoun 2021-08-18 14:35:50 +0000 UTC [ - ]
sandworm101 2021-08-18 14:55:12 +0000 UTC [ - ]
snowwrestler 2021-08-18 15:06:54 +0000 UTC [ - ]
moe 2021-08-18 15:57:49 +0000 UTC [ - ]
This is not about child porn nor about what AV software can or cannot do.
It's about normalising mass surveillance and implementing populace control. They don't want another Snowden to scare the public with reports about "backdoors" and mass privacy violations.
They want the coming generation to perceive it as normal. Because all phones do it, hasn't it always been this way, and think of the children.
Oh, these dissidents in $distant_country that will be muted and killed using Apple's shiny surveillance tech? Well, evil governments do what they do. But we are not like that. Over here it is only about the child predators. Trust us.
Apple has been an opponent of these developments for decades. Now they are spearheading it.
sandworm101 2021-08-18 15:24:36 +0000 UTC [ - ]
Mine doesn't. If Ubuntu or ClamAV is scanning all my photos and reporting the results to Canonical against my will then I will soon be having words with Mr. Linus.
snowwrestler 2021-08-18 15:42:02 +0000 UTC [ - ]
sandworm101 2021-08-18 17:14:38 +0000 UTC [ - ]
Because if some local police agency like the FBI were altering code on an open source project as large as ubuntu then the world would know about it pretty quickly. The linux community would burn every bridge with Canonical and ubuntu would disappear overnight. The idea of the FBI adding malware to ubuntu unnoticed, malware that openly scans files and reports to remote servers, is the stuff of comic books.
MichaelGroves 2021-08-18 16:41:21 +0000 UTC [ - ]
jasamer 2021-08-18 16:27:09 +0000 UTC [ - ]
Grustaf 2021-08-18 13:25:08 +0000 UTC [ - ]
And the police probably won't be using any neural hashes if they have access to your device, so I'm not sure what you are talking about.
sandworm101 2021-08-18 13:28:16 +0000 UTC [ - ]
Unlike Lavabit, I don't think Apple will ever close up shop to protect customer data.
Grustaf 2021-08-18 13:32:40 +0000 UTC [ - ]
fsflover 2021-08-18 16:25:02 +0000 UTC [ - ]
Grustaf 2021-08-18 23:22:18 +0000 UTC [ - ]
fsflover 2021-08-19 16:42:06 +0000 UTC [ - ]
jasamer 2021-08-18 16:22:53 +0000 UTC [ - ]
brokenmachine 2021-08-19 01:44:37 +0000 UTC [ - ]
if (numMatches > matchThreshold) { snitch(customer) && customer.life.ruin() }
gorbachev 2021-08-18 14:15:51 +0000 UTC [ - ]
Platform owners like Google, Apple, Twitter and Facebook should really keep stuff like that in mind when they deploy algorithmic solutions like this.
Like dhosek said, the manual review step should reduce the risk quite a bit, though.
Grustaf 2021-08-18 14:28:55 +0000 UTC [ - ]
So that's per photo library, not per photo. The rate per photo in their testing was 3 in 100 million, then they added a 30x safety margin and assumed it's actually 1 in a million.
https://www.zdnet.com/article/apple-to-tune-csam-system-to-k...
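For what it's worth, you can sanity-check what those per-photo numbers imply for a whole library. This is my own back-of-the-envelope arithmetic, not Apple's: it assumes the 1-in-a-million per-photo rate above, a 100,000-photo library, and the roughly 30-match threshold that has been reported.

    # Probability that a library of n innocent photos produces at least t
    # accidental matches, with per-photo false-positive rate p.
    from math import comb

    p, n, t = 1e-6, 100_000, 30

    # Sum the upper tail directly; computing 1 - P(fewer than t) would lose
    # everything to floating-point cancellation. The terms shrink so fast
    # that the first 20 dominate the whole tail.
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, t + 20))
    print(f"{tail:.2e}")   # ~3e-63: crossing the threshold by chance is negligible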
JKCalhoun 2021-08-18 14:33:18 +0000 UTC [ - ]
To be fair though we do not know what the threshold is. But I would guess even higher than 5 — I would presume 12 or more.
I'm no criminologist (IANAC) but when you read about someone getting busted with child pornography they have hundreds or thousands of images — not one, not five. They're "trading cards" for these creeps.
dhosek 2021-08-18 13:47:26 +0000 UTC [ - ]
floatingatoll 2021-08-18 14:52:29 +0000 UTC [ - ]
(And yes, 90% of CSAM abusers are men, so y’all will have to find some other way to weaken my argument.)
mschuster91 2021-08-18 12:55:55 +0000 UTC [ - ]
Yet. The demands by "concerned parents" (aka fronts for secret services, puritans/other religious fundamentalists and law-and-order hardliners) to "do something against child porn" have grown ever stronger and more insane over the last few years. (And you can bet that what is used on CSAM will immediately be used to go after legal pornography, sex work, drug enforcement, ...)
Many current and powerful politicians don't understand a single bit about computers, to the point of bragging about never having used one (e.g. Japan's cybersecurity minister) or having assistants print out emails daily and transcribe handwritten responses (can't say more than that this is the situation for at least two German MPs) - but there are young politicians who do, and who will hopefully replace them over the next decade. That means it's the last chance for lobbying groups to get devastating laws passed, and we must all be vigilant in spotting and preventing at least the worst of them.
And what is suspiciously lacking in Apple's response: what are they going to do when they are compelled to extend the CSAM scanner by law in the US, India, China and/or EU? It's feasible for Apple to say they'll just be sticking the middle finger towards markets such as Russia, Saudi Arabia or similar tiny dictatorships, but the big markets? Apple can't ignore these, and especially India and China are heavyweights.
Grustaf 2021-08-18 13:27:47 +0000 UTC [ - ]
CSAM scanning is only useful for areas where Apple really doesn't want to even look at the actual material until they are extremely certain that it's a match. If they want to detect anti-government propaganda or something, there would be no such concerns, they would just do a regex search of your inbox. Infinitely more convenient.
sandworm101 2021-08-18 13:43:47 +0000 UTC [ - ]
Correct. It has plausible deniability built in. Apple is unable to verify that the images the government are looking for are actually CSAM. They could be political. They could be protest images. They could be Winnie the Pooh. Apple can plead ignorance as it blindly scans for whatever the requesting government asks it to scan for.
Nobody really minds that this system is going to be used for CSAM. What everyone recognizes is how ripe this system is for abuse, how easily it can be leveraged by oppressive governments. And Apple can play the innocent.
mlindner 2021-08-18 14:11:40 +0000 UTC [ - ]
I beg to differ. It doesn't matter how evil the content is: no scanning of my computers by outside parties, period. What's more, with scanning in place law enforcement can even plant real child pornography on people's computers and get convictions all the more easily, because the system self-reports.
Grustaf 2021-08-18 14:18:21 +0000 UTC [ - ]
jasamer 2021-08-18 16:34:07 +0000 UTC [ - ]
Also, it has to be two requesting governments.
sandworm101 2021-08-18 16:49:13 +0000 UTC [ - ]
Grustaf 2021-08-18 14:00:07 +0000 UTC [ - ]
kps 2021-08-18 14:10:32 +0000 UTC [ - ]
Grustaf 2021-08-18 14:16:29 +0000 UTC [ - ]
Any spying really. It's much easier to just look at the images themselves.
ttflee 2021-08-18 15:29:34 +0000 UTC [ - ]
It's Joe Wong's joke: it's like peeing in the snow on a dark winter night. There is a difference, but it's really hard to tell.
short_sells_poo 2021-08-18 14:47:29 +0000 UTC [ - ]
Grustaf 2021-08-18 16:18:36 +0000 UTC [ - ]
MichaelGroves 2021-08-18 16:46:03 +0000 UTC [ - ]
ethbr0 2021-08-18 13:49:40 +0000 UTC [ - ]
Which Apple will dutifully install and run, because they're required by local laws.
jodrellblank 2021-08-18 14:18:28 +0000 UTC [ - ]
Which Apple have stated that they won't do, and have designed the system so they can't do that without it being found out: https://news.ycombinator.com/item?id=28221082
Can't you at least post accurate information about this system and support outrage based on facts instead of fantasy?
mschuster91 2021-08-18 15:10:23 +0000 UTC [ - ]
For your random run-of-the-mill dictatorship, yes.
For the US? EU? China? India? No way they can refuse such a request from these markets. And if they could and get away with it, it would be a very worrying situation in itself regarding (democratic) control of government over global mega corporations.
jodrellblank 2021-08-18 15:28:04 +0000 UTC [ - ]
Although how do you think Linus Torvalds "managed to get away with refusing" to add backdoors? https://www.techdirt.com/articles/20130919/07485524578/linus...
mschuster91 2021-08-18 16:58:29 +0000 UTC [ - ]
jodrellblank 2021-08-18 14:08:05 +0000 UTC [ - ]
from https://daringfireball.net/linked/2021/08/09/apple-csam-faq - "we will not accede to any government’s request to expand it."
From https://www.msn.com/en-us/news/technology/craig-federighi-sa... - "“We ship the same software in China with the same database we ship in America, as we ship in Europe. If someone were to come to Apple [with a request to scan for data beyond CSAM], Apple would say no. But let’s say you aren’t confident. You don’t want to just rely on Apple saying no. You want to be sure that Apple couldn’t get away with it if we said yes,” he told the Journal. “There are multiple levels of auditability, and so we’re making sure that you don’t have to trust any one entity, or even any one country, as far as what images are part of this process.”" - Craig Federighi
jkestner 2021-08-18 14:20:24 +0000 UTC [ - ]
ksec 2021-08-18 14:55:09 +0000 UTC [ - ]
If they stand their ground it makes the decision a lot easier for others. If they don't, they will continue to use boiling-frog tactics.
After all, they don't think they are wrong and they still feel righteous. And it would be far more interesting to see how the world unfolds. The whole computing market - Google, Apple and Microsoft - is currently operating with a moat that is literally impenetrable. This will be a small push towards a new world of possibilities.
Yes. I really hope they stand their ground. I wish I had won the lottery so I could put a few million into the social media echo chamber to support this. Or, you know, *cough*, a certain Apple competitor could do this.
danuker 2021-08-18 15:11:29 +0000 UTC [ - ]
mda 2021-08-18 17:43:34 +0000 UTC [ - ]
ksec 2021-08-18 20:55:48 +0000 UTC [ - ]
floatingatoll 2021-08-18 15:23:07 +0000 UTC [ - ]
dmix 2021-08-18 16:25:39 +0000 UTC [ - ]
I personally thought the unencrypted backups were enough of a death knell, as they provide everything on your phone, but anything with real-time on-device access is always a gold mine for surveillance hawks.
floatingatoll 2021-08-18 17:17:30 +0000 UTC [ - ]
neolog 2021-08-18 17:46:29 +0000 UTC [ - ]
How long until a group of governments tells Apple to add Tank Man to the list?
floatingatoll 2021-08-18 19:09:35 +0000 UTC [ - ]
Just pass legislation requiring in-country datacenters that can be decrypted by thoughtcrime enforcers, like Russia and China are doing. Trying to get this done via a CSAM list that's absurdly closely audited would be a huge waste of time and not provide any significant benefit, and if such a request were ever made public, would likely result in severe political and economic sanctions.
That's what everyone's missing in this argument. There's no need to be all underhanded and secretive when you can just pass laws and conduct military-backed demands upon companies using those laws. Trying to exploit the CSAM process would be a horrifically bad idea, and would result in public exposure and humiliation, rather than the much more useful outcome that simply passing a law would provide.
neolog 2021-08-18 21:12:18 +0000 UTC [ - ]
If Apple deploys on-phone scanning, governments can just tell Apple to support a new list. It won't be the NCMEC CSAM list. It will be a "public safety and security" list. I wouldn't rule out underhandedness either. [1]
[1] https://www.nytimes.com/2020/07/01/technology/china-uighurs-...
floatingatoll 2021-08-18 21:31:15 +0000 UTC [ - ]
How is Apple's new CSAM list somehow increasing the chances of Apple going rogue, given that we've all been living with that risk for the past X years?
neolog 2021-08-18 21:45:46 +0000 UTC [ - ]
floatingatoll 2021-08-18 22:02:47 +0000 UTC [ - ]
https://support.apple.com/guide/security/protecting-against-...
Each system is closed source, provides a mechanism for checking content signatures against files on disk, and is thought to report telemetry to Apple when signatures are found.
How is CSAM scanning new and different from those existing closed-source systems?
neolog 2021-08-18 22:39:09 +0000 UTC [ - ]
floatingatoll 2021-08-18 22:50:42 +0000 UTC [ - ]
neolog 2021-08-19 00:27:38 +0000 UTC [ - ]
https://stratechery.com/2021/apples-mistake/ is a smart tech commentator
https://www.nytimes.com/2021/08/11/opinion/apple-iphones-pri... are two security/encryption experts
hughrr 2021-08-18 18:25:27 +0000 UTC [ - ]
If they want to do business in China they will be forced by their legislation.
My entire argument is don’t build the mechanism.
floatingatoll 2021-08-18 19:19:39 +0000 UTC [ - ]
cyanite 2021-08-18 20:44:56 +0000 UTC [ - ]
China hosted, yes, but Apple denies China-decryptable, so that’s speculation unless you have a good source.
floatingatoll 2021-08-18 20:59:11 +0000 UTC [ - ]
tandav 2021-08-18 17:26:42 +0000 UTC [ - ]
cyanite 2021-08-18 20:45:33 +0000 UTC [ - ]
ksec 2021-08-18 20:48:15 +0000 UTC [ - ]
floatingatoll 2021-08-18 20:56:13 +0000 UTC [ - ]
Apple's CSAM implementation protects the user against algorithm defects, does not expose the user to legal trouble until they have at least 30 human-verified visual matches of CSAM content, and occurs on-device using a small set of confirmed and audited signatures to ensure that CSAM scanning requires a central system only for verifying the unverified, blurred, positive matches.
I would hesitate to compare Steve Jobs' views on PRISM to his likely views on something that is so clearly opposed to it in so many ways. So I do not yet understand your viewpoint that Apple CSAM scanning and PRISM would have been treated equally by Steve. Help me understand?
devwastaken 2021-08-18 14:26:13 +0000 UTC [ - ]
Apple is content with putting in backdoors. There's collusion behind the scenes here; they know it's bad, but hey, it's Apple, it's run by manipulators and liars and forms its own cult.
atoav 2021-08-18 14:44:24 +0000 UTC [ - ]
This kind of stuff should not be evidence; if anything it should be an indicator of where to look.
hannasanarion 2021-08-18 19:03:02 +0000 UTC [ - ]
vegetablepotpie 2021-08-18 13:10:27 +0000 UTC [ - ]
Not to go off topic from your main point, but what did you move to?
ByteWelder 2021-08-18 13:43:47 +0000 UTC [ - ]
- Desktop: Manjaro Gnome, for that amazing macOS-like desktop. It even does the 3 finger swipe up to see all your apps with Apple's Touchpad.
- Phone: OnePlus 8T with microG variant of LineageOS (alternatives: Pixel line-up), because that allows me to still receive push notifications. I have over 40 apps and only found 1 that didn't work so far (which is Uber Eats, because they seem to require Google Advertisement ID). I pushed a modified Google Camera app to it, so my camera is better supported. I think only 3 out of 4 cameras are working, but I don't care.
- Watch: Amazfit GTR 2e with the official app. Alternatively it should work with Gadgetbridge if you don't want to use the official app ("Zepp"). Amazfit GTR 2 is a better option if you want it to have WiFi and want to store music on it.
drvdevd 2021-08-18 17:31:51 +0000 UTC [ - ]
hughrr 2021-08-18 13:13:21 +0000 UTC [ - ]
rootsudo 2021-08-18 13:40:39 +0000 UTC [ - ]
I may just go LineageOS.
Sanzig 2021-08-18 14:10:33 +0000 UTC [ - ]
The big advantage to CalyxOS or GrapheneOS versus Lineage is they support re-locking the bootloader, which means that verified boot still works - important if your phone gets swiped.
sphinxcdi 2021-08-19 14:17:08 +0000 UTC [ - ]
Many apps are also compatible with GrapheneOS and installing https://grapheneos.org/usage#sandboxed-play-services provides broader app compatibility.
rootsudo 2021-08-18 21:40:47 +0000 UTC [ - ]
zepto 2021-08-18 14:02:37 +0000 UTC [ - ]
This can’t be used for a targeted attack. It’s no different from the previous false posting making this claim.
zekrioca 2021-08-18 18:01:11 +0000 UTC [ - ]
zepto 2021-08-18 18:12:23 +0000 UTC [ - ]
Imagine if the local photo storage is not the same as iCloud Photo Library…
Imagine if anyone who cared could simply switch off iCloud Photo Library…
drvdevd 2021-08-18 17:12:28 +0000 UTC [ - ]
This. This is hard enough to explain but with all the PR around whether the feature is good or bad it's even more difficult. Putting this functionality on device is equivalent to modifying the hardware directly, because software is eating the world.
gjsman-1000 2021-08-18 14:16:32 +0000 UTC [ - ]
cyanite 2021-08-18 20:50:31 +0000 UTC [ - ]
ballenf 2021-08-18 15:21:53 +0000 UTC [ - ]
Could be either malicious or accidental (less likely presumably).
Shorel 2021-08-18 15:23:09 +0000 UTC [ - ]
Now, the hashes' database is not public, AFAIK it is in the hands of the FBI.
robertoandred 2021-08-18 14:59:34 +0000 UTC [ - ]
spullara 2021-08-18 17:47:20 +0000 UTC [ - ]
dgb99 2021-08-18 14:56:16 +0000 UTC [ - ]
elisbce 2021-08-18 10:22:34 +0000 UTC [ - ]
1. Dumping the iOS ecosystem and picking what? You think Android is better and won't have this? iOS is the strongest mobile system in terms of privacy protection available to date. Hell, the FBI doesn't even need to ask Google to decrypt an Android.
2. Theoretically you can target attacks against anyone. It is just a matter of effort. If you are a political target, they can already plant spyware around you to track and monitor you. They don't even need to break into your phone.
3. If you don't possess CSAM material and are not one of those targets, then you are not worth the effort to attack or monitor. They don't care. And to be honest, this is the best (and might be the only real) way to stay private.
user-the-name 2021-08-18 10:13:01 +0000 UTC [ - ]
This can attack that system itself, though, by overloading those humans with too much work looking at random noise, but that requires quite a large organised effort. It also requires getting a hold of actual blacklisted hashes, which I doubt anyone has, unless they have actual child pornography.
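For context on how collisions like the one in this post are produced: once the model has been extracted from the OS and re-implemented in a differentiable framework, a second preimage can be searched for by plain gradient descent. A rough sketch, assuming a PyTorch stand-in `model` that maps a (1, 3, 360, 360) image tensor to one raw logit per hash bit; the names, input size and loss here are my assumptions, not anything from Apple's code:

    import torch

    def find_collision(model, target_bits, steps=2000, lr=0.01):
        # Nudge a starting image until sign(model(x)) equals target_bits.
        x = torch.rand(1, 3, 360, 360, requires_grad=True)   # start from noise
        target = target_bits.float() * 2 - 1                 # {0,1} -> {-1,+1}
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x.clamp(0.0, 1.0)).squeeze()
            # Hinge-style loss: push every logit onto the target side of zero,
            # with a small margin so the match survives quantization.
            loss = torch.relu(0.1 - logits * target).sum()
            loss.backward()
            opt.step()
        return x.detach().clamp(0.0, 1.0)

Starting from an ordinary photo instead of noise, with a similarity penalty to that photo, is what turns grey-blob collisions into natural-looking ones.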
reuben_scratton 2021-08-18 10:17:53 +0000 UTC [ - ]
nomel 2021-08-18 16:48:16 +0000 UTC [ - ]
apetrovic 2021-08-18 12:50:37 +0000 UTC [ - ]
Apple is a greedy, ruthless, tone-deaf machine, but I don't think they're stupid.
notriddle 2021-08-18 14:08:17 +0000 UTC [ - ]
apetrovic 2021-08-18 15:00:16 +0000 UTC [ - ]
read_if_gay_ 2021-08-18 19:00:43 +0000 UTC [ - ]
short_sells_poo 2021-08-18 12:09:53 +0000 UTC [ - ]
FabHK 2021-08-18 12:18:36 +0000 UTC [ - ]
short_sells_poo 2021-08-18 14:32:16 +0000 UTC [ - ]
Are you actually arguing in good faith here at all? Because I can't see how a "visual derivative" that's nevertheless good enough for manual confirmation is any better than the source image?
cyanite 2021-08-18 20:52:37 +0000 UTC [ - ]
Are you? Because you just seemed to claim that it could match against innocent pictures of your naked children, but this tells me that you don’t understand that this system looks for known pictures, not for something that looks like naked children.
Edit: if you do, apologies, but then I’d say that Apple has suggested that it’s a low resolution version of the picture. This should be contrasted with server side scanning, where the server accesses all pictures fully.
stetrain 2021-08-18 13:28:06 +0000 UTC [ - ]
What is shown to the reviewer is a "visual derivative", which hasn't been clearly defined. A thumbnail image? Something with a censored section? We don't really know.
short_sells_poo 2021-08-18 14:30:17 +0000 UTC [ - ]
bengale 2021-08-18 15:25:55 +0000 UTC [ - ]
brokenmachine 2021-08-19 02:12:00 +0000 UTC [ - ]
Lots of people leave their cameras in burst mode.
robertoandred 2021-08-18 14:39:22 +0000 UTC [ - ]
short_sells_poo 2021-08-18 15:23:39 +0000 UTC [ - ]
stetrain 2021-08-18 16:04:50 +0000 UTC [ - ]
csande17 2021-08-18 10:18:51 +0000 UTC [ - ]
I've never personally seen or looked for any images like this, but if they weren't already proliferating online and widely available to criminals, why would we need to build an elaborate client-side scanning system to detect and report people who have copies of them?
user-the-name 2021-08-18 10:22:44 +0000 UTC [ - ]
tzs 2021-08-18 13:04:01 +0000 UTC [ - ]
Is there some darknet search engine that you access over Tor and type whatever illegal thing you seek such as "child porn" or "stolen credit cards" or "heroin" or "money laundering" into and it points you to providers of those illegal goods and services reachable over some reasonably secure channel?
Or is it one of those things where someone already using a particular child porn or stolen card or money laundering or heroin selling site has to put you in contact with them?
quenix 2021-08-18 13:36:21 +0000 UTC [ - ]
facebookwkhpilnemxj7asaniu7vnjj.onion
where the "Facebook" part is there for convenience and may likely not be present for actual illegal addresses. As such, you probably won't be able to find a CP website unless you actively brute force the entire onion address space.
I don't know how the sharing actually happens, but I don't think a real search engine for the dark net exists. There are websites that try (Ahlia, I think?) but they are super limited in scope.
I suspect addresses for drugs and CP are shared person-to-person.
colinmhayes 2021-08-18 13:40:54 +0000 UTC [ - ]
nabakin 2021-08-18 10:21:17 +0000 UTC [ - ]
atrus 2021-08-18 10:17:44 +0000 UTC [ - ]
hughrr 2021-08-18 10:18:21 +0000 UTC [ - ]
In the meantime they lock your account, which means your entire digital life stops dead until their review process is done.
No way do I accept any of this.
Also the hashes are on the device I understand and it’s not going to be that difficult to extract them.
FabHK 2021-08-18 10:27:45 +0000 UTC [ - ]
> the hashes are on the device I understand and it’s not going to be that difficult to extract them.
----
A Review of the Cryptography Behind the Apple PSI System, Benny Pinkas, Dept. of Computer Science, Bar-Ilan University:
> Do users learn the CSAM database? No user receives any CSAM photo, not even in encrypted form. Users receive a data structure of blinded fingerprints of photos in the CSAM database. Users cannot recover these fingerprints and therefore cannot use them to identify which photos are in the CSAM database.
https://www.apple.com/child-safety/pdf/Technical_Assessment_...
----
The Apple PSI Protocol, Mihir Bellare, Department of Computer Science and Engineering University of California, San Diego:
> users do not learn the contents of the CSAM database.
https://www.apple.com/child-safety/pdf/Technical_Assessment_...
----
A Concrete-Security Analysis of the Apple PSI Protocol, Mihir Bellare, Department of Computer Science and Engineering University of California, San Diego:
> the database of CSAM photos should not be made public or become known to the user. Apple has found a way to detect and report CSAM offenders while respecting these privacy constraints.
https://www.apple.com/child-safety/pdf/Alternative_Security_...
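To make the "blinded fingerprints" language above a bit more concrete, here is a toy discrete-log blinding sketch. It is emphatically not Apple's protocol (the real design uses elliptic curves, safety vouchers and threshold secret sharing, and hides even the match outcome from the device); it only illustrates why a device holding blinded values learns nothing about the underlying hashes without the server's key:

    # Toy DH-style blinding, for illustration only. Parameters are far too
    # simple for real use; every name here is a stand-in.
    import hashlib
    import math
    import secrets

    P = 2**521 - 1          # a Mersenne prime used as a toy modulus
    Q = P - 1               # order of the multiplicative group mod P

    def fingerprint(image_hash: bytes) -> int:
        # Map a perceptual hash to a group element.
        return int.from_bytes(hashlib.sha256(image_hash).digest(), "big") % P

    # Server: blind every database entry with a secret key k. Only the
    # blinded set ships to devices; without k it reveals nothing about h.
    k = secrets.randbelow(Q - 2) + 2
    database = [b"\x01" * 12, b"\x02" * 12]          # stand-in hashes
    blinded_db = {pow(fingerprint(h), k, P) for h in database}

    # Client: blind its own hash with a throwaway exponent r...
    def random_unit_exponent() -> int:
        while True:
            r = secrets.randbelow(Q - 2) + 2
            if math.gcd(r, Q) == 1:
                return r

    my_hash = b"\x02" * 12
    r = random_unit_exponent()
    blinded_query = pow(fingerprint(my_hash), r, P)  # sent to the server

    # ...the server applies k without ever seeing the hash itself...
    evaluated = pow(blinded_query, k, P)

    # ...and the client unblinds and checks membership.
    unblinded = pow(evaluated, pow(r, -1, Q), P)     # equals fingerprint(my_hash)^k
    print(unblinded in blinded_db)                   # True: it is in the database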
hughrr 2021-08-18 10:42:04 +0000 UTC [ - ]
nyuszika7h 2021-08-18 10:46:49 +0000 UTC [ - ]
Do you have a source for this, or are you just making things up?
user-the-name 2021-08-18 10:21:57 +0000 UTC [ - ]
hughrr 2021-08-18 10:22:52 +0000 UTC [ - ]
There’s some irony in that.
henrikeh 2021-08-18 10:30:00 +0000 UTC [ - ]
If you don’t understand the details of this, I’ll recommend this podcast episode which sums it up and discusses the implications https://atp.fm/443
hughrr 2021-08-18 10:39:11 +0000 UTC [ - ]
et2o 2021-08-18 11:16:49 +0000 UTC [ - ]
henrikeh 2021-08-18 10:49:29 +0000 UTC [ - ]
shmatt 2021-08-18 13:47:23 +0000 UTC [ - ]
Yes, I can't give you an open link, but I imagine at least hundreds of thousands of people have access to download them. Some companies with better security than others
It only takes 1 guy to sell it to an NSO-type company, who would then use it to target attacks for $$$
user-the-name 2021-08-18 18:05:00 +0000 UTC [ - ]
Grustaf 2021-08-18 13:23:01 +0000 UTC [ - ]
This technology doesn't make it even an ounce easier for Apple, or some evil three letter agency, to spy on you. If they want to see if you're a Trump supporter, anti-vaxxer or whatever, they already have access to all your photos and emails, on device. They even have OCR and classification of your photos. An intern could add a spying function in an afternoon.
if OCR(photo[i]).containsString("MAGA") { reportUser() }
argvargc 2021-08-18 14:48:08 +0000 UTC [ - ]
Introducing the people asking us to trust them with the responsibility of mass, automated crime accusations.
hannasanarion 2021-08-18 19:21:55 +0000 UTC [ - ]
The real world isn't as stupid as the computer one. The justice system is not deterministic and automatic. Nobody is going to look at this grey blob and go "welp, we have no choice but to throw you in prison forever"
argvargc 2021-08-18 20:24:23 +0000 UTC [ - ]
Apple is trying to automate and computerise a process that was not automated previously, apply it to a huge number of people, and with disastrous potential consequences should their wonderful design be found lacking.
And within days, they have already utterly failed to provide one of their own self-stated and incredibly obvious requirements. What else could possibly go wrong?
Further, if you think that because the first pre-image failure found (in days...) was a grey blob, all future cultivated collisions will only ever be grey blobs, well, good luck with that.
hannasanarion 2021-08-18 20:47:20 +0000 UTC [ - ]
The Apple system sends flagged photos to reviewers, and if the reviewers find them suspicious, they send them to the poor souls at NCMEC, who will compare the flagged photo with the original illegal photo that's supposedly a match, and inform law enforcement if they are in fact a match.
Nobody will get cops at their door because somebody sent them a grey blob image.
The process that's being "automated" is merely an automatic flag that initiates a several-step process of human review. Apple isn't rolling out robocops.
Seriously, what are the "disastrous consequences" that you envision? What is the sequence of events where a hash collision leads to any inconvenience whatsoever for a user?
argvargc 2021-08-19 08:06:01 +0000 UTC [ - ]
hannasanarion 2021-08-19 15:47:59 +0000 UTC [ - ]
There are multiple stages of human review in the process and a high threshold for activating it, because collisions are inevitable. The fact that collisions exist doesn't impact the safety of the product whatsoever.
I ask again, what is the sequence of events where a hash collision from a benign image can lead to any consequence at all for the user?
smlss_sftwr 2021-08-18 16:50:51 +0000 UTC [ - ]
spoonjim 2021-08-18 20:24:17 +0000 UTC [ - ]
cyanite 2021-08-18 20:32:51 +0000 UTC [ - ]
But this will be evident by reading and understanding the technical description.
spoonjim 2021-08-18 21:31:10 +0000 UTC [ - ]
shadowgovt 2021-08-18 11:36:39 +0000 UTC [ - ]
To my mind, that's far more disquieting than the risk of someone staging an elaborate attack on an enemy's device.
zug_zug 2021-08-18 13:25:56 +0000 UTC [ - ]
ptidhomme 2021-08-18 09:56:28 +0000 UTC [ - ]
Edit: well, that was a hint at Assange of course. Probably not true in general. So yes, I mean false accusations.
dang 2021-08-18 18:45:00 +0000 UTC [ - ]
reayn 2021-08-18 10:35:37 +0000 UTC [ - ]
hosteur 2021-08-18 11:34:38 +0000 UTC [ - ]
d33 2021-08-18 11:49:00 +0000 UTC [ - ]
coldtea 2021-08-18 12:17:26 +0000 UTC [ - ]
And besides the "victim" in the latter case there was a whole lot of diplomatic pressure and political commotion to set him up, with carrots and sticks and the aid of friendly satellite states.
_jal 2021-08-18 12:36:33 +0000 UTC [ - ]
Normal humans are not required to prove anything in order to think them.
coldtea 2021-08-18 12:45:21 +0000 UTC [ - ]
No, but decent human beings are required to not think them true just because they're out there...
whimsicalism 2021-08-18 13:29:39 +0000 UTC [ - ]
this is true whether you believe or disbelieve a rape claim, because if you disbelieve it you typically must believe the corollary that the victim was committing the crime of lying to the police.
JohnWhigham 2021-08-18 14:34:24 +0000 UTC [ - ]
Griffinsauce 2021-08-18 15:24:23 +0000 UTC [ - ]
coldtea 2021-08-18 15:40:20 +0000 UTC [ - ]
And even if it didn't hold in my country, it's still a tenet I think is fair.
So, I didn't intend to invoke the legal requirements, but what should be the "right" thing (and what I think should also be legally mandated, even if it's not).
xpe 2021-08-18 14:28:38 +0000 UTC [ - ]
I respect this ethical claim; however, there is considerable variation in how legal systems operate in this domain.
enriquto 2021-08-18 12:12:52 +0000 UTC [ - ]
this is not how the concept of accusation works
brigandish 2021-08-18 10:38:12 +0000 UTC [ - ]
I also think that those downvoting you might've applied the principle of charity and taken the best interpretation of what you've written, or at least asked.
sneak 2021-08-18 10:39:22 +0000 UTC [ - ]
This is a fundamental tenet of human rights in western, small-l liberal free societies.
The fact that this is controversial these days is literally insane to me.
The consequences of throwing this fundamental system out the window is that you get the sort of nonsense that happened with Assange, where he was literally never even charged yet completely and thoroughly discredited due to headlines containing the word "rape" when no such thing ever happened.
(If you have been misled to believe otherwise, I encourage you to read the direct statements of the women involved.)
DebtDeflation 2021-08-18 13:09:04 +0000 UTC [ - ]
Ensorceled 2021-08-18 12:10:30 +0000 UTC [ - ]
No, they need to be treated as unproven, a very critical difference.
Just to be clear, witness testimony, including testimony FROM THE VICTIM, is evidence of the crime. Just for some reason, in rape cases, we go all wonky with this principle.
technothrasher 2021-08-18 12:24:31 +0000 UTC [ - ]
Right. I have a jar of gumballs and tell you I think there is an even number of gumballs in it. Do you believe me? If you don't, does that mean you believe there is in fact an odd number of gumballs in it? No, you believe neither that there is an odd number nor an even number, because you just don't know. For a criminal accusation, your initial belief should not be guilty or innocent, it should be, "I just don't know".
Ensorceled 2021-08-18 12:44:59 +0000 UTC [ - ]
EDIT: judging by the downvotes, they are also making the leap that the original gumball counter must be lying ...
ralfn 2021-08-18 12:28:06 +0000 UTC [ - ]
2. The state is the only authorized monopoly of violence and they should treat unproven and untrue as identical, and the only place where that decision is made is in a courtroom.
3. The 'believe the victims' activists however are rightfully (IMHO) suggesting breaking principle #2, because there is institutional and systemic suppression of the rule of law being applied properly. I would consider this civil disobedience.
4. The state needs to get its act together and administer the law properly. Only through reform can it regain the legitimacy that is required for #2.
5. Those reforms should focus on racism, sexism and classism. It should focus on what part of the law is too ambiguous for either law enforcement and the judiciary branch.
6. This might include actually blinding the court, i.e. offering only verifiable facts that are admissible and cannot be used as proxies for race, gender or socioeconomic position.
If you don't think the legal system has a problem ask yourself why every lawyer recommends the defendant to wear a suit to court. How could it matter if the process was assumed to be without bias.
bscphil 2021-08-18 13:40:29 +0000 UTC [ - ]
This is completely wrong. If the state treated all accusations as untrue prior to conviction, it would not send armed men to haul you to prison, bar you from release unless you can bail yourself out (or sometimes not at all), or have a prosecutor charge you with a crime.
This is no petty distinction. In order to function in its judiciary role, the state in fact must distinguish between plausibly true accusations and not plausible accusations, and must treat certain plausibly true accusations as "unproven" but not untrue, and take steps to make sure the accused does not flee into another country or commit further crimes.
ralfn 2021-08-18 18:59:15 +0000 UTC [ - ]
Although arresting and jailing someone is not justice being administered, it is facilitating justice and fact-finding, and it is itself indeed an act of violence.
brigandish 2021-08-19 03:09:59 +0000 UTC [ - ]
whimsicalism 2021-08-18 13:32:26 +0000 UTC [ - ]
Sebb767 2021-08-18 12:18:47 +0000 UTC [ - ]
Ensorceled 2021-08-18 12:20:53 +0000 UTC [ - ]
ziml77 2021-08-18 12:44:43 +0000 UTC [ - ]
Ensorceled 2021-08-18 12:50:22 +0000 UTC [ - ]
There is a reason the verdict is "not guilty" instead of "innocent". After a not guilty verdict the legal system still does not assume the accuser was making a false accusation.
Sebb767 2021-08-18 12:56:03 +0000 UTC [ - ]
Both verdicts exist, actually; the latter one is just far rarer (since it is both harder to prove and usually not what is argued about).
Ensorceled 2021-08-18 16:38:18 +0000 UTC [ - ]
Ironically, a provable case of false rape accusation might result in this verdict.
Sebb767 2021-08-18 12:54:54 +0000 UTC [ - ]
That's also not what I was trying to say. I only explained why testimony is far more doubted in rape cases than in most other cases. In general, I agree with you that accusations should be treated as unproven, so neither false nor true; we have a justice system to give a final verdict.
deelowe 2021-08-18 12:52:16 +0000 UTC [ - ]
Sorry, it needs to be said. It's easier to take one side of an argument when it's less likely to affect you personally.
whimsicalism 2021-08-18 13:34:11 +0000 UTC [ - ]
No doubt some are false, but I have not seen any evidence suggesting the majority are false.
Ensorceled 2021-08-18 12:54:46 +0000 UTC [ - ]
deelowe 2021-08-18 14:54:42 +0000 UTC [ - ]
This is why due process is important.
Ensorceled 2021-08-18 16:39:44 +0000 UTC [ - ]
brigandish 2021-08-19 03:03:09 +0000 UTC [ - ]
kergonath 2021-08-18 14:14:34 +0000 UTC [ - ]
Witness testimony on its own is circumstantial evidence in general. Witnesses are very unreliable.
Ensorceled 2021-08-18 18:56:26 +0000 UTC [ - ]
I'm saying the accusation, the witness testimony, is NOT assumed false, the accusation is simply unproven.
If the jury/judge believes the testimony, they may convict on that testimony. Then the accused is proven guilty.
That the accusation started off being false and then magically became true when the jury believed it is the bonkers belief here.
The fact that witness testimony is unreliable is a big part of WHY the accused is presumed innocent.
weimerica 2021-08-18 12:26:28 +0000 UTC [ - ]
People lie, people have memory issues. Complicating things is the lovely issue modernity has brought, of two adults getting intoxicated and fornicating followed by regret and rape accusations sometime later. Then we have weaponized accusations - just like we have keyboard warriors making false police reports to have their opposition receive a SWAT team visit, we have people that will make false accusations to get revenge after a perceived slight.
What is wonky to me is the people who hear an accusation and treat it like the Gospel, destroying the accused and depriving them of any possible justice. This is what is truly, absolutely, criminally insane.
skhr0680 2021-08-18 12:24:36 +0000 UTC [ - ]
In a criminal trial:
Victim: This person did it
Defendant: No I didn't
Not guilty
Ensorceled 2021-08-18 12:47:21 +0000 UTC [ - ]
chitowneats 2021-08-18 13:39:28 +0000 UTC [ - ]
cma 2021-08-18 11:00:17 +0000 UTC [ - ]
Sebb767 2021-08-18 12:20:36 +0000 UTC [ - ]
Ensorceled 2021-08-18 12:05:35 +0000 UTC [ - ]
There are far too many people who assume/assert false accusations of rape are the norm for the principle of charity to apply here.
brigandish 2021-08-19 04:36:14 +0000 UTC [ - ]
OJFord 2021-08-18 11:06:31 +0000 UTC [ - ]
account42 2021-08-18 11:36:46 +0000 UTC [ - ]
Which is pretty ironic, considering that kind of reaction is exactly what the comment is about.
da_big_ghey 2021-08-18 19:45:27 +0000 UTC [ - ]
thulle 2021-08-18 11:52:10 +0000 UTC [ - ]
pthrowaway9000 2021-08-18 14:03:31 +0000 UTC [ - ]
sandworm101 2021-08-18 13:12:53 +0000 UTC [ - ]
Smug mode activated.
farmerstan 2021-08-18 14:06:58 +0000 UTC [ - ]
rootsudo 2021-08-18 13:21:30 +0000 UTC [ - ]
Not an expert, but I do enjoy how quickly this was picked apart and disproven.
Ajedi32 2021-08-18 13:30:23 +0000 UTC [ - ]
guywhocodes 2021-08-18 13:26:43 +0000 UTC [ - ]
Traubenfuchs 2021-08-18 13:41:43 +0000 UTC [ - ]
wisienkas 2021-08-18 11:26:08 +0000 UTC [ - ]
enedil 2021-08-18 11:43:02 +0000 UTC [ - ]
Grollicus 2021-08-18 11:58:21 +0000 UTC [ - ]
In certificates, md5 - and sha1 - were used quite some time after they were known to be weak; I suspect OP was thinking of that.
This article seems to give a good summary what happened with sha1, mentions md5 in passing and links the related chromium issue: https://konklone.com/post/why-google-is-hurrying-the-web-to-...
Retr0id 2021-08-18 12:04:05 +0000 UTC [ - ]
Even if they were, it wouldn't matter, because NeuralHash is non-cryptographic by design.
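A quick way to see what "non-cryptographic by design" means: a perceptual hash is built to stay stable under small changes, which is exactly the property a cryptographic hash must not have. A toy illustration using the classic 8x8 average hash (aHash) rather than NeuralHash itself; `photo.jpg` is assumed to be an ordinary RGB image:

    import hashlib
    from PIL import Image

    def average_hash(img: Image.Image) -> int:
        # Classic 64-bit aHash: downscale, grayscale, threshold at the mean.
        small = img.convert("L").resize((8, 8), Image.LANCZOS)
        pixels = list(small.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    img = Image.open("photo.jpg")
    tweaked = img.copy()
    tweaked.putpixel((0, 0), (255, 255, 255))    # change a single pixel

    # Any byte-level change flips the cryptographic hash of the pixel data...
    print(hashlib.sha256(img.tobytes()).hexdigest())
    print(hashlib.sha256(tweaked.tobytes()).hexdigest())

    # ...while the perceptual hashes typically differ in at most a bit or two.
    print(bin(average_hash(img) ^ average_hash(tweaked)).count("1"))

That robustness to small edits is the whole point of a perceptual hash, and it is also why collision resistance in the cryptographic sense was never on the table.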
wisienkas 2021-08-19 08:34:22 +0000 UTC [ - ]
Retr0id 2021-08-19 14:30:27 +0000 UTC [ - ]
zaptheimpaler 2021-08-18 10:27:52 +0000 UTC [ - ]
> To put this in perspective, in 2019 Facebook reported 65 million instances of CSAM on its platform, according to The New York Times. Google reported 3.5 million photos and videos, while Twitter and Snap reported “more than 100,000,” Apple, on the other hand, reported 3,000 photos.
ALL of those services are already scanning all your photos server side, implying complete access to the photos for other purposes.
Apple went above and beyond what anyone else does and moved the scanning to be client side! I can't believe they are getting shit for this. They are doing MORE than anyone else to protect your privacy, while still complying with federal laws. If you object to this, you should object to all cloud-based photo storage services, and to the federal laws. Apple not only made it more private, but also transparently announced what they do, and they are getting shit for it.
[1] https://www.engadget.com/apple-child-safety-csam-detection-e...
kmbfjr 2021-08-18 11:08:34 +0000 UTC [ - ]
I worked with a guy who was convicted of this very crime. Do you know what the FBI installs on his computer and phone? This kind of monitoring.
I am not going to be treated like I am on parole.
raxxorrax 2021-08-18 15:23:36 +0000 UTC [ - ]
WA 2021-08-18 10:40:35 +0000 UTC [ - ]
Now they can’t even use their phone for storing photos.
The thing with "only when iCloud is enabled" is only for now. It's trivial to make scanning all photos the default in a future version.
nyuszika7h 2021-08-18 10:53:28 +0000 UTC [ - ]
Sure, this system still has the potential to be abused, but if I had to choose between "end-to-end encrypted with local scanning before upload, which can maybe be abused but has a higher chance of being noticed if it is, and can be disabled with a jailbreak if you're really paranoid" and "not end-to-end encrypted, Apple can inspect my whole photo library whenever they desire, which can go completely unnoticed due to gag orders forcing them to hand data over to governments, not even requiring a software update, and you are completely powerless to do anything about it except for not using iCloud Photos at all", I would definitely choose the former.
sodality2 2021-08-18 12:07:40 +0000 UTC [ - ]
Facebook et al. scan server side because they have liability. Tell me why Apple thinks they need to scan locally? They don't have liability, which is even worse. It means they're doing it for other reasons. Plenty of people read that as "wow, Apple is so kindhearted they did this without the business incentive". And you know what? They didn't. They might even have good intentions. But, as the saying goes, the path to hell is paved with good intentions.
WA 2021-08-18 11:15:53 +0000 UTC [ - ]
I choose no cloud storage plus a device that doesn't scan my data.
You argue for the former, which isn't implemented, vs the latter, which I assume is reality anyway for all cloud storage providers.
We can have this discussion if Apple implements the former. If this was the goal, they would’ve announced it like you suggested.
Hackbraten 2021-08-18 11:23:38 +0000 UTC [ - ]
Except that if I’m really paranoid, I’m not going to skip security updates. And those updates will probably render known jailbreak exploits useless.
CrimsonRain 2021-08-18 12:59:25 +0000 UTC [ - ]
[1] 18 US Code 2258A
[2] 18 US Code 2258A -> F.1, F.2, F.3
dang 2021-08-18 19:11:40 +0000 UTC [ - ]
Apple defends anti-child abuse imagery tech after claims of ‘hash collisions’ - https://news.ycombinator.com/item?id=28225706
Convert Apple NeuralHash model for CSAM Detection to ONNX - https://news.ycombinator.com/item?id=28218391