Hugo Hacker News

Apple’s device surveillance plan is a threat to user privacy – and press freedom

35803288 2021-08-18 22:51:01 +0000 UTC [ - ]

This technology will soon be out of Apple’s control. Higgins correctly highlights the immense pressure Apple will get from governments and other actors to bend the technology and use it for something other than CSAM. It will happen; people are probably already thinking about how to apply such pressure. Sooner or later Apple will cave, and they will have only themselves to blame when freedom supporters in Sudan or LGBTQ activists in Saudi Arabia are jailed. The issue here is not where to draw the line, but who will draw it (spoiler: not Apple).

greyface- 2021-08-19 00:27:55 +0000 UTC [ - ]

Exactly. Governments tend to accept "we lack the technical capability to comply with your request" (unless said capability is legally mandated, e.g. so-called "lawful intercept"). They do not tend to accept "we possess the technical capability to comply with your request but choose not to do so".


35803288 2021-08-18 22:58:19 +0000 UTC [ - ]

Ten years from now we will read a leaked document showing that US authorities obtained FISA Court warrants (or similar) and made Apple scan for things other than CSAM. It has already happened. Court orders are easier than hacks once you iron out the legal opinions.

sparker72678 2021-08-18 23:04:09 +0000 UTC [ - ]

Even if that's true, this method at least drops some part of the process on the client where it can be inspected, vs. all scanning happening in the cloud, entirely behind closed doors.

Does that mean nothing bad can happen? No. But it does mean that when something changes, we at least know something changed.

jtbayly 2021-08-18 23:51:05 +0000 UTC [ - ]

The database will change with every update. It’s not like the NCMEC DB already has all possible CSAM. Every time they find more, they add it to the DB. Thus, it will probably change every time Apple pushes an OS update.

Lamad123 2021-08-19 08:37:27 +0000 UTC [ - ]

Ten years seems like a lot to ask of the US government's patience.

ohazi 2021-08-18 23:02:18 +0000 UTC [ - ]

Why does anyone even assume that a bad actor would need to apply pressure?

These databases are unauditable by design -- all they'd need to do is hand Apple their own database of "CSAM fingerprints collected by our own local law enforcement that are more relevant in this region" (filled with political images of course), and ask Apple to apply their standard CSAM reporting rules.

That's it... Tyranny complete.

tzs 2021-08-19 00:01:00 +0000 UTC [ - ]

Apple does not need to be able to audit the database to discover that it is not a CSAM database. Matches are reviewed by Apple before being reported to the authorities, so they would see that they are getting matches on non-CSAM material.

They wouldn't necessarily be able to tell whether it was a false positive matching real CSAM or a true positive matching illegitimate material that a government slipped into the database to misuse it, but they don't need to know which. They just need to see that it isn't CSAM and so doesn't need to be reported.

gambiting 2021-08-19 00:14:59 +0000 UTC [ - ]

That's even worse - so now Apple is deciding whether to report something even if it matches the data provided by the government. It's not their role to judge the contents, only whether the match is correct or not; otherwise, even with actual CSAM content, are they going to be making judgement calls? What if the system matches loli content, which I imagine is in that database but legal in some places? Are they going to get the user's location(!!!!) to see whether it's legal there or not, or... guess? Or what? Because the only way to make this system work is to report every match and then let actual law enforcement figure out whether it's illegal or not.

So yeah, the entire system is fucked and shouldn't exist. Apple is not law enforcement and them saying "we'll just prescreen every submission" is actually worse, not better.

tzs 2021-08-19 00:45:11 +0000 UTC [ - ]

> It's not their role to judge the contents, only whether the match is correct or not.

Which is what they would be doing.

Some government gives Apple a purported CSAM hash database, which Apple only accepts because it is a CSAM database. An image gets a match. Apple looks at it and it is not CSAM. Therefore, unless the government lied to them about the database, it must be a false positive and gets rejected as an incorrect match.

The rejection is not because Apple judged the content per se. They just determined that it must be a false positive given the government's claims about the database.

gambiting 2021-08-19 06:50:55 +0000 UTC [ - ]

My point was: what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to authorities in a state where it isn't illegal, they have just suspended your account and reported you to the authorities without any reason. So like I said, you get into this trap of: do Apple employees start judging whether the match "should" count? What if a picture isn't actually pornographic but made it into the database (say a child in underwear; maybe it's there because of a connection to an abuse case, but it isn't a picture of abuse per se)? Again, is some random person at Apple going to be making judgement calls about the validity of matches against a government-provided database? Because again, I don't believe this can ever work. Maybe those are edge cases, sure, but my point is that as soon as you allow some Apple employee to make a judgement, you are introducing new risks.

alwillis 2021-08-19 07:15:35 +0000 UTC [ - ]

> My point was: what if you have content in there that is CSAM in some places but isn't in others (for instance, drawings)? If Apple employees report it to authorities in a state where it isn't illegal, they have just suspended your account and reported you to the authorities without any reason.

The only CSAM Apple will flag has to come from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.

And since no credible child welfare organization is going to have CSAM that matches stuff from the worst places, there's no simple or obvious way to get them to match.

gambiting 2021-08-19 08:11:56 +0000 UTC [ - ]

>>The only CSAM Apple will flag has to come from multiple organizations in different jurisdictions; otherwise, those hashes are ignored.

Have they actually said they would do that? I was under the impression that they just use the database of hashes provided by the American authority on prevention of child abuse.

>>And since no credible child welfare organization is going to have CSAM that matches stuff from the worst places

I'm not sure I understand what you mean, can you expand?

zimpenfish 2021-08-19 09:29:31 +0000 UTC [ - ]

> Have they actually said they would do that?

In [1], "That includes a rule to only flag images found in multiple child safety databases with different government affiliations — theoretically stopping one country from adding non-CSAM content to the system."

[1] https://www.theverge.com/2021/8/13/22623859/apple-icloud-pho...

judge2020 2021-08-18 23:07:39 +0000 UTC [ - ]

https://www.neowin.net/news/apple-says-its-new-child-safety-...

Yes, they could change it at any time. But this was always the case with all software that implements OTA updates.

sparker72678 2021-08-18 23:11:40 +0000 UTC [ - ]

1) The DB must be updated on both the server and the client; Apple's solution essentially requires the DBs to match. So you can't update the DB without everyone knowing it was changed (a sketch of what that could look like follows at the end of this comment).

2) Apple has trillions of dollars to lose by selling out to a shitty change like that.

Do those things mean nothing bad can come of it? Hell no. But right now we have FISA courts and silent warrants sucking in data without anyone knowing or being able to talk about it. It's not like the status quo is a panacea.

Apple's approach creates a possibility of slowing down the politics already trying to move against E2EE data.

This is a political fight, and if we all act like ideologues we're going to lose it all in the end.
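Sketch for point 1 above: a rough illustration of how "the DBs must match" could be made checkable, assuming a simple flat digest over the shipped hash entries. The function names and construction here are illustrative only, not Apple's actual design (which reportedly publishes a root hash of the encrypted database for comparison):

    import hashlib

    def database_root_hash(blinded_hashes):
        # Hash each entry, sort, then hash the concatenation. A real system
        # would likely use a Merkle tree so individual entries can be audited,
        # but a flat digest is enough to show the idea.
        entry_digests = sorted(hashlib.sha256(h).digest() for h in blinded_hashes)
        return hashlib.sha256(b"".join(entry_digests)).hexdigest()

    # The device ships with a database; the vendor publishes the digest it expects.
    on_device_db = [b"\x01" * 32, b"\x02" * 32, b"\x03" * 32]   # placeholder entries
    published_root = database_root_hash(on_device_db)

    # Any silent change to the shipped database changes the digest, so anyone
    # comparing the on-device value with the published one would notice.
    tampered_db = on_device_db + [b"\x04" * 32]
    assert database_root_hash(tampered_db) != published_root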

ScoobleDoodle 2021-08-18 23:33:30 +0000 UTC [ - ]

Apple's move gives away another piece of the puzzle of mass monitoring and surveillance.

Apple's move brings FISA courts and silent warrants one step closer to having access beyond what we send over the network, to what now resides on our phones.

Apple is giving mass surveillance a foothold for living on our personal phones and monitoring the data there.

Apple has created the back door for increased surveillance: https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-w...

Tell Apple don't scan our phones: https://act.eff.org/action/tell-apple-don-t-scan-our-phones

xkcd-sucks 2021-08-18 23:27:41 +0000 UTC [ - ]

> Apple's approach creates a possibility of slowing down the politics already trying to move against E2EE data.

This is "appeasement" or "Danegeld", and we all know how that works out in the end

TimTheTinker 2021-08-19 00:44:51 +0000 UTC [ - ]

> 2) Apple has trillions of dollars to lose by selling out to a shitty change like that.

Apple has trillions to lose by building this system in the first place. All it takes is one court order to do non-CP scanning with the existing system.

zepto 2021-08-19 14:52:51 +0000 UTC [ - ]

It really doesn’t have much to lose. None of the non-tech people I have spoken to think of this as a bad thing.

TimTheTinker 2021-08-19 16:24:18 +0000 UTC [ - ]

My family, my wife's family, and much of our extended families are all privacy conscious (we chat over Signal, just like we close our window blinds at night)... and they are all alarmed over this and ready to ditch Apple if they don't change their stance on this.

All it takes is a wiretap warrant and Apple would have to scan on-device pictures and iMessages for whatever the wiretap says. This is true even if the phone has iCloud switched off (most of us already do), since all Apple has to do is change a Boolean variable's value, or something similar requiring no creative effort (and hence legally coercible).
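To make the "Boolean variable" point concrete, here is a purely hypothetical sketch (none of these names are Apple's code); it only illustrates how narrow the gap is between "scan what's queued for iCloud" and "scan everything":

    ICLOUD_PHOTOS_ENABLED = False   # hypothetical feature flag, not Apple's code

    def maybe_scan(photo, scan):
        # Stated policy: only photos queued for iCloud upload are scanned.
        if ICLOUD_PHOTOS_ENABLED:
            scan(photo)
        # Removing the single condition above (or forcing the flag to True)
        # turns the same machinery into an unconditional on-device scan --
        # a change requiring essentially no creative effort.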

zepto 2021-08-19 17:07:16 +0000 UTC [ - ]

> All it takes is a wiretap warrant and Apple would have to scan on-device pictures and iMessages for whatever the wiretap says.

This is total and utter bullshit. It is a complete misunderstanding of how the system works.

If you and your family think this is true, then of course you are alarmed.

jtbayly 2021-08-18 23:42:45 +0000 UTC [ - ]

Apple can’t be selling out if they literally have no way of knowing what is in the database of illicit content.

That is why they said that the content has to be in two separate nations’ databases. Of course, I’ve seen no information about which other nation’s DB they would use. And without another nation, would there be no content in the database? I doubt it.

Regardless, it’s a moot issue, since we already know that the Five Eyes all conspire and would gladly add content for each other, the same as several Middle East nations.

sparker72678 2021-08-18 23:49:25 +0000 UTC [ - ]

Five eyes already does whatever the heck they want, and this isn’t going to change that.

jtbayly 2021-08-19 00:52:30 +0000 UTC [ - ]

It very well may change it.

Right now they have no ability to scan every photo on every iOS device for “objectionable” content (as defined by them on that day, based on their mood). But soon they will. All they have to do is add photos to the NCMEC database and an equivalent DB in another country.

salawat 2021-08-18 23:50:11 +0000 UTC [ - ]

>1) The DB must be updated on both the server and the client. Apple's solution requires the DBs to match, essentially. So you can't update the DB without everyone knowing it was changed.

Yes... because it is impossible for different servers to be configured as backends depending on where handsets are destined to be sold. It's not like it's possible to quickly whip up a geolocation-aware API that can swap things out on the fly. C'mon. This isn't even hard. These are all trivially surmountable problems. The one thing standing in the way of already having done this was that there was no way in hell anyone would have been daft enough to even try something like this with a straight face. For heaven's sake, even Hollywood lampshaded it with that "using cell phones as sonar" shtick in The Dark Knight or whatever it was.

>This is a political fight, and if we all act like ideologues we're going to lose it all in the end.

The fight was lost the moment someone caved to "think of the children". Every last warning sign left all over the intellectual landscape was ignored: history, risk, human nature, any semblance of good sense, all ignored.

Honestly, I'm sitting here scratching my head wondering if I took a wrong turn or something 20 years ago. This isn't even close to the place I lived anymore.

ScoobleDoodle 2021-08-18 23:34:38 +0000 UTC [ - ]

raxxorrax 2021-08-19 10:47:30 +0000 UTC [ - ]

Western governments had enough room to choose a different way. Instead we are captive to the destructive surveillance ambitions of the war on terror. Do you think those defense and security contractors will ever go away? That they won't summon dangers if they have nothing else to do?

Here governments could have opted to provide a contrast, but they neglected the opportunity.

simondotau 2021-08-19 00:35:48 +0000 UTC [ - ]

I realise that you're talking about governments globally, but when it comes to the United States, government pressure cannot compel Apple to expand the on-device searching, because that would be an unequivocal violation of the 4th Amendment of the US Constitution: a search of your private property compelled by the Government.

(After a photo is uploaded to a cloud service, a search of photos stored on servers doesn't enjoy the same 4A protection as this falls under the so-called "third party doctrine".)

(Apple searching for CSAM is also not a 4A violation because it was Apple's free choice as a private company to do so, and you will have agreed to it as part of the Terms of Service of the next version of iOS.)

cyanite 2021-08-18 22:59:42 +0000 UTC [ - ]

Since it’s speculation at this point, I’d say we deal with it if and when it happens. I also don’t see why this concern doesn’t apply even more strongly to cloud-side scanning, which companies already do. Is that out of control? Is there any evidence of that?

jjcon 2021-08-18 23:51:51 +0000 UTC [ - ]

>I’d say we deal with it if and when it happens.

By the time we know that it is happening it will be far more difficult to do something about it (see Snowden and the Patriot Act).

zepto 2021-08-19 14:55:33 +0000 UTC [ - ]

Now is the time to do something about it, but aiming at Apple is just an ineffective approach.

The only protection against future abuse, whether or not this mechanism is deployed, is a legal system that cares.

JohnFen 2021-08-18 23:08:49 +0000 UTC [ - ]

It's worse because the cloud is someone else's computer. When you're in someone else's house, you go by their rules.

Your devices, however, are your own house.

sparker72678 2021-08-18 23:20:59 +0000 UTC [ - ]

So don't use iCloud Photo. The tech literally can't work without the server component.

99mans 2021-08-18 23:43:39 +0000 UTC [ - ]

This statement captures everything wrong with how Apple's privacy is interpreted. The problem isn't iCloud Photos - it's that Apple can install ANY software they want at ANY time for ANY reason on ANY of their devices without your permission, consent, or even notice.

eternalban 2021-08-18 23:19:21 +0000 UTC [ - ]

> the cloud is someone else's computer.

So by your logic, if I use a post office box, that means the postal service has the right to open all my packages?

Jtsummers 2021-08-18 23:31:12 +0000 UTC [ - ]

The USPS is restricted by the Constitution and other legislation on what it can do.

From: https://www.uspis.gov/wp-content/uploads/2019/05/USPIS-FAQs....

> 4. Can Postal Inspectors open mail if they feel it may contain something illegal? First-Class letters and parcels are protected against search and seizure under the Fourth Amendment to the Constitution, and, as such, cannot be opened without a search warrant. If there is probable cause to believe the contents of a First-Class letter or parcel violate federal law, Postal Inspectors can obtain a search warrant to open the mail piece. Other classes of mail do not contain private correspondence and therefore may be opened without a warrant.

Companies hosting your data aren't similarly restricted (though if the US government wants access, then they'd again be restricted by the Constitution and legislation). A company's ability to look at whatever you give them is limited only by your contract with them and by the technical constraints created by how you share it (upload encrypted files where they don't have the key? they can't really do much). They may have some legal restrictions on some kinds of data, but it's not going to be uniform across the globe, so you'll have to take care with which companies you choose to host your unencrypted data.
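As a minimal sketch of the "upload encrypted files where they don't have the key" option, using the third-party cryptography package (file names here are placeholders):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this key off the cloud
    with open("photo.jpg", "rb") as f:   # placeholder local file
        ciphertext = Fernet(key).encrypt(f.read())

    with open("photo.jpg.enc", "wb") as f:
        f.write(ciphertext)
    # Upload photo.jpg.enc; without the key, the host "can't really do much".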

LdSGSgvupDV 2021-08-19 01:41:46 +0000 UTC [ - ]

It seems like the big tech companies are taking the place of courts in some domains. They draw the lines and enforce the rules. What's more, it is almost impossible to know where the real lower and upper bounds of those rules lie. There are already examples (e.g. an app being banned for unknown or vague reasons).

naravara 2021-08-18 23:09:29 +0000 UTC [ - ]

> Higgins correctly highlights the immense pressure Apple will get from governments and other actors to bend the technology and use it for something else than csam. It will happen, people are probably already thinking how to apply such pressure. Sooner or later Apple will cave in and they will have only themselves to blame when freedom supports in Sudan or LGBTQ activists in Saudi Arabia will be jailed.

I'm having trouble seeing how Apple's actions make this any more or less likely. It's not like matching photos is some esoteric concept that no repressive government has ever thought of before. It's not even like it's particularly hard. Apple's implementation is the most privacy sensitive way of doing it, but if the rules were going to come down they were going to come down, and they'd be implemented in less privacy sensitive ways.

simondotau 2021-08-19 00:55:20 +0000 UTC [ - ]

Apple can also choose to exit that country. There aren't many markets for which Apple would even consider risking its global reputation. US, China, Europe, maybe the UK. That's about it.

If Saudi Arabia or Sudan tried to turn the screws, the business case for Apple is absolutely clear-cut: they leave. This isn't even up for debate. There's far more at risk globally than there is to gain domestically from compliance.

Not only would they avoid serious damage to their global reputation (something they'll be extremely sensitive to, as the last two weeks have taught them); it would also represent a massive opportunity for Apple to earn weeks of free media coverage that aligns with their security narrative.

robertoandred 2021-08-19 00:48:59 +0000 UTC [ - ]

If Sudan or Saudi Arabia wants to arrest people, there are much easier ways. What “freedom” or “LGBTQ” photos would you even search for?

sparker72678 2021-08-18 22:56:44 +0000 UTC [ - ]

I'd love to be a privacy purist on this, but the fact is that there is not going to be any going back on CSAM content scanning of images in iCloud.

So either we take this approach, or end up with a worse one, like pure on-cloud scanning with no transparency whatsoever.

I read the technical paper today, and this solution is super clever, well considered, and checks just about every box a crypto-solution-phile would want. I don't know how it gets better than this. (Technical note: Apple's solution on this requires client and server communication to flag an image, which is part of the reason it's totally off if you turn off iCloud Photo).
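For anyone curious what "requires client and server communication" buys: Apple's public write-up describes a threshold scheme where the server can only open the match vouchers once roughly 30 of them exist. A toy Shamir-style sketch of that threshold property (the 30 figure is from public reporting; everything else here is illustrative, not Apple's actual construction):

    import random

    P = 2**127 - 1          # a Mersenne prime, a toy finite field
    THRESHOLD = 30          # roughly the match count reported for Apple's design

    def make_shares(secret, n, k=THRESHOLD):
        # Random degree-(k-1) polynomial with the secret as constant term.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret,
        # but only once at least THRESHOLD distinct shares are available.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    account_key = random.randrange(P)          # stands in for the voucher decryption key
    shares = make_shares(account_key, n=100)   # one share per matching voucher

    assert reconstruct(shares[:THRESHOLD]) == account_key       # 30 shares: key recovered
    assert reconstruct(shares[:THRESHOLD - 1]) != account_key   # 29 shares: still hidden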

If this opens the door to Apple offering E2EE on all photos in iCloud (which it could), then this ends up being a big win over the status quo, in my opinion. And even if not, there are far, far worse alternatives.

[Edited for spelling].

joe_the_user 2021-08-19 00:12:06 +0000 UTC [ - ]

> So either we take this approach, or end up with a worse one, like pure on-cloud scanning with no transparency whatsoever.

With cloud computing, I can choose my cloud provider and I can upload encrypted files. On-device scanning is evil because it's a move to create a computing architecture entirely outside the user's control.

carom 2021-08-19 00:49:21 +0000 UTC [ - ]

This is disabled if you choose to not use iCloud.

hda2 2021-08-19 03:00:49 +0000 UTC [ - ]

For now.

almostdigital 2021-08-18 23:13:16 +0000 UTC [ - ]

> Apple's solution on this requires client and server communication to flag an image

Well of course it does; it would be a very pointless surveillance system if it scanned for stuff and then threw away the result.

I've also read the technical papers and all other info put out by Apple so far, and my takeaway is that the system is generic: it's not designed specifically for images but can be used with any old fingerprint, and arbitrary data can be uploaded in the "security vouchers".

It does not take much imagination to see how it could be used to mass-surveil E2EE communications: hook it into Siri's intent system, add a bunch of banned intents, and throw the message into the voucher. If a threshold of bad intents is reached, a reviewer will check your messages to verify that you actually had some bad intent before dispatching a patrol to your location.
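A hedged sketch of what "generic" means here: nothing below is Apple's code, but it shows how a matching pipeline that takes an arbitrary fingerprint function and an arbitrary payload has no notion of "image" built in, so pointing it at message text is a configuration change rather than a redesign:

    import hashlib, json

    def make_voucher(item, payload, fingerprint, banned_fingerprints):
        # The pipeline only needs *some* fingerprint function and *some* payload.
        fp = fingerprint(item)
        if fp in banned_fingerprints:
            return {"fingerprint": fp, "payload": payload}   # the "security voucher"
        return None

    # Toy usage: the same machinery applied to message text instead of photos.
    sha = lambda b: hashlib.sha256(b).hexdigest()
    banned = {sha(b"some banned phrase")}

    voucher = make_voucher(b"some banned phrase",
                           {"account": "user@example.com", "context": "message body"},
                           sha, banned)
    print(json.dumps(voucher, indent=2))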

sparker72678 2021-08-18 23:17:14 +0000 UTC [ - ]

Sure, you're totally right.

But right now all that data is available to Apple.

If they wanted to they could just generate a set of keys, replicate one of your iCloud devices, and send all your iMessages to a 3rd party, etc.

If CSAM scanning is coming, show me a better solution. I don't think there's a world ahead of us that doesn't include it for all platform-hosted photos.

almostdigital 2021-08-18 23:34:22 +0000 UTC [ - ]

I reject the premise that CSAM scanning is a solution to anything, especially if the goal is to E2EE encrypt the rest of the photos.

My understanding from following this debacle is that law enforcement doesn't have the resources to investigate everyone who merely possesses known CSAM; they focus on people who match and also have other photos of novel abuse they can track down, and here they will only get an account name and a list of 30 known CSAM photos the user had.

All this system will do is drive up Apple's numbers on a "total child abuse found" scoreboard and do very little to help any actual children in need.

sparker72678 2021-08-18 23:59:39 +0000 UTC [ - ]

I agree it’s not a real solution to a crime problem.

Still seems to be the direction the political winds are blowing.

joe_the_user 2021-08-19 00:19:14 +0000 UTC [ - ]

Accepting this stuff is letting yourself be blown along by these evil winds; don't do that.

Part of the reason for these voluntary scans is that the state would have a difficult time implementing universal spyware on its own; it needs Apple for this. This stuff is by no means certain, and how "we" respond helps determine whether it happens. Also, the state has secretly spied and attempted to legitimize universal spying before, but we've pushed back on many occasions. We should keep that up.

fmajid 2021-08-18 21:54:39 +0000 UTC [ - ]

Of course. Politicians and officials hate whistleblowers far more than pedophiles, and it's obvious this mechanism will be used to find future Chelsea Mannings and Reality Winners with better opsec to make examples out of them.

simondotau 2021-08-19 01:36:14 +0000 UTC [ - ]

Any such searches, because they are compelled by the Government and occur on private property, would be an unequivocal violation of the 4th Amendment of the United States Constitution.

hda2 2021-08-19 03:04:03 +0000 UTC [ - ]

What if those searches were conducted by another country? Say another five-eye nation?

simondotau 2021-08-19 03:25:21 +0000 UTC [ - ]

By raising five-eyes, you are suggesting that another country could launder the request on behalf of the USA in order to circumvent the 4th Amendment. Okay then, let's play that out.

Australia demands that Apple augment CSAM detection so that every iPhone in the USA starts scanning for (and reporting back to Australia about) some other set of images. There's absolutely no scenario where Apple wouldn't walk out of the room laughing.

End of hypothetical.

sparker72678 2021-08-18 23:27:45 +0000 UTC [ - ]

How?

I'd love to be convinced that this mechanism could be easily extended to any content at any time without it being totally obvious to the rest of us that it's being used that way.

jjcon 2021-08-18 23:57:10 +0000 UTC [ - ]

Not OP, and not speaking with any authority on this; I'm just curious:

If, alongside this, Apple rolled out the ability to initiate a scan on any phone for anything (not a tool for mass surveillance but targeted surveillance), how would we know, given Apple's closed ecosystem?

This doesn't necessarily speak to the issue here so much as to the Apple ethos in general, but I am curious.

zepto 2021-08-19 14:58:03 +0000 UTC [ - ]

Good point, but equally: if, instead of this, Apple rolled out the ability to initiate a scan on any phone for anything (not a tool for mass surveillance but targeted surveillance), how would we know, given Apple's closed ecosystem?

simondotau 2021-08-19 01:38:36 +0000 UTC [ - ]

techdragon 2021-08-19 02:36:07 +0000 UTC [ - ]

Very good breakdown of the situation, but cold comfort for anyone not in the USA, because we know that once Apple ships this it will be expanded to other countries with similar CSAM laws … many of which do not have robust Fourth Amendment-type rights enshrined in their legal systems.

simondotau 2021-08-19 03:40:23 +0000 UTC [ - ]

This is true, but there aren't many markets for which Apple would even consider risking its global reputation. US, China, Europe, maybe a couple of others.

To begin with, there's no prospect of this being done in secret. So we're talking about Apple complying with demands made in public.

In the case of the EU, it would have to be passed into law in a way that makes equal demands upon all of Apple's competitors. And in that case, everyone must comply with the law irrespective of whether any existing CSAM detection apparatus was in place. (Thus it's a sheer cliff, not a slippery slope.)

If Saudi Arabia or Sudan tried to turn the screws, the business case for Apple is absolutely clear-cut: they would leave. It wouldn't even be up for debate. There would be far more at risk globally than they'd ever hope to gain or retain in that domestic market through compliance. Not only would they avoid serious damage to their global reputation (something they'll be extremely sensitive to, as the last two weeks have taught them); it would also represent a massive opportunity for Apple to earn weeks of free media coverage that aligns with their security narrative.

puddingforears 2021-08-19 02:42:39 +0000 UTC [ - ]

That’s a lot of assumptions of good faith within publicly known constraints

HNPoliticalBias 2021-08-18 21:57:44 +0000 UTC [ - ]

Any population that allows themselves to be ruled by tyrants will be, as we're seeing with the West slowly morphing into the USSR.

jeremygaither 2021-08-18 22:23:04 +0000 UTC [ - ]

I agree with the article that implementation problems are not the only potential threat; it also sets a dangerous precedent. I also understand that Apple is attempting to compromise with governments that are still pressuring it to create an encryption back door or hand over keys (which it effectively did in China by having another company operate iCloud there). Governments have often used the pursuit and prosecution of child abusers and terrorists as a pretext for eroding privacy for all citizens. Privacy and liberty versus the lawful pursuit of criminals is a challenging balance, especially when governments and law enforcement want a free pass to search for whatever content they deem illegal. Governments decide which photos, communications, and even thoughts (if they could) are legal and which are illegal. Who knows what new obscenity laws some (new) governments may decree? They could legislate that some pictures once considered lawful and innocent are now unlawful and obscene, retroactively, and ask or attempt to force Apple to search for those newly criminal images.

Apple’s new image scanning technology is almost as much of a backdoor to individuals' privacy as handing out their device decryption keys on demand. Except with this new tech, finding criminal activity is even easier for law enforcement, as Apple will attempt to locate criminals for them.

That said, there are better privacy tools available for those who need as much privacy as possible, but they are nowhere near as user-friendly as Apple's devices. For instance, if this new search tool only scans images uploaded to iCloud, as claimed, what stops a pedophile or a journalist from disabling iCloud Photo storage and synchronizing photos manually? It is extra work, but not difficult. Keeping average everyday activities, such as web browsing history, mostly private requires more effort and expertise than manually copying photos between a phone and a computer.

jasamer 2021-08-18 23:20:47 +0000 UTC [ - ]

The CSAM scanning solution allows law enforcement to do one thing, assuming they have full control over the image hash database: find people who upload to iCloud multiple photos that are already known to law enforcement.

How would you use this to find criminals in general? I have a hard time thinking of a very useful example. Maybe photos of bomb plans downloaded somewhere from the internet that are known to be used by criminal groups?

As to what stops a pedophile or a journalist from disabling iCloud photo storage: absolutely nothing. This solution is supposed to catch people storing known CSAM photos in iCloud; that's it. A careful pedophile will not be caught by this.

JohnJamesRambo 2021-08-18 23:51:05 +0000 UTC [ - ]

Apple has lost the nerds and I can’t think of a time that has ever gone well for a company. We are the people advising other people what to get, what is cool. I can’t think of anything less cool than an iPhone right now.

I dumped mine and I don’t think I’m alone.


99mans 2021-08-19 01:29:35 +0000 UTC [ - ]

Right now? We've known Apple is backdoored since at least 2013 with the Snowden revelations. Some of us chose to pay attention; others seemingly went back to sleep ...

shadowfacts 2021-08-18 22:36:24 +0000 UTC [ - ]

While the precedent this sets is indeed concerning, the specific hypotheticals this article gives are nonsensical.

> an adversary could trick Apple’s algorithm into erroneously matching an existing image

In which case the malicious, adversary-controlled images are sent to Apple. After which—the implication is—they can be re-obtained by... the adversary that created them. So what?

An adversary could conceivably lower the reporting threshold by getting the victim to save a bunch of false-positive images. Again, so what? Surely if the adversary has reason to believe there are some number of CSAM images on a user's phone, there are more direct ways of going after them.

> These kinds of false positives could happen if the matching database has been tampered with or expanded to include images that do not depict child abuse

An adversary would either have to:

A) carry out a supply-chain attack on Apple, B) ship different iOS images in different countries, or C) insert entries into the database on a specific user's phone.

Options A and C are irrelevant: if the phone is compromised, the CSAM database being modified is the least of your concerns. And option B is independently verifiable (granted, Apple does not do enough to make third-party auditing of iOS easy, but it is possible).

kemayo 2021-08-18 22:42:47 +0000 UTC [ - ]

> An adversary would either have to:

More importantly, they'd need to suborn Apple's human-review process, because the people doing the review would need to know what not-CSAM they're looking for.

Could Apple be coerced into (or willingly) do this? I have no idea. But it's a very different threat model than the article suggests, one that boils down to the "do you trust your OS vendor?" question.

intricatedetail 2021-08-18 22:57:46 +0000 UTC [ - ]

How will a human looking at a low-res picture that collides with a CSAM hash but looks "innocent" be able to tell for sure that it is a false positive? Can they know with certainty that it is a false positive and not a manipulated real image? It seems to me that they would have to report these anyway.

kemayo 2021-08-18 23:05:10 +0000 UTC [ - ]

Is your argument "a human review step is fundamentally a rubber stamp that won't reject a false positive"? Because I don't personally think that's how it'd work out, but I'll acknowledge that I might just be an optimist.

(I mean, assuming that you need ~30 matches to trigger the review phase of the process, I'd think it'd look weird to a reviewer searching for child porn if you got 30 pictures of apparently random political documents or subversive memes.)

meowster 2021-08-19 11:14:07 +0000 UTC [ - ]

Police often rubber-stamp computer-generated evidence of infractions (facial-recognition that does not visually match is one very-applicable example), and content moderators are probably paid less.

intricatedetail 2021-08-19 00:15:54 +0000 UTC [ - ]

I am trying to say it's not possible to tell with certainty from a low-res picture that you are looking at a false positive. For example, low-contrast CSAM superimposed on a document could trigger a NeuralHash match, but the low-res image will look like a false positive.

kemayo 2021-08-19 00:25:56 +0000 UTC [ - ]

For your example, wouldn't that only make the original source image that's polluting the CSAM database look like CSAM in low res? The actual document image the oppressive government is looking for, which would trigger the match, wouldn't have the CSAM included.

That said, I do think it'd be nice to have a better demonstration of exactly what this "derivative" the reviewers would be looking at is. There's a lot of variations there, balancing false-positive privacy concerns, the mental health of the reviewers, potential downsampling issues, etc.

simondotau 2021-08-19 01:34:31 +0000 UTC [ - ]

I agree; it would be useful if Apple could be clearer about what they mean by a derivative. I recall reading somewhere that it's a reduced-resolution, grayscale copy of the image. I can't vouch for that, but it would be a plausible notion of what the "derivative" is.

Personally I would also be placing a hard watermark in the middle of the image, or maybe some hard slashes randomly through the image, so that "clean" images cannot leak out of human review.

Let's imagine that the derivative is a 0.5 megapixel, grayscale, watermarked, HEIC-compressed copy of the original image. This would be plenty to determine with zero ambiguity that the image is actually "A1" classified, i.e. depicts a prepubescent minor ("A") engaged in a sex act ("1").
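To make that imagined derivative concrete, a small Pillow sketch (JPEG stands in for HEIC, and every parameter here is the guess from the paragraph above, not a documented Apple value):

    from PIL import Image, ImageDraw

    def visual_derivative(src_path, out_path, target_pixels=500_000):
        img = Image.open(src_path).convert("L")              # grayscale
        scale = (target_pixels / (img.width * img.height)) ** 0.5
        if scale < 1:                                        # downscale to ~0.5 MP
            img = img.resize((int(img.width * scale), int(img.height * scale)))
        draw = ImageDraw.Draw(img)
        # Hard slashes across the frame so a "clean" copy can't leak from review.
        for x in range(0, img.width, 40):
            draw.line([(x, 0), (x - img.height, img.height)], fill=128, width=3)
        img.save(out_path, quality=60)                       # JPEG standing in for HEIC

    # visual_derivative("original.jpg", "derivative.jpg")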

bitwise-evan 2021-08-19 00:38:32 +0000 UTC [ - ]

> an adversary could trick Apple’s algorithm into erroneously matching an existing image

This is a very real, possible attack. Apple ships its CSAM model on-device, so any attacker can have a copy of the model. The attacker then creates an image that triggers a CSAM match but looks like a panda [1]. Now the attacker sends tons of triggering photos to the unsuspecting victim, who gets questioned by the FBI.

1: https://medium.com/@ml.at.berkeley/tricking-neural-networks-...
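For reference, adversarial attacks of the kind described in [1] work roughly like the sketch below: treat the hash model as a differentiable function and nudge an innocuous image until its hash bits match a target. Everything here assumes a PyTorch stand-in model that outputs pre-binarisation logits; it is an illustration of the technique, not the actual NeuralHash collision code that circulated:

    import torch

    def craft_collision(model, source_img, target_bits, steps=500, lr=0.01, eps=0.05):
        # target_bits: tensor of +1/-1 values, the hash we want to collide with.
        delta = torch.zeros_like(source_img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            logits = model(source_img + delta)        # pre-binarisation hash outputs
            # Push each output toward the sign demanded by the target hash.
            loss = torch.relu(1.0 - logits * target_bits).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)              # keep the change near-invisible
        return (source_img + delta).detach()          # still looks like the panda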

shadowfacts 2021-08-19 01:53:50 +0000 UTC [ - ]

> Now the attacker sends tons of triggering photos to the unsuspecting victim, who now gets questioned by the FBI.

That's glossing over the middle part where a human at Apple (before it even gets to law enforcement) actually looks at the images, goes "oh, these are actually pandas", and realizes they were erroneously detected.

carom 2021-08-19 00:52:02 +0000 UTC [ - ]

So the attacker creates an image, and then the user has to download it. Then the FBI digs in, sees it was a crafted false positive, and begins to investigate who sent it and why. Then the user takes civil action against the person who sent it for harassment.

simondotau 2021-08-19 01:05:30 +0000 UTC [ - ]

More precisely, 30 carefully crafted false positives. All of which need to be imported into your iPhone's photo library to sit alongside pictures of your dog and your mum. And then they have to get past human review. Not impossible, but so far beyond implausible that it can be dismissed as ridiculous.

And if this trick ever works, it could only be done once before Apple has the opportunity to plug holes in their NeuralHash algorithm and fix any deficiencies in the manual review process.

gerash 2021-08-19 06:12:37 +0000 UTC [ - ]

If someone is OK with Apple handing over iCloud keys to the Chinese government to comply with local laws, then scanning for CSAM is no different.

Basically, if you have an issue with this, go change the law rather than ask large corporations to act illegally for some greater good.

mrharrison 2021-08-18 22:13:34 +0000 UTC [ - ]

There is an easy way to cast your vote for privacy: turn off auto-updates and don't update to iOS 15. Spread the word.

amelius 2021-08-18 22:30:36 +0000 UTC [ - ]

This is not guaranteed to work. At some point Apple might show you an update dialog, and you might click "yes" by mistake.

hypothesis 2021-08-19 00:54:08 +0000 UTC [ - ]

I think it requires you to enter your passcode, at least it does for me, when auto-update is disabled.

hypothesis 2021-08-18 22:19:36 +0000 UTC [ - ]

Presumably, there will be a parallel track to stay on iOS 14 with security updates, at least for some time.

OneLeggedCat 2021-08-19 00:41:35 +0000 UTC [ - ]

If enough people do that, then there's no reason they won't just enable it in 14 too.

hypothesis 2021-08-19 00:52:13 +0000 UTC [ - ]

Assuming I wasn't gaslit by other people, I was under the impression that it would at least require some change to the EULA.

Otherwise it sounds like a nuclear option by Apple, with dire effects.

greyface- 2021-08-19 00:57:54 +0000 UTC [ - ]

iOS 14 EULA:

> By using the Apple Software, you agree that Apple may download and install automatic Apple Software Updates onto your Device and your peripheral devices.

> Apple and its licensors reserve the right to change, suspend, remove, or disable access to any Services at any time without notice.

hypothesis 2021-08-19 01:21:01 +0000 UTC [ - ]

Sure, they could probably push it into any iOS version, but all currently available documents claim it’s an iOS 15 feature, and presumably there will be lots of new legalese specifically for those new features.

greyface- 2021-08-19 01:29:15 +0000 UTC [ - ]

You're not wrong about Apple's messaging around this. I was specifically responding to your claim that pushing it to iOS 14 would require modifications to the EULA. It would not.

hypothesis 2021-08-19 03:57:21 +0000 UTC [ - ]

Fair enough.

There are a couple of things nagging me though: 1) Apple has loads of legalese for each and every feature, and 2) it sounds like they could just cut all that down to the two points you cited and be golden.

I definitely see your point and I guess we can leave it at that until a lawyer can explain this...

SSLy 2021-08-18 23:03:58 +0000 UTC [ - ]

This tech has been in iOS since 14.3.

hypothesis 2021-08-19 00:44:35 +0000 UTC [ - ]

The NeuralHash part, not the whole CSAM scanning daemon/pipeline, right?

Security updates to iOS 14 would let people (who might not be ready or able to switch away from iOS) actually do what GP said, without unnecessarily exposing themselves to security bugs in an outdated OS.

jug 2021-08-18 23:07:55 +0000 UTC [ - ]

There’s been such a debate over this now that I don’t know why Apple is still trying. It’s not even their job to catch child porn. There are many far-reaching cooperative efforts, such as at Interpol and Europol, that successfully bust child porn groups. Even anonymizing networks like Tor are not safe havens. These guys don’t have it easy even today.

So why is Apple suddenly so adamant about this? Would it even cost them anything not to go here?

They’re a hardware and services company making mobile and computing devices mostly geared towards entertainment and creative work. How did they end up here?

sparker72678 2021-08-18 23:13:18 +0000 UTC [ - ]

One possibility is that they'd like to go E2EE with all iCloud data, including photos, and this lets them do that without the politics (DC level) falling out against them.

heavyset_go 2021-08-18 23:27:22 +0000 UTC [ - ]

I doubt it, considering they give up customers' data when the government requests it, for 150,000 users/accounts a year[1]. Not only do they choose to hand over data when requested, they have to comply with court orders as well.

Also, governments care about much more than just CSAM. They care about terrorism, drug manufacturing, drug and human trafficking, organized crime, gangs, fraud etc.

This speculation only makes sense if Apple intends to use their CSAM detection and reporting system to detect and report on those other things, as well.

Governments won't like that either, because during investigations and discovery they'll want all of the customer's data that could be evidence of crimes. These detection and reporting systems, as implemented and by themselves, are only useful for flagging suspicious people. They're no good for complying with warrants or subpoenas.

[1] https://www.apple.com/legal/transparency/us.html

shuckles 2021-08-19 00:03:39 +0000 UTC [ - ]

I’ll take the other side of this and bet that Apple offers the ability to remove themselves from escrowing iCloud keys before they expand CSAM detection to other kinds of content. Nerd points are at stake.

jjcon 2021-08-18 23:45:46 +0000 UTC [ - ]

Can you really call it E2EE if one of the ends is wide open?


threeseed 2021-08-18 22:08:58 +0000 UTC [ - ]

This article is full of inaccuracies.

a) People have extracted a pre-release version of the model from an older iOS release. It is not the one that will ship, and not the same one Apple uses server-side to verify the process.

b) Adversaries cannot reasonably break this process by flooding it with bad data from a range of compromised phones. The client-side version exists to prevent this, as does the fact that it would require jailbroken devices with iCloud logins.

c) If you can't trust Apple not to include non-CSAM hashes in the database, then how can you trust them not to compromise the operating system itself? It's illogical. Apple verifies hashes against two sources, and two manual checks are required, so abusing it would require failures at multiple points in the process.
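A tiny sketch of what "verifies hashes against two sources" means in practice, with made-up hash strings (the real pipeline works on blinded NeuralHash values, but the set logic is the point):

    ncmec_hashes = {"aa11", "bb22", "cc33"}        # toy values from one organization
    other_org_hashes = {"bb22", "cc33", "dd44"}    # second organization, different jurisdiction

    # Only hashes vouched for by both sources are eligible for the shipped database,
    # so a single government slipping in "dd44" (or "aa11") on its own achieves nothing.
    shipped_database = ncmec_hashes & other_org_hashes
    print(shipped_database)                        # {'bb22', 'cc33'}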

mattnewton 2021-08-18 22:13:20 +0000 UTC [ - ]

For b), I can't see how creating spurious iCloud accounts and spoofing devices would be hard on its face. I have personally made dozens of iCloud accounts for testing in the past. They may take steps to address this attack vector; it's not an unsolvable problem, but it isn't solved merely by requiring a device or an iCloud account, as far as I can tell.

For c), the problem isn't just trusting Apple, for whom the hashes are somewhat opaque, my understanding is that they are provided by a governmentally regulated third party.

kemayo 2021-08-18 22:38:48 +0000 UTC [ - ]

For c) you still need to imagine a scenario where Apple is deliberately complicit, because of the human review step. If non-CSAM hashes are inserted into the database, the human reviewers need to know what non-CSAM they're looking for -- otherwise they'll see, e.g., the picture of a leaked document, observe that it's not child porn, and flag it as a false positive.

JohnFen 2021-08-18 23:14:39 +0000 UTC [ - ]

> you still need to imagine a scenario where Apple is deliberately complicit

Which is very easy to imagine. All I have to do is imagine a national security letter.

kemayo 2021-08-18 23:24:34 +0000 UTC [ - ]

Isn't that no greater risk than the status quo, though?

Apple already has all these photos uploaded to their servers and knows the keys as iCloud Photos isn't end-to-end encrypted, so if a government can coerce them into cooperating with running scans like that, this new system doesn't seem to make any difference.

(If the argument is that they can be coerced into changing the new system to scan non-uploaded photos and reporting them outside of the iCloud Photos upload process... that's a threat that applies just as much to every phone. It's an argument for not trusting any OS that you didn't compile from source yourself.)

That is to say, it takes this away from the threat the article is presenting as "governments could sneakily use Apple's system to find dissidents despite Apple's best intentions" and into "governments could use the law to make Apple do things".

simondotau 2021-08-19 01:41:22 +0000 UTC [ - ]

A national security letter can do a lot, but it cannot supersede the United States Constitution. Any attempt to co-opt the on-device search mechanism would be an unequivocal violation of the 4th Amendment of the United States Constitution—it's a search of private property being compelled by the Government.

(As distinct from Apple voluntarily searching for CSAM, which will be part of the terms of service. And distinct from being compelled to search cloud servers, which is exempt from 4A under the "third party doctrine".)

greyface- 2021-08-19 02:03:03 +0000 UTC [ - ]

If you receive an unconstitutional National Security Letter, you can't just ignore it. You must endure a lengthy, expensive, and stressful legal battle. And due to the nondisclosure requirement in the NSL, you also have to do this in secret, without going to the public for support. Or you can fold and comply.

simondotau 2021-08-19 02:15:54 +0000 UTC [ - ]

In this hypothetical scenario, the NSL would have to go to Apple the corporation, not to any individual. There isn't any individual at Apple that could implement the demand. There probably isn't even a group of ten individuals at Apple that could do it without other employees finding out. And as smart as you think the US Government is, there's no way they could possibly know who those ten people were.

So the letter goes to Apple. They have ample time and resources to push back indefinitely. Demanding that Apple implement a Government dragnet across tens of millions of private devices is so far beyond unconstitutional that complying wouldn't even be fleetingly contemplated as an option. In fact, Apple is the sort of company that would move heaven and earth to ensure that this unconstitutional NSL becomes public. If nothing else, their defiance of it would be fantastic PR.

Therefore it would be a massively stupid-ass move for the Government to try. They would have zero prospect of a positive outcome and they know it.

mattnewton 2021-08-18 23:10:49 +0000 UTC [ - ]

Constantly flagging my account to Apple is an attack of its own. I don't need Apple to be complicit, just to make a mistake by erring on the side of caution when reporting.

simondotau 2021-08-19 01:49:43 +0000 UTC [ - ]

I don't see how there's a "side of caution" when it comes to images classified as "A1" of prepubescent minors ("A") engaged in sex acts ("1").

What does it take to have 30 or more images in your photo library that have all been manipulated to be perceptual-hash matches to 30 or more A1-classified images, while also being able to pass human review ("erring on the side of caution") as plausibly fitting the A1 classification?

There's no ambiguity in prepubescent minors engaged in sex acts.

meowster 2021-08-19 11:20:31 +0000 UTC [ - ]

The content reviewer isn't going to see an image in enough detail to determine that it is obviously A1 material, so yes, the content reviewer will err on the side of caution.

simondotau 2021-08-19 14:31:21 +0000 UTC [ - ]

You seem to be fairly certain about exactly how much detail is in the visual derivative contained within the safety token. Care to share your source?

intricatedetail 2021-08-18 23:02:28 +0000 UTC [ - ]

How would they know it is a false positive and not a CSAM image concealed in a document image that NeuralHash was able to "see"? It will be impossible to tell from a low-res picture with certainty.

kemayo 2021-08-18 23:07:05 +0000 UTC [ - ]

My understanding is that NeuralHash only looks at visible pixels. Granted, I'm taking their word on that one, but I've seen no evidence that it looks for steganographically concealed images.
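For a sense of what "looking at visible pixels" means, here is a classic difference hash (dHash) in a few lines of Pillow. NeuralHash is a learned, far more robust cousin of this idea, but the relevant property is the same: the hash is computed from the downscaled, rendered pixels, so anything hidden steganographically in low-order bits simply vanishes:

    from PIL import Image

    def dhash(path, hash_size=8):
        # Shrink to (hash_size+1) x hash_size grayscale pixels and record whether
        # each pixel is brighter than its right-hand neighbour.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = []
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits.append(1 if left > right else 0)
        return bits

    # Two visually similar images produce nearly identical bit strings, regardless
    # of any data buried in the original file's least significant bits.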

intricatedetail 2021-08-19 00:11:08 +0000 UTC [ - ]

Not necessarily steganography. For example, if the image has drastically lowered contrast, the low-res image will look like a false positive, but it will not be one.

simondotau 2021-08-19 02:00:34 +0000 UTC [ - ]

Okay, but now you're positing a scenario that is entirely pointless. Seriously, what would be the point? If you know you need to conceal the images to put them into your photo library, you surely know it's a stupid idea to put them there in the first place. It's all but impossible to find a supply of CSAM without being made acutely aware of its illegal status.

Here's what I don't get: who are these people importing CSAM into their camera roll? I for one have never felt the urge to import regular, legal porn into my camera roll. So why would anyone do that with stuff they know could land them in prison? Who the hell co-mingles their deepest darkest dirtiest secret amongst pictures of their family and last night’s dinner?

If someone wants to conceal their CSAM library, I'm sure there's probably dozens of apps in the App Store that can store photos securely behind an additional layer of encryption.

jasamer 2021-08-18 23:31:55 +0000 UTC [ - ]

I'm not arguing that these aren't real problems, but neither is a unique problem of the client-side scanning solution. It's the same, or maybe even worse, with server-side scanning.

I'd assume FB, Google & co. have some solution to b), so Apple should be able to figure something out. For c), at least Apple takes the extra precaution of requiring the images to appear in two separate databases provided by different governments. (Maybe other cloud providers do that too; I don't know.)

mattnewton 2021-08-19 00:52:27 +0000 UTC [ - ]

Yeah, I agree that b) is likely a non-issue, at least not an issue that affects users directly. But c) doesn't make me feel any better, because the fundamental issue is where the scanning happens. If it happens off my device, it cannot happen to photos I don't send off my device. I understand they have policy governing this, but that doesn't address the core conceptual problem of crossing the network boundary onto what I pretend is "my" device.

threeseed 2021-08-19 02:05:30 +0000 UTC [ - ]

It is YOUR device.

But the second you enable iCloud Photos sync, Apple has the right to not allow CSAM on THEIR servers.

kevingadd 2021-08-18 22:22:12 +0000 UTC [ - ]

Putting some extra hashes into a database - especially one tucked away in a corner of the product - is a lot easier than compromising kernel code or a driver. And in this case the database is not exclusively the work of Apple, so third parties have the ability to insert hashes.

encryptluks2 2021-08-18 21:38:59 +0000 UTC [ - ]

Their business model has always been a threat to user privacy and press freedom; many people were just okay with it because Apple told them otherwise. Open source is the only model where you can verify, though, so Apple was always about blind trust and never about privacy.

npteljes 2021-08-19 08:55:48 +0000 UTC [ - ]

I agree; this is what's to be expected when people rely on proprietary software and other people's computers. Stallman's essays, among others, depicted scenarios like this ages ago. But people don't like to miss the latest shiny thing, the convenience of smartphone tech, so here we are. They also don't like a told-you-so.

99mans 2021-08-18 22:11:11 +0000 UTC [ - ]

Why is this being downvoted? Depressing to see that open source software is not respected here.

sandworm101 2021-08-18 22:38:53 +0000 UTC [ - ]

People downvote things that make them angry. The things that make people most angry are those things that strike closest to home. An incorrect or unacceptable comment will be downvoted and commented on, even reported. A perfectly valid and true statement, one that is so true that it scares people, will be silently downvoted. It is a form of denial, an attempt to remove a painful truth.

simondotau 2021-08-19 01:15:01 +0000 UTC [ - ]

I didn't downvote it, but I can understand why some people might have. Because while it's superficially true, it's disingenuous in practice. All security is, at its core, a network of trust between you and other entities. Open source isn't secure because it's open, it's secure because you trust Canonical, you trust Linus Torvalds, you trust GNU, etc. And those people trust other people—hence the "network" of trust.

Saying that you can inspect the source code is true in theory. But unless you've done the full audit yourself, you're personally as blind as you are with closed source code. You're choosing to trust whichever security researchers deeply understand the security implications of all the gobbledegook in all the USB drivers, and you're trusting that Canonical is shipping you the same version that security researchers have validated. For 99.9% of users, it all comes down to blind trust.

As Linus himself once said: "If you have ever done any security work and it didn't involve the concept of a network of trust, it was not a security work. It was masturbation."


BTCOG 2021-08-18 23:24:13 +0000 UTC [ - ]

Fuck the cloud. Offer 256 and 512GB iPhones and up, and give us a feature to entirely and permanently disable cloud functionality. Failing that, I'll throw my iPhone in the God damn trash. I'll use a Raspberry Pi or a flip phone if I have to.

542458 2021-08-18 23:26:22 +0000 UTC [ - ]

Real talk, a Raspberry Pi phone could be amazing. A $150 (or similarly low cost) hackable phone with a near-guaranteed solid community around it - I’d buy that.

One of my big disappointments is that Canonical decided to go super-premium with their attempt at phones. I wish Firefox OS had done better (the f0x phone in particular was incredibly cool), but they never managed to get the sort of community traction they needed.

techdragon 2021-08-19 02:42:03 +0000 UTC [ - ]

Check out the PinePhone. While not exactly a “Raspberry Pi Phone” it’s pretty close to that in its basic concept.

sparker72678 2021-08-18 23:26:05 +0000 UTC [ - ]

You can use an iPhone without ever logging into iCloud. You can even install MDM profiles to prevent it.

BTCOG 2021-08-18 23:32:12 +0000 UTC [ - ]

And I already do, but as a default standard I would much rather see a phone with enough storage to never need iCloud. Of course, that doesn't address the other issues that come with a proprietary phone. I will make good on buying a Librem 5 in the interim and throw the iPhone right in the dumpster if they push forward with on-device scanning and such. I never leave GPS turned on, I don't give any apps access to such features, and I buy phones outright in cash and put them on prepaid from the start. This is not good enough. If the next set of surveillance features is scanning messages, photos, and videos on personal devices, then it's nothing but a rented device, and the only option is custom, fully open-source devices.

Jtsummers 2021-08-18 23:35:10 +0000 UTC [ - ]

iPhones already come in 256GB and 512GB versions, though you have to go up to the iPhone Pro line to get 512GB.

BTCOG 2021-08-18 23:43:43 +0000 UTC [ - ]

Good point, and well taken. I'm still on an 8Plus though.
