Hacker News
Apple joins opposition to encrypted message app scanning (bbc.com)
321 points by pseudolus on June 28, 2023 | 339 comments


> companies should only implement end-to-end encryption if they can simultaneously prevent abhorrent child sexual abuse on their platforms

And houses with walls should be banned unless the builder can guarantee no children will be harmed inside.


It's more like banning locks to private homes because there might be a bad actor in one of the houses.

It's a stupidly insane idea.


Paedophiles drive cars.


They also drink water and breathe air. Thankfully, making those unsuitable for consumption by any pedophile globally is an ongoing effort already.


Ban cars and force everyone to travel together in large vehicles where we can see what they are doing!


Let’s call such a vehicle a “bus”


Oh you drive a car? How long have you been a pedophile sympathizer for?


As a child hearing the news of Dutroux, I recall thinking that it would be better if buildings just collapsed when this was going on inside them.


While I agree with your point somewhat, the scale is completely different. It's not a black-and-white thing. I know a lot of people will disagree with this statement though.


The main issue in preventing child abuse is that there are simply not enough resources allocated to handle each case - especially in a timely manner.

More surveillance just means more cases in the backlog, so contrary to what people wanting to implement it are saying, it doesn't get us any closer to the goal.



I know here in Illinois, USA, if your parole specifies treatment (e.g. sex-offender counseling, criminal-thinking counseling, etc.), then you can voluntarily skip all that by just spending half your parole time in prison and leaving with no restrictions and no therapy.

This happens to a lot of inmates involuntarily because the prison system can't find anywhere to house them, so they are forced to keep them in prison and then release them with no help and no support.


What do you mean, the scale is different?


Yeah, how many houses are there?


I don't see how scanning for the material in major messaging platforms would impact anything. People will just move to the next service that is encrypted. The technology exists. It cannot be banned.


How big is this scale difference in orders of magnitude?


CSAM's going to get a lot weirder with AI image manipulation and autocompleting and nudify-ing images.

And combine that with this ruling that Japanese hentai depicting minors is legal to possess and sell: https://en.wikipedia.org/wiki/United_States_v._Handley

So, what happens when (not if) someone uses https://thispersondoesnotexist.com/ and gets pictures of children, then generates them nude and doing sex acts? This is basically the same argument as above. The children aren't real, and the sex acts aren't real. The bigger problem is "how do you tell if this image is proof of a child rape or just AI generated?"

I'm going more and more towards the "CSAM is evidence of a crime, and shouldn't be a crime in and of itself" side of things.

However, even though I'd advocate removing strict liability for possession of CSAM, I'm still for the death penalty for any human who rapes children (we're talking pre-puberty, not 17-year-olds, etc.).


Framed as crime scene evidence, it is kind of strange that simply possessing it, for any reason, is a serious crime worth decades in prison. The pedophiles are voluntarily taking pictures and videos of themselves committing crimes and sharing it on the internet. I am sure that if internet sleuths were allowed to investigate such material, like law enforcement is, many, many more child abusers would ultimately get caught. Distributing it should obviously still be illegal, but banning simple possession doesn't stop traffickers (obviously) and eliminates an enormous free workforce of amateur sleuths who I am sure would seriously help the problem. They have helped solve many cold case murders that police never could.

It's kind of creepy in a way when you think about it. No other crime scene evidence is guarded in that way. Is it illegal to possess photos/videos of someone getting stabbed, shot, beheaded, etc? Surely there are people who derive sexual pleasure from that material too, yet it proliferates and no one really cares. It's almost as if the powers that be REALLY don't want the general public to know who's behind some of this stuff. We already know very rich and powerful people are routinely involved in child sex rings...

And the AI image generation stuff makes it even more dangerous and ridiculous. We're approaching a time when someone can just generate what looks like CSAM locally with Stable Diffusion or whatever (with no actual children involved), get it into your possession (either remotely or just by dropping an SD card somewhere in your house where you don't see it), and now you are liable for decades in prison... hopefully you don't have any enemies.


Very good point I haven't ever seen posed before. And here's another angle: the unconditional felony possession thing is extremely convenient if you're looking to frame someone. Whether it's drugs, CP, or some other material.


I had a competitor to my TV torrent site who would call PayPal on a monthly basis and claim our site was selling CSAM. PayPal would be forced to immediately block the account and do a thorough investigation to exonerate us (though weirdly they didn't care about the piracy). It is a fantastic ruse to destroy someone's legitimate business though as the payment providers cannot ignore the complaint, even if it is made repeatedly and regularly and is clearly bogus.


The case you cited is only federal law. Many states in the USA banned CGI depictions of CSAM as a result of cases like that, so computer-generated imagery, and child sex dolls are not permitted across a large section of the country, for better or worse. I have no opinion on it as I don't know if access to items such as these which involve no real children make the majority of those interested more or less likely to physically offend. I am sure for some it mitigates their desires, and for others it accentuates them. I am not a psychologist.


CSAM AI articles have just popped up.


Didn't Apple also recommend on-device scanning for these kinds of pictures a while ago?

Glad to see they now oppose it but it's hard to gauge how sincere it is given their history of trying to implement it themselves.


I’d argue their attempts at having a plan for CSAM were preempting these exact types of arguments being made in bad faith by governments like the UK, who are actively engaging children’s charities to give their goal of breaking e2ee some emotional societal weight.


CSAM is a real problem. It's not merely emotional weight.


Of course. But that emotional weight is very often cynically used in order to get people to go along with things they consider unacceptable.


I understand your argument, and I appreciate it. The same could be said of the "other side" of this argument, though. "The government will spy on you!" is also emotional weight cynically being used to get people to go along with technical solutions that support CSAM.

I'd argue society would generally agree CSAM is unacceptable and we're allowing it right now. Many are quick to adopt the assumption that this law is an intentional and cleverly designed "attack on our rights to e2ee", as fact. If one can't recognize it's at least a poor attempt to balance the right to privacy with protecting kids, then I struggle to see how they're being honest with themselves.

I'm not arguing this law proposes the correct solution that should be adopted. I'm not arguing the surveillance state isn't licking their chops. I'm simply trying to point out that CSAM is a real problem. People really care about it. And it's factually wrong to claim otherwise.

It's also wrong and harmful to attribute motive based on assumptions. While you haven't done this, many others have. Instead of taking part in a constructive debate by arguing specific points, they try to paint a false motive on the opposition. "It's just so they can spy on you!" is roughly equivalent to "you view CSAM and that's why you don't want to protect our kids". These approaches are not how you reach a good solution that balances both sides of the argument. They're just throwing the opposition's legitimate concerns under the bus.

I'd also add, while e2ee is strong, unless you're verifying and compiling all the code you run on your device yourself, Apple/Google/etc. still maintain the technical ability to snoop on you regardless of this law. I assume the bar to snoop on someone would most likely be lowered if laws like this are passed. But it's still technically possible (and likely happens).


> I'm simply trying to point out that CSAM is a real problem. People really care about it. And it's factually wrong to claim otherwise.

Very nearly nobody is arguing otherwise, though.

> Apple/Google/etc still maintains the technical ability to snoop on you regardless of this law.

Absolutely. Not only can they, but they do.


> Very nearly nobody is arguing otherwise, though.

How is equating CSAM to (merely) emotional weight not doing that?


CSAM does carry emotional weight. Your entire argument is that since privacy measures are already loose, then it's okay to make them looser for "the children".


> CSAM does carry emotional weight.

I never argued it doesn't.

> Your entire argument is that since privacy measures are already loose then its okay to make them looser for "the children".

Where did I make this argument? (hint: I didn't)


Privacy is binary: you either have it or you don't, and to argue that there exists a "balance" is disingenuous.


The disingenuousness is the point. You're trying to rationalize with authoritarians, who do not argue in good faith.


That isn't reality. You give up privacy all the time in exchange for things. And even with e2ee you don't "have" privacy guaranteed.


AFAIK the app code can be decompiled to see what it does. The messaging can be monitored as well. I find it hard to believe these apps could sneak in something that breaks the encryption without anyone noticing.

The proposed solution to stopping child porn from being shared on these platforms would at most only do just that: Stop sharing on these specific apps. There are plenty of alternatives including encrypted zips, TOR etc. Breaking e2ee in WhatsApp would not do anything to stop sharing the material.

I don't think it's a leap to question why this is proposed when it could never achieve its intended goal.


If the update is targeted at a single individual, it's almost guaranteed to go undetected.

Yes you can decompile code. But reverse engineering is time intensive (I've done it). And these code bases are huge. It wouldn't be difficult to obfuscate some element of the code so it's difficult to detect. Especially if it's rarely triggered.

> The proposed solution to stopping child porn from being shared on these platforms would at most only do just that: Stop sharing on these specific apps. There are plenty of alternatives including encrypted zips, TOR etc. Breaking e2ee in WhatsApp would not do anything to stop sharing the material.

There is value in introducing barriers, even when imperfect. You must admit that, otherwise the fact that Google/Apple can ship an OS update whenever they want negates the entire argument in favor of e2ee.

Also, just FYI, encrypted zips are easily brute-forced.


You cannot even brute-force an AES-128 encrypted zip, let alone 7z or rar with stronger algorithms (given a decent-length password, obviously).
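A quick back-of-envelope (the guess rate is a made-up, generous assumption, and this ignores key-derivation stretching, which makes things even worse for the attacker):

```python
# Sketch: why a decent-length random password resists brute force.
# The guess rate below is a hypothetical, deliberately generous assumption.
GUESSES_PER_SECOND = 1e12  # imaginary well-funded attacker

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    """Years needed to try every password of the given alphabet and length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"6 lowercase chars:  {years_to_exhaust(26, 6):.2e} years")   # falls instantly
print(f"12 printable chars: {years_to_exhaust(95, 12):.2e} years")  # ~1.7e4 years
```

A short lowercase password falls in a fraction of a second, while 12 random printable characters already pushes the search past ten thousand years even at this fantasy guess rate.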

> There is value in introducing barriers, even when imperfect.

Not when they are completely ineffective and introduce other issues.


> give their goal of breaking e2ee have some emotional societal weight.

Breaking E2EE is their goal. CSAM is the MacGuffin.


>CSAM is a real problem.

It is not nearly as big a problem as states wanting to oppress people.


Right. I don't think people realize there is no such thing as a free lunch. Having something, in this case a free society, means making a tradeoff. The dials on safety, happiness, and freedom cannot all be at 100 percent at the same time, and it is far too easy to fall into the trap of thinking that even the tiniest bit of safety is worth a huge amount of freedom in exchange.


Well, it’s a well-designed trap carefully constructed to have the strongest possible effects on our lizard brains.

Maybe there are ways to check our governments’ true commitment to safety in multiple areas and draw conclusions?

Environmental protection (we live in a period of climate collapse)? Food safety (sugar industry controlling legislation etc.)? Weapon licensing? Access to healthcare services and life-saving medications like insulin? Healthcare and social services for mothers and young children? Preferring diplomacy to aggression and acts of destabilization in politics?

Looking at all of this I conclude that, sadly, our governments don’t seem to care much about our or our children’s safety.

Looking at the efforts politicians put into staying in power and acting on personal or corruption-driven agendas… we're not in great shape, and I have the feeling that there may be some manipulation going on with the intention to disrupt organizing, personal privacy, and the ability of individuals to communicate without being surveilled.

All authoritarian governments absolutely hate that - for reasons having nothing to do with the safety of their citizens.


I don’t believe that anyone is saying that it isn’t. That doesn’t preclude the government from refusing to let a “real problem” go to waste.


CSAM has nothing to do with encryption and it predates the internet.

https://youtu.be/aAeUeG-PTD0


There is some overlap, since you can't do a CSAM scan on properly encrypted data. It is not completely separate in the modern IT context.


Can you elaborate on your trigger here?

Why did you feel that the phrasing minimized it enough to miss the point?


I don’t read their comment as particularly triggered, but if you honestly think they were triggered by a reference to CSAM, this is a wildly inappropriate question to ask.

In general I think it is better to not project some emotional reading onto other people’s posts.


still don't get it, is this one of those things I'll magically be attuned to if I become a parent (but only maybe?), or does it require personal experience? or is there anything I can read about


It is bad when children are abused.

Many (most?) people feel this viscerally, regardless of personal experience of parenthood or of abuse. This is why the "think of the children" rhetorical gambit works.

In general, people like to believe in fairness, and dislike it when bad things happen to those who can't be said to "deserve" it somehow. Thus child abuse is more universally reviled than the rape of adults, because it's a lot harder to justify a child "deserving" that. (See also: kicking a friendly dog as a way to show off that a fictional villain is irredeemable.)

For a non-sexual example, remember how the pictures in this article provoked a surprising upswing in horror amongst even very anti-refugee people? https://www.theguardian.com/world/2015/sep/02/shocking-image...


Has there been some massive increase in the amount of child abuse to justify the new paranoia I see in the media and in new laws? Perhaps a certain political party is taking advantage of parents who, let's face it, are emotional and paranoid themselves. Parents by nature operate more selfishly than others, and that comes at the expense of the freedom of the people around them.


Parents are more selfish? That is the exact opposite of my experience. Parents are generally the most selfless. It's those who choose not to be a parent who prefer their selfish desires.

And yes, CSAM is on the rise.

> From March 2009 to February 2022, the number of victims identified in child sexual abuse material (CSAM) rose from 2,172 victims to over 21,413 victims. From 2012 to 2022, the volume of reports to the National Center for Missing & Exploited Children’s CyberTipline concerning child sexual exploitation increased from 415,650 reports to over 32 million reports.

See https://www.judiciary.senate.gov/press/dem/releases/durbin-i....


If you put the concerns of your children over the rest of the population that's selfish.

Also, how about some numbers that aren't swayed by covid?


How exactly are parents putting the needs of their children over that of the rest of the population? Or are you saying that societies shouldn’t cater to children on any level beyond a minor inconvenience? Do the school buses making you have to stop driving your car bother you that much? Or are we building too many parks somewhere? And if so, where?


Children literally are the future society. Making them a priority is more selfless than making today's adults a priority.

What does covid have to do with the 10+ years of data I referenced? And even if the data does skew up during covid, how does that make it less of a problem?


This article talks about how those numbers are manipulated https://www.techdirt.com/2023/04/24/senator-durbins-stop-csa...

Snippet-- "I mean, back in March of 2009, the tools to track, find and report CSAM were in their infancy. Facebook didn’t start using PhotoDNA (which was only developed in 2009) until the middle of 2011. It’s unclear when Google started using it as well, but this announcement suggests it was around 2013 — noting that “recently” the company started using “encrypted “fingerprints” of child sexual abuse images into a cross-industry database” (which describes PhotoDNA)."

This is what I'm talking about with paranoid parents. I'm sick of your "save the children" bullshit


nonchildren are also future society, unless every adult immediately dies, so deprioritizing them in favor of someone related to yourself seems selfish to me

if it's just some random kid and you have none of your own, such professions of selflessness might be more convincing


Generally “triggered” is used to describe the recall of previous traumatic experiences, i.e,

https://en.m.wikipedia.org/wiki/Trauma_trigger

So it is obvious why somebody wouldn’t ask for more details on that sort of thing, right?

Maybe you are just using it to ask why they are annoyed, and weren’t aware of the specific definition of that phrase?


People have definitely diluted the word trigger, and it is impossible to tell the difference between someone referencing a traumatic experience or something akin to annoyance or being distracted. 'Triggered my anxiety' where anxiety is also diluted to 'something I would prefer not to deal with'.

In this case, it seemed like the phrasing was enough to distract them from a discussion that didn't need to be derailed.


Since you didn’t give any indication that you were using the “diluted” meaning, I’d say it is more accurate to say that you miscommunicated, rather than that anyone got distracted.


Okay I’ll take it

I was curious about why they felt compelled to take the thread that direction when the meaning was clear. They felt CSAM was being minimized when that was a red herring to the discussion.


Wait. Are you saying you don't get why some people may be bothered by sexual crimes against innocent children? Am I misinterpreting your post ?

Edit-- I'll happily take the downvote on this. I understand the policy of extending charitable interpretation, but I am genuinely confused, which is why I asked for clarification; it was not rhetorical.


Can you elaborate on your trigger here?

Was it someone daring to suggest that molesting a 3 year old is actually bad, like really bad?


It is inappropriate to ask someone about their trigger in such contexts. Why would you expect someone to go into details on past traumas like this?


If they are using their experience to advocate for changes that affect me then they should justify it


Justification wasn't requested. Elaborating on their "trigger" was. Again, this is abhorrent. No one should be required to go into details about being sexually assaulted as a means of "justification."


No, it was opt-in, not required by law. The system was designed to work only with images as they were being uploaded to iCloud; it didn’t work with just any images on the device. The receipt used for processing was generated as part of the upload process, and neither the device nor the backend could independently determine any useful information without coöperating with each other.

It was a really interesting scheme that Apple clearly put a huge amount of thought into in order to maximise privacy. But people didn’t read the details and instead imagined it worked in a much simpler way, drowning out any useful conversation about what it was actually doing. The UK approach should be opposed, but I wouldn’t be opposed to the scheme Apple wanted to implement.
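For anyone curious about the "neither side alone learns anything" property: it's the same basic idea as splitting a secret into XOR shares. A toy sketch of the general principle only, not Apple's actual private-set-intersection construction:

```python
import secrets

# Toy 2-of-2 XOR secret sharing: neither share alone carries any information
# about the secret (each share on its own is uniformly random); only combining
# both recovers it. Illustrative only -- not Apple's actual scheme.

def split(secret: bytes) -> tuple[bytes, bytes]:
    device_share = secrets.token_bytes(len(secret))  # uniformly random pad
    server_share = bytes(a ^ b for a, b in zip(secret, device_share))
    return device_share, server_share

def combine(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

s1, s2 = split(b"match data")
assert combine(s1, s2) == b"match data"  # only together do they recover it
```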


No thank you. A lot of professionals perfectly understood the idea. What Apple came up with was essentially a "privacy preserving" backdoor; the behavior is still fully opaque to the user and acts against their interests. In the event of a false positive, we fall all the way back to "just trust us bro" territory, which basically destroys the strong guarantees that E2EE is supposed to provide when you use it correctly. The way I see it, you either offer E2EE or you don't: if you can't accept the potential consequences, simply don't play.

That having been said, given how they paraded their backdoor approach as a way to eat their cake and have it too, it's unsurprising that it attracted the attention of governments. They perhaps thought that implementing a scheme like this would put them ahead of the game, showing legislators they're self-regulating to work on these problems. In actuality, they're really just putting the future of digital privacy in dire straits.

Play stupid games, win stupid prizes.


> No thank you. A lot of professionals perfectly understood the idea.

Fucking thank you.

I have spent a lot of time in security. I spend a lot of effort on privacy. I keep abreast of the state of the art, I donate to the EFF. I spent a year auditing security at one of the largest cybersecurity firms in the Western world. I’ve read the texts, I’ve taken the classes.

I’m not saying I can just do bitwise AES over plaintext in my head. But I had no issue understanding the white papers Apple released that described the feature.

That didn’t convince me one iota that it was either a good or a necessary idea. It simply served to highlight the things Apple was being fairly arrogant about. And I say that as someone who daily-drives multiple Apple devices and is highly bought into the ecosystem.

They built a large part of their trust model with that effort upon the idea that two governments either wouldn’t or couldn’t collude to inject non-CSAM hashes into their database. It’s like they’ve never heard the term “Five Eyes” despite harboring serious credentials and contracts in _all_ the right spaces to know exactly how possible that would be.

It was a technologically clever idea, and if we lived in a “it’s this or that” world where we had to choose which technologies would be used to surveil our devices, it would be one of the best options developed to date.

However I don’t accept that premise because it’s not true.

Edit: typographical cleanup.


> They built a large part of their trust model with that effort upon the idea that two governments either wouldn’t or couldn’t collude to inject non-CSAM hashes into their database.

This is not correct. In the event that this happened, there would still be oversight at Apple before reporting to authorities.


Can you elaborate how that worked? Apple only said contemporaneously that they would refuse to include hashes that aren’t CSAM, but never went into detail on how they could actually prevent that, or refuse an order to do so.

But those hashes are produced and provided to Apple (edit: well, to any consumer of the hashes) by government backed actors. (Edited to replace “government agencies” with “government backed actors”.)


I misread your reply, and I cannot edit the above.

My misreading was that (paraphrased) “Apple could ensure the hashes they received were only for CSAM.”

That isn’t what you said. You noted that (paraphrasing) “internally Apple would have some validation functionality to ensure they were not reporting non-CSAM content.”

I was going to say I think your point is orthogonal, but I understand how it’s a response to what I wrote. I think we could rely on a reasonable set of people differentiating a perceptual hash of CSAM from non-CSAM content. That is no guarantee, but if their system worked as described then at some point they would be able to validate the actual stored images uploaded and know whether or not it was CSAM in an account.

I'll just leave here, as a counter to others who make your original point constrained to CSAM-flagged content only, that:

1. Given the fact that collisions were demonstrated in public before the system was fully implemented, this failure mode had the potential to create an unknown volume of false positives that Apple would have to deal with (which would unnecessarily expose user data to Apple contractors or employees).

2. We probably don’t want to enable corporations to have this kind of power (see recent stories regarding similar false positives at Google).

3. There were no detailed plans proposed for any sort of procedures about what happens next. What happens when Apple makes a report that was “close to the line” and is later found not to be CSAM? The perceptual hash concept was not without vectors for human failure. What recourse would there have been if Apple closed an account due to a false positive?
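On point 1, the collisions aren't surprising once you see how coarse perceptual hashing can be. A toy "average hash" sketch (not NeuralHash, just an illustration of why distinct inputs can share a fingerprint):

```python
# Toy "average hash": keep only the above/below-mean brightness pattern.
# Many visually different inputs map to the same coarse fingerprint,
# which is exactly why perceptual-hash collisions exist.

def average_hash(pixels: list[int]) -> tuple[int, ...]:
    """1 for each pixel above mean brightness, 0 otherwise."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Two clearly different "images" (flattened pixel grids)...
img_a = [10, 200, 10, 200]
img_b = [50, 180, 60, 170]

# ...collide, because only the above/below-mean pattern survives hashing.
assert average_hash(img_a) == average_hash(img_b) == (0, 1, 0, 1)
```

Real perceptual hashes are far more sophisticated, but the underlying lossiness, and hence the existence of collisions, is inherent to the approach.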


> A lot of professionals perfectly understood the idea.

I saw very little evidence of that. Practically everybody against the idea thought the device determined what was a match and uploaded evidence to Apple. Practically everybody against the idea thought “the” hash was broken. Practically everybody against the idea confused a hash false positive with the system’s false positive.

Yes, there were a few thoughtful criticisms and I would have liked to have seen more of them. But they were drowned out by abject nonsense.

> given how they paraded their backdoor approach as a way to eat their cake and have it too, it's unsurprising that it attracted the attention of governments.

It didn’t though? Has there been any government who wanted to take that system up? They all want something a lot more intrusive; the same kinds of things they have been asking for since long before Apple’s proposal.


> Practically everybody against the idea confused a hash false positive with the system’s false positive.

Doesn't the system output the result of the hash?

What is the difference?


> What Apple came up with was essentially a "privacy preserving" backdoor

As opposed to Google trying to have parents arrested for seeking medical care for their children during the pandemic lockdown?

> His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance. With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up.

> Because technology companies routinely capture so much data, they have been pressured to act as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of sexual abuse imagery. But it can entail peering into private archives, such as digital photo albums — an intrusion users may not expect — that has cast innocent behavior in a sinister light in at least two cases The Times has unearthed.

> “I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”

> The police agreed. Google did not.

https://www.nytimes.com/2022/08/21/technology/google-surveil...

That's not just scanning things you upload to a public facing website, that's scanning everything on the device.


> As opposed to Google...

Bad start. I never mentioned Google.

> ... [the remainder of the message] ...

I do not understand how this is meant to weaken the case against what Apple did at all. If anything, it strengthens my resolve that tech companies should not be at liberty to even make these decisions.


I do not understand how anyone can go on a rant about a system Apple discussed but never implemented while trying to handwave away the fact that Google is scanning everyone's device and turning in parents to the police over a single false positive.


iCloud storage was never E2EE; the iMessage system was, but not iCloud. As of late last year iCloud can optionally be E2EE, with the "Advanced Data Protection" feature, which is off by default (https://support.apple.com/en-us/HT212520).


I'm pretty sure the point of the concept was to enable E2EE for iCloud storage; that's the narrative I've heard. If it's wrong, well, then it's even stranger if you ask me.


Well they went ahead and shelved the idea and then implemented E2EE for iCloud anyways with advanced data protection so idk how well that idea holds up.


It was designed to scan photos on device and upload reports as a condition of having iCloud Photos turned on. Since iCloud Photos is turned on by default on iPhones (and more or less has to be turned on if you have a photo library that’s larger than your local storage) it would have applied to the vast majority of users, even those that didn’t even realize “iCloud Photos” was a thing they cared about.

Fortunately Apple has abandoned that plan and now supports full end-to-end encryption for iCloud Photos. It’s still opt-in, but hopefully that will get easier in the future.


> it would have applied to the vast majority of users, even those that didn’t even realize “iCloud Photos” was a thing they cared about.

I don't buy this because everyone I know turns off iCloud Photos when it causes you to run out of your measly 5GB of free storage space, which makes the OS insistent about you fixing it to fix backups not uploading. For the people that do pay more, they keep iCloud Photos on specifically for the photo syncing aspect.


I backup my photos to my desktop computer. I would have never experienced the scanning.


I think it is fine that you opt out. But if society as a whole doesn't opt out of a surveillance regime, it still affects you as a member of society. And eventually your ability to opt out will probably be removed: after all, once you've acquiesced to the idea that private, unshared, possibly encrypted photos still need to be scanned for criminal activity -- then why should there be an exception for criminals who back up to a local computer.


Because in the USA, if we keep our secrets in our house, we are protected from unreasonable search and seizure. If we share our secrets with a third party, the third party is free to share them. This seems like such a clear line of distinction that a layperson could understand.


There is no U.S. law that requires Apple to scan your uploaded content for CSAM and share reports with law enforcement. If such a law existed, it would almost certainly be a major 4th amendment concern. Instead Apple chooses to do server-side scanning "voluntarily" (which means, they do it under considerable pressure from law enforcement agencies and politicians) after making you click Agree on a ToS that gives them permission to do so.

Similarly, I am not aware of any constitutional rule that prevents a private company like Apple from installing client-side software into your phone OS that scans the local photos on your private device even if you never upload them, as long as they don't do the scanning at the government's direct behest and you give them permission by agreeing to their ToS.

The important constitutional principle here is "is this private company acting as an agent of the government." The important civil principle is: "do we, as citizens and customers, want our government and phone manufacturers to scan private files and submit automatic police reports." Once you've decided those questions, the physical/logical location of the files is a detail that only we nerds care about.


Never said there was. I said there is a constitutional idea that items stored at home have more expectation of privacy than things stored with other people. That you have some axe to grind with this or that company shouldn't cloud the issues.


You're trying to argue in good faith with someone that fully understands the strategy of using private/public partnerships to bypass constitutional protections, and is playing oblivious. Don't feed the glowies.


>>It was a really interesting scheme that Apple clearly put a huge amount of thought into in order to maximise privacy.

No. They appeared to put in this effort, but at the end of the day they built a system which could be used for scanning any file on your device for any reason; saying "oh, it's only for files uploaded to iCloud" was obviously the first step. And then of course the fact that if anything was deemed inappropriate it would have been submitted to their own verification centre where someone would actually physically look at your pictures and then forward them to law enforcement, without you being notified that this has happened in any way - that doesn't sound like maximising privacy to me.

>>but I wouldn’t be opposed to the scheme Apple wanted to implement.

I would. It was a horrendous implementation, and any on-device scanning should always be opposed to the maximum extent that you can bear.


> at the end of the day they built a system which could be used for scanning any file on your device

They could already be doing this though, right? Unless some trustworthy external auditor has gone through the iOS/macOS/iPadOS source and audited that they don't (good luck with that.)


Of course - but Apple isn't going to produce a custom iOS build to find pictures of dissidents for a dictator (at least I find that unlikely). But if it's a system that everyone knows exists and that verifies your pictures against some closed, uninspectable, constantly updated database of hashes... what's the harm of sneaking a few more hashes in, right?


> at the end of the day they built a system which could be used for scanning any file on your device for any reason

This kind of comment is exactly why they said lots of people didn’t understand the system.

Because it's utterly untrue and appears to not display ANY understanding of the underlying system at all.


Please explain how it's not true. I understand the system and its design pretty well and I literally don't understand how you can't see the risk of this happening here.


If you understood the system you’d understand how that makes as much sense as saying “the iPhone has cameras that means they will be able to see everything you’re doing!”

It's not true, it's not how that works, it's not how any of it works. It's also something that could ONLY be true if they changed how everything worked.


>>If you understood the system you’d understand how that makes as much sense

I don't know why you keep implying I don't understand how the system works - I do.

And I literally don't see how you can take your position and believe in it.

The system as proposed would generate a special "similarity" hash for pictures that are about to be uploaded to iCloud - then those hashes would be compared against an uninspectable database of hashes provided by Apple in collaboration with law enforcement agencies.

Literally all that would need to change here is for the system to scan all files, not just pictures (trivial), and either all the time or on request instead of on upload to iCloud (trivial). And of course it's silly to believe nation states wouldn't force Apple to add their own hashes of anything they want into the database - after all, there is no oversight process over this. This isn't a major change to how the whole system works; it's a minor change, and one that definitely would be made with time "because think of the children/terrorists/etc".

You really don't see it as an absolutely inevitable consequence of this system being implemented and in fact just existing at all? If not, then you have more trust in Apple than I do.
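The escalation path described in this thread is easier to see in a toy sketch. Everything below is hypothetical: a real perceptual hash (Apple's NeuralHash) is derived from image features so that near-duplicates collide, and the actual protocol wraps the comparison in private set intersection; here a plain digest and set lookup stand in for all of that, and every name is made up.

```python
import hashlib

def similarity_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a cryptographic digest keeps the
    # sketch self-contained but lacks the near-duplicate matching that a
    # real perceptual hash provides.
    return hashlib.sha256(image_bytes).hexdigest()

# The opaque, vendor-supplied hash database. The client has no way to
# know what images these hashes correspond to -- which is exactly the
# worry above: extra hashes could be slipped in unnoticed.
flagged = {similarity_hash(b"known-flagged-image")}

def scan(files: list[bytes]) -> list[int]:
    """Return indices of files whose hash appears in the flagged set."""
    return [i for i, f in enumerate(files) if similarity_hash(f) in flagged]

# Widening the scope from "photos on iCloud upload" to "all files, any
# time" only changes what gets passed into scan() -- the machinery is
# identical, which is the "trivial change" the comment points at.
matches = scan([b"holiday-photo", b"known-flagged-image", b"document"])
```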


> No. They appeared to put in this effort

To be clear – you are saying this was an easy thing for Apple to construct?

> but at the end of the day they built a system which could be used for scanning any file on your device for any reason

This is factually untrue and also not in contrast to the amount of effort they put in.

> And then of course the fact that if anything was deemed inappropriate it would have been submitted to their own verification centre where someone would actually physically look at your pictures and then forward them to law enforcement

This is also factually incorrect. Did you read the technical documentation Apple published?


>>To be clear – you are saying this was an easy thing for Apple to construct?

No.

>>This is factually untrue and also not in contrast to the amount of effort they put in.

How so? I fail to see how this system has anything that would prevent it from being used to scan any file on the device? Sure their design is only for pictures, but why would it be limited to them?

>>This is also factually incorrect.

It's not, that is exactly what their technical paper says.

>> Did you read the technical documentation Apple published?

I did, multiple times, I've been having these arguments for a long time now.


> it didn’t work with just any images on device.

Only a matter of time before an authoritarian regime forced them to expand its scope. Create a Pandora's box and it will be opened.


Don't need an authoritarian regime, just the UK government. This is exactly what Apple attempted to head off in a privacy preserving manner. At the time it was speculated (I agreed) that it was part of the lead in to full E2EE - which has happened since.


An authoritarian regime can require spying regardless. This is the exact topic we are currently discussing.


You can also meaningfully resist spying by reducing the amount of sensitive user info you hold or building a transparent system. Apple has not done that, so they are now in a position where they have to reconcile what "fair data collection" even means.

If you're just asking me, they bungled it. MacOS shouldn't have ever gotten stuck in the Windows 8 Software as a Service trap, because it radically redirected their trajectory from "well-made personal devices" to "golden handcuffs". Don't believe me? Just try to install a freedom-respecting OS on your iPhone. You can't. It's a goose-chase, because Apple insists they don't even make computers in the first place. It's an iPhone, running a very special superior version of UNIX you're not entirely allowed to look at, but you can trust is secure.

So of course an authoritarian regime can compel spying regardless. They've probably been doing it for the past decade, given what we know about FIVE EYES and PRISM. When you're a company like Apple or Microsoft, you just update the software with the requested backdoor/vulnerability and push it to the closed codebase. Even Apple's proposed E2E encryption could be backdoored. Nobody would be any the wiser.

If you want the reason I'm mad, it's because it echoes the reasons I left the ecosystem with MacOS Catalina. The idea of Apple defaulting to you trusting them on so many things is kinda sickening. They've had the opportunity to build groundbreaking trustless systems with unparalleled openness and plausible deniability for all user data. Now the users are bartering for a better (but also closed) E2E system with a company that doesn't want to hear them out. It's not my jam, I'm not going to be held hostage by my computer because Apple has a fetish for making extreme decisions on the behalf of their users.


Yes, and companies like Apple(and many others) have been successful - or at least appeared to be - at denying such requests by saying that there is no technical way to break the encryption built into their devices and communication. Now, there would be a way - once it's built it's built, there is no going back.


> companies like Apple(and many others) have been successful - or at least appeared to be - at denying such requests by saying that there is no technical way to break the encryption built into their devices and communication.

This is incorrect. Last time it went to court, Apple used very different arguments.

> Now, there would be a way - once it's built it's built, there is no going back.

Then why is the British government not asking for the system Apple designed? They are asking for something far more intrusive. Apple’s system is irrelevant to that.


Absolutely, but by normalizing scanning on local computers it takes you one step closer to authoritarianism. Every time politicians get a little more power, it only accelerates their desire for more control and inflates their sense of elitism that much more into thinking they're special and above us peasants.


> But people didn’t read the details and instead imagined it worked in a much simpler way, drowning out any useful conversation about what it was actually doing.

I read the details. I think I understood it perfectly well. I found it very objectionable.


Great! It’s a shame voices of people like you were next to impossible to find amongst the ignorance.

I think there’s a strong argument for and a strong argument against the system Apple designed. I wanted that discussion to be had instead of it all being washed away with knee-jerk ignorance.


I was here for the discussions on Hacker News, and it seemed to me like most people knew basically how it worked. A lot of people on here defended the authorities and clutched pearls about "think of the children," telling us we didn't understand; and when we presented them with evidence that we did in fact know how it worked, they would go radio silent.


I found quite a lot of intelligent debate about this. Not in the popular media, of course, but that's to be expected.


I read the details. It normalized letting corporations scan and police my computer without a search warrant. I didn't like that and I complained. If they want to scan my data on iCloud, then fine - and good luck, since there's a layer of heavy encryption applied before it's backed up there. Normalizing scanning of local devices is a big no-no in my opinion and in the opinions of lots of others. Yeah, we read the white papers and know exactly how it works, if not how every bit and byte is juggled.


> The system was designed to work only with images as they were being uploaded to iCloud

Not iCloud, since that would imply it was scanning entire device backups stored in iCloud.

It was intended to scan just the photos that you uploaded to their publicly accessible Photo sharing website, iCloud Photos, in a way that even Apple wouldn't have access to the results unless you uploaded more than 100 photos that matched known kiddie porn.


iCloud Photos is part of iCloud. iCloud backups are not end to end encrypted by default. The option did not exist then. They claimed 30 not 100 photos. It could change at any time. The claim could not be audited. The databases are known to contain false positives. They could not be audited.
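The disputed threshold worked roughly like this toy sketch, under heavy simplification: in the published design each match produced an encrypted "safety voucher," and threshold secret sharing meant the server could decrypt nothing until enough matching vouchers had accumulated. A plain counter stands in for the cryptography here, and the threshold is just a parameter, which is precisely the auditability complaint above.

```python
class VoucherStore:
    """Toy model of threshold-gated reporting (not the real crypto)."""

    def __init__(self, threshold: int = 30):
        # 30 is the figure from Apple's published technical summary;
        # nothing in the design lets a user verify or pin this number.
        self.threshold = threshold
        self.matching = 0

    def submit(self, is_match: bool) -> None:
        # In the real scheme the server cannot even tell which uploads
        # matched; it learns anything only once the threshold is crossed.
        if is_match:
            self.matching += 1

    def report_available(self) -> bool:
        return self.matching >= self.threshold

store = VoucherStore()
for _ in range(29):
    store.submit(True)
before = store.report_available()  # one short of the threshold
store.submit(True)
after = store.report_available()   # the 30th match crosses it
```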


> iCloud Photos is part of iCloud

iCloud Photos is a photo sharing website accessible to the public, like Flickr. It has nothing to do with device backup.


iCloud Photos includes syncing between your own devices and sharing with others. iCloud Backup is a separate feature. Both are parts of iCloud.


Apple only discussed scanning images that would be uploaded to the photo sharing website.


Apple documentation said photos stored in iCloud. Synced photos are stored in iCloud. And they told people to turn off iCloud Photos to avoid it - not to stop sharing.


Backups are also stored in iCloud, but Apple made it clear that it was only considering scans of photos that you uploaded to iCloud Photos (since that content is publicly accessible through the web).

Scanning device backups was not even on the table, unlike at Google, where they have been scanning device content for years now.

https://www.nytimes.com/2022/08/21/technology/google-surveil...


I already explained that sharing is part, not all, of iCloud Photos.

Backups entered the discussion because you claimed iCloud meant backups, not photos. And your words suggest iCloud Backup is more private than it is. Scanning device backups was not part of the photo scanning plan, but without E2EE it is always on the table.

The article said backed up to Google Photos. And the photos were sent over Gmail and chat. It is not clear device backups were scanned or existed. And device content is a misleading name for backups in the context of scanning on devices.


> "Didn't Apple recommend on-device scanning recently?"

> "No, it was opt-in not required by law."

Your comment indicates the answer is yes.

Trying to figure out where the gap may be... my guess is the view that objections come from "people [who] didn't read."

I found the other replies powerful evidence that other people did read, and had nuanced thoughts on this matter.


The delta is that the “on-device scanning” proposed by the British government and the “on-device scanning” proposed by Apple are very different in meaningful ways.


Their idea & implementation was MUCH better (and more secure & private) than the awful proposal they're opposing here.


Yes, it really smells like a strange about-face. What is Apple's real stance here?


Companies have only one stance - make more money. When squeezed from both sides, they might poke at each in turn to see which would hurt their bottom line less.


> Companies have only one stance - make more money.

That's not entirely correct, but close enough. Since that's a constant, we can ignore that. Outside of "make more money", different companies absolutely take different stances about things.

Most companies seem to have at least some amount of consistency in their stances, though.


>"Most companies seem to have at least some amount of consistency in their stances, though."

Do no evil?


I did say most, not all.


I've dealt with many companies in the last 25 years and from my personal experience their stances are like a windsock (well except for that main "more money" stance).

Your experience can of course be different.


Apple's plan to compare images on-device to hashes of CP was an eminently sensible attempt to give governments an alternative to banning E2EE encryption. As the government spokesperson in the article said: "companies should only implement end-to-end encryption if they can simultaneously prevent abhorrent child sexual abuse on their platforms." Apple was trying to do the latter so it has a leg to stand on when it comes to asking governments not to regulate against E2EE.


I don’t understand why comments like this are getting downvoted. I understand the absolutist position against any content scanning, but Apple very clearly tried to strike a middle ground that is far less invasive than the content scanning systems in every other photo service run by Google, Microsoft, Flickr, etc. And then when everyone freaked out, Apple didn’t ship it! Meanwhile the FBI can browse all your Google Photos right now.


Can the FBI not browse all my iCloud photos too?

I don't think a closed-source E2EE implementation would do much to change that.


E2EE is an option for iCloud Photos now, so no, not if it’s turned on.

Not sure what you mean about closed source. If you’re implying that the FBI can break Apple’s encryption because it’s closed source, uh, sure.


Based on the pen register laws, anything stored with a third party can be subpoenaed by the government. The reasoning is that if one wanted to keep it a secret, one wouldn't be sharing it with a third party.

If you use an ISP for email, the ISP can be subpoenaed without you being aware. If you want to run your own mail server from your house, a search warrant of your house for the mail server is required. Different notification standards.


With a subpoena they can access anything unencrypted. If a user turns on Advanced Data Protection, they cannot.


E2EE with a back door is not E2EE.


Yeah, I think people who are absolute about E2EE here need to remember the xkcd about the person with the wrench. Apple's reading of the tea leaves is that the government will be the one with the wrench and will ban E2EE. This was an attempt to head it off. Of course their reading could be wrong, and the government would ban anyway. Or maybe a ban never comes to fruition. I have to imagine it would ultimately end up at SCOTUS.


> This was an attempt to head it off.

Right, by building in what amounts to a back door.


It sounds like you didn’t read the paper. It’s a reporting system, not an access system. No one can actually be granted access to your photos under Apple’s scheme. One of your photos must be an exact match for photos already existing in the CSAM database, at which point a report is generated, still not with the complete image from your phone. Even if the CSAM database were compromised by the FBI, where is the back door?


I did read the paper, and I understand what the system did. I suspect that you just have a more narrow definition of "back door" than I do.

> One of your photos must be an exact match for photos already existing in the CSAM database, at which point a report is generated

IIRC, one of your photos must match a hash that exists in the database, not an exact match to an image itself. That's the means by which end-to-end encryption is evaded. That evasion is why I call it a back door.

Really, my entire problem with the scheme was that it took place on the device itself rather than in iCloud. While all smartphones are surveillance devices already, making one as overtly a surveillance device as this sets a precedent that I think is exceedingly harmful.


You're missing the key point that there isn't much surface for misuse under Apple's scheme. If the FBI wants to falsely accuse you of having CSAM, inserting a hash from a photo on your phone into the CSAM database is actually much more complicated than simply putting CSAM on your phone. Running the scanner on your device means they actually need to compromise your device.


> You're missing the key point that there isn't much surface for misuse under Apple's scheme.

No, I didn't miss that point at all. It's just orthogonal to my objection to the scheme. I object to the idea of formally making highly personal devices like your phone into ones that are spying on you. We have more than enough of that sort of thing informally. To make it official could only make things worse for everybody, Apple users and otherwise.

Since the whole thing was about photos being stored in the cloud, it would have been much less objectionable to put the scanning on the cloud side of things. At least that's actually someone else's computer and not someone using your own hardware against you.


Your theoretical objection is divorced from any actual consequences that affect people. Why should anyone care? You state no benefits or harms other than what you feel. My point is about reality, so yes, you missed it.


Hmm. I think the opposite, that you are missing my point entirely. But, really, it's probably just that we fundamentally disagree on very basic issues. That's fair.


This is HN. The facts are not relevant.


So, Apple felt it had to "burn the village in order to save it"?


IIRC their argument for on-device scanning was that it was a (largely) privacy-preserving way to comply with existing law.

I don’t think they’re insincere if they oppose new regulations, yet begrudgingly comply with them if they become law despite their attempts to prevent that.

Few people/organizations are willing to die for their principles.


I hope they go all in on privacy as a brand "distinguisher". I know corporations only exist to make a profit but stuff like this can make your brand stand out.


Glad to see someone mentioning this. It's a strange about-face for them, as I thought they still scanned iCloud


Tech workers need to unionize everywhere and create staunch opposition to this kind of garbage. We're the experts, we're the elite, we know what's best, and we have a responsibility to lead here. Doctors won't harvest organs to sell. Pharmacists won't make you poison. Engineers shouldn't make you encryption backdoors.


> we're the elite

Nice thought, except most of us are morally no better than expensive prostitutes.

> Doctors won't harvest organs to sell.

Many doctors take bribe money from Big Pharma. Also some doctors do crazy stuff like recommend surgeries to fund a new swimming pool.


> Nice thought, except most of us are morally no better than expensive prostitutes.

I don't find anything immoral about sex work, but that aside, I think one of the benefits of a professional organization would be to unlicense unethical actors like how doctors lose their licenses or attorneys are removed from the bar. The idea here is:

- PA (prof. assoc.) says, "hey, we're not making encryption backdoors and we're not making tools to crack them available to governments and law enforcement"

- PA lobbies against policies that would mandate encryption backdoors

- PA lobbies for a law that says you must be licensed to distribute software at a certain scale

- PA strips licenses from engineers working on encryption backdoors

It's not perfect, you can probably imagine a lot of holes in it, but there are holes in everything (you can travel to a different medical regulatory regime, etc. etc.). The point isn't perfection, the point is to move society in the right direction rather than have this idiotic discussion about encryption literally every year.

> Many doctors take bribe money from Big Pharma. Also some doctors do crazy stuff like recommend surgeries to fund a new swimming pool.

Corruption happens, and I think it's clear humanity hasn't figured out how to deal with it. I don't think that's a reason not to organize though.


I wonder what the impact of making software development a profession would be, when you can easily teach yourself the mechanics of development with just a laptop from anywhere in the world.

If there was a restriction that said Californian companies can only hire guild-licensed developers, that might form an effective protest in this case in particular; but in general, I have a hard time imagining software development being contained in a bureaucratic box. Seems like it would add needless/expensive/frustrating bureaucracy without much obvious benefit.


The point is that although your idea might be good in theory, you should formulate how we get there in practice. Because that's the hard part.


> morally no better than expensive prostitutes

expensive prostitutes care way more about their clients than tech workers.


There's no need to generalise and dehumanise a whole group of people like that.


Normally I would agree, but tech workers (especially those that dub themselves elites) deserve it.


They could have meant prostitutes.


I highly doubt that. If the pair of things being compared was self-titled tech "elites" and used car salespeople, there is a small (non-zero, but maybe zero if you round to the nearest whole number) chance they meant used car salesmen.


It was a joke mainly. But both other replies defended prostitutes.


I was talking about your condemnation of sex workers.


You're thinking of self-regulating professional body rather than a trade union. Unions fight for pay and working conditions against employers, professional bodies represent the considered opinion of the profession and self-regulate their members, like doctors and barristers.


A union could, if it chose to, take collective action to stop some particular lobbying effort, I just don't know if it's ever been done.


> A union could, if it chose to, take collective action to stop some particular lobbying effort

For consumer benefit? No. It's technically possible. But it's simply not how the structure is framed or incentivized.


Nurses unions have advocated in the US for policies like Medicare-for-all (single payer health care), so it’s not just possible but plausible. It would help them too (simplifying insurance makes nurses lives easier), but that’s equally true for many consumer protection bills in the tech space.

https://thehill.com/policy/healthcare/552324-union-lobbies-c...


> Nurses unions have advocated in the US for policies like Medicare-for-all

The same way a mechanics' union would advocate for a car subsidy. I'm not saying it's disingenuous. Just that there is a commercial need served; I haven't seen a private sector union advocate for something meaningful that's revenue neutral.


Big unions like the CWA do a lot more than that, but I don't really care what it is, although I think a lot of people in tech have a knee-jerk anti-union response so maybe that's not the way to go.


> We're the experts, we're the elite, we know what's best

I certainly don't want this mantle.

Everyone wins if, in layman terms, you can instead make it clear to the non-technical people what the issue is.


That mantle is not something you can escape unless you opt into not having a career.


> We're the experts, we're the elite, we know what's best.

Maybe about the technology and code. But definitely not about effects on society.


On the flipside, policy makers don't really understand the societal effects of a policy like this either. They're narrowly focused on one specific perceived benefit (a tool for law enforcement), while ignoring all other consequences.


Sure, that's why there's other seats at the table. But we need one too.


> We're the experts, we're the elite, we know what's best, and we have a responsibility to lead here.

Wow. That is possibly the most "orange site" type post I have ever read here.

The minute you claim to be the elite is the minute the commons lose all trust in you. Appeal to authority is not the way to go here.


I think you're reading this as arrogance, which isn't an unfair assumption around here, but I view it more through the lens of responsibility. We're the ones who know about this stuff, and we need to do more to spread that knowledge and to take control of our destiny as to what we'll build.


this is absolutely not something a union should be taking a stance on. fear of a union getting bloated and losing focus on improving working conditions and pay is one of the biggest reasons tech workers i talk to aren't too jazzed about unions in the first place.

we don't even have one and already it's supposed to be opposed to some totally irrelevant legislation. i can see why people are hesitant to unionize


I wouldn't trust most tech workers to pick roses let alone pick government policy.


This has to be a joke, like a union would earnestly evangelize privacy and data security practices. It would, no doubt, be captured by activists who care more about DEI and non technical pursuits, and bloat into a bureaucratic monster overnight. A union's job isn't to set industry standards and best practices. And what would stop the companies from using consultants or third parties to implement these backdoors? Nothing. There will always be engineers willing to build things that are 'unethical' ... because money and technical curiosity.


> And what would stop the companies from using consultants or third parties to implement these backdoors? Nothing.

Contracts with the unions could force the companies not to do so. The employees of tech companies have a lot of power that they're not using right now.


I don't think a union would prevent big tech legal teams from finding loopholes to circumvent said contract. They could use some entity in another nation or state that has more lax laws. And if the government wants Google or Meta's infrastructure to use as a spying apparatus, they get their way (see: NSA). Seems like a game of whack a mole that a slow moving organization like a union would lose.

A handful of rogue leakers or vigilante whistleblowers, backed by some non-profit defense legal fund (perhaps funded by payroll deductions, like dues), could have an outsized impact on the future of security/privacy.


Part of unionizing is voluntarily giving up your individual bargaining power. Why would anyone that's one standard deviation or more above the mean developer trade that away?


Because they get laid off in random staff cuts just like everyone else. When tech companies band together against tech workers, it doesn't matter how many Xs of an engineer you are.


You can always get another job, and being good at what you do should increase the probability of getting a new one.

Everyone knows about business cycles. It's worth saving for unexpected layoffs so that you can make it through. To reiterate: knowing every single day that I could not have a job tomorrow is not enough justification for giving up my ability to represent myself. Plus, I don't have to pay people for the privilege of losing my independence.


You can probably get another job yeah. Can you get another job with the same total comp and benefits? A lot of people are finding that that's not the case, you might preserve your salary but give up stock options and remote work, etc. Also, it's just a pain in the ass.

I think this reasoning is pretty labored. Sure there's some small percentage of engineers who are eminently employable and don't care to unionize. It benefits everyone else though, and even those "elite" engineers benefit from unionization, as unions have to represent everyone. I feel like there are tinges of, "I don't want to pay union dues if I'm not in the union" or "everyone thinks they'll be rich someday / are an elite engineer so they don't want to tax the wealthy / unionize", but idk, these are old, bad arguments.


Exactly. I, personally, aim higher than the standard deviation and am compensated well for it. I would not give that up.


I wonder if you'll reflect back on this comment when ageism comes knocking.


I'll be long retired before "ageism" even becomes close to a concern ;)


Unions exist to represent and advance their members rights, which is good. They don't exist to pick up other social or political issues. When they try it is usually a mess because there is no good chance they will pick the "right" side, because it distracts from their actual mission and because it saps their support.


You'll then probably be surprised to learn the AFL-CIO lists such causes as:

- Criminal Justice Reform

- Health Care

- Quality Education

- Sexual Harassment

- Social Security and Retirement

- Tax and Budget Policy

- Gender Equality

on its issues page [0], or that the NEA has a whole section for Racial & Social Justice [1], or that the CWA has a whole section for Human Rights [2].

I don't think there's any reason that if tech workers all joined the CWA that the CWA couldn't then lobby against encryption backdoors.

[0]: https://aflcio.org/issues

[1]: https://www.nea.org/advocating-for-change/racial-social-just...

[2]: https://cwa-union.org/workers-rights/human-rights


I am not really surprised I'm afraid.

Unions often DO get involved in these things; they make no progress, divide their members, and it keeps them from doing their actual day-to-day job.

Has the AFL-CIO balanced the national budget yet? No. And wasting time on that has meant less attention has been paid to membership or issues where they could have made some difference.

We saw the same thing with the Amazon unionisation attempts last year: what started as a broadly supported, popular, and achievable campaign to get better wages, less terrible conditions and some job security rapidly morphed into making Amazon go carbon neutral, fixing racism in America and ending global warming. I would like all those things. But if you want me to put my job on the line, our goals need to be achievable, and frankly beneficial to me directly. And the Amazon unionisation fizzled pretty rapidly.

Again, if people want to join campaign groups (EFF etc) then that is their right. But that's just not what a union is for (any more than the EFF should represent me in salary negotiations).

To be clear in the specific case of the AFL-CIO, they are NOT a union (at least not one of workers). So it makes sense for them to campaign on whatever they and their members feel is important.


The second sentence is patently untrue.


Both that sentence and the one before are opinions. Opinions cannot be true or false; if something is falsifiable, then it's no longer an opinion, it's a fact.

What you mean is that you disagree. But you offer no reason or argument. You have just mistaken your own opinions for facts...


At the risk of continuing an argument. I don't just disagree. It's untrue. Opinions as to the nature of reality have long been falsifiable.

More accurately, your first two sentences are more like Union mission statement repetition that doesn't reflect observable reality.

For example, the largest unions in the US can be consistently observed to be political on issues that can't be plausibly connected to the benefit of their members. Unless one were to reason that the interests of their members lies with virtually all partisan political issues. But then we are dealing in a semantics shell-game and not reality.


I'd love a union movement for IT workers. We have a small trade union in Switzerland that represents our interests, but as in most other countries, engineers rarely see themselves as blue-collar workers.


I look at it more like the ABA (attorneys) or the AMA (doctors) in the US. Those organizations have enormous political power, and I think tech workers could easily hang in that echelon. I say unionize because I just like unions, but either way.


And, like attorneys and medical doctors, we could really do with having a clear system of professional ethics.


> look at it more like the ABA (attorneys) or the AMA (doctors) in the US

These are trade groups. Not unions. There's a meaningful difference. Unions for tech don't make sense, particularly in a world of remote work. Trade groups, on the other hand, do.


Why does it not make sense? Remote workers can join a union too.


It’s too easy to scalp, or for an unencumbered overseas competitor to take your chips. When offices resembled factories there was at least a capital expense to be leveraged.


Long way to go, but that's what international solidarity is for.

[0]: https://aflcio.org/issues/global-worker-rights

[1]: https://cwa-union.org/workers-rights/international-solidarit...


I don't need a union, I need a lobby.

Let me know where to sign up.


In this case, the EFF?


People would freak out if software engineers formed a trade Union. I’m all for it personally. Kind of like the bar for lawyers.


Yeah, what we really need is government by developer-king. If the Silicon Valley crowd ran countries, our lives would be so much better.


>"We're the experts, we're the elite, we know what's best"

I did not know Narcissus was still alive and hanging out here.


Yet another example of an insane, unworkable internet policy from the UK which will never actually be implemented. I don't know why they keep doing this.

Even if they were serious about it, they're a tiny country that lacks the clout to push around American tech giants. They lack the leverage of even Ireland, where most of these companies have their European tax base.


> they're a tiny country that lacks the clout to push around American tech giants

I'd wager the power dynamic is more along the UK's membership in five-eyes and their ability to be a useful sigint ally.

For all we know the NSA wants a better way to spy on Americans using Signal. How does that work? Get the Brits to do it.


But there is no reason Signal would ever comply with some law like this passed outside their jurisdiction. Even if a law like this were passed in the US, you could just move your headquarters to Egypt.


This is the same mentality that got the UK to pass a referendum for Brexit.

As absurd an idea might be, it doesn't mean we can stand still and do nothing because we're sure the idea is so dumb it won't come to fruition. We need to fight it to prevent it from becoming a reality.


It WILL be implemented. Remember: both the EU and the US are pushing these laws. And soon they will be extended to cover terrorist propaganda, drug trafficking and calls for unlawful protests.


I wonder how that will work against steganography? With the right cipher and signal-to-noise ratio it's essentially impossible to prove beyond doubt that a given file contains secret information.
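To make the steganography point concrete, here is a minimal sketch (my own illustration, not any real tool) of least-significant-bit embedding. If the payload is encrypted first, the overwritten bits are statistically hard to tell apart from ordinary sensor noise, which is why proving a given file carries hidden data is so difficult.

```python
# Toy LSB steganography sketch: hide a message in the least-significant
# bits of a "cover" byte array. An encrypted payload makes the modified
# LSBs look like noise to a scanner.

def embed(cover: bytes, message: bytes) -> bytes:
    # Flatten the message into bits, LSB-first per byte.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for message"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    # Read the LSBs back and reassemble bytes in the same bit order.
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )
```

Each cover byte changes by at most 1, so the stego file is visually and statistically almost identical to the original.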


No one's gonna use steganography but nerds.


They get to print something along the lines of "we tried to protect the children and those irresponsible American tech giants stopped us" in the Daily Mail.


The UK is for some reason very surveillance-friendly. CCTVs were prevalent there much earlier than in most other countries.

I guess since Orwell was British they think they are on top of this.


> I guess since Orwell was British they think they are on top of this.

I don't think Orwell has anything to do with this. British society is very different to, say, American society in many ways. History will probably help explain things.

Like many European feudal societies, there's a long-standing expectation that government is expected to care for the citizens. In spite of how individuals acted, there have been laws to protect peasants from predatory landlords (even if their application wasn't perfect). In addition, it was the role of the local "government" / community to care for their homeless for a very long time.

What I'm trying to get at is that the suspicion of government is not really there in the way that it exists in the USA. This allows the government to hold vast powers and, for better or worse, be trusted that it's not going to abuse that (against its own citizens at least).

On CCTV specifically: in the modern day, in practice, most CCTV is concentrated in London. This means that the vast majority of the UK is not really affected by this surveillance. Also, I do recall most CCTV is privately operated, not state, and most organisations delete it within a few weeks to save money.

What this means is that, in practice, you don't have a Bourne Identity situation where the police can watch a live feed of someone running through London. They have to acquire that information afterwards through a series of requests and subpoenas to different companies.


> Also, I do recall most CCTV is privately operated, not state, and most organisations delete it within a few weeks to save money.

In practical terms, the numbers are pumped up by things like corner shops having shitty live feed cameras with limited recording capability and no networking to deter shoplifters when there’s a single shopkeeper who can’t see the whole shop from where they stand due to tall shelves and cramped spaces. Yet it seems common for Americans to imagine this as Enemy of the State style omniscience where the authorities can view anything happening in London at any time. It’s got no connection to reality.


Cameras are owned by councils and used to catch minor crimes like dog fouling.[1]

Your local police own several hundred cameras directly. [2]

Your police can request access to any public facing camera at any time. There does not exist a right to decline. No warrant process exists. [3]

[1]: https://www.theguardian.com/society/2008/may/22/localgovernm...

[2]: https://www.dyfed-powys.police.uk/foi-ai/af/accessing-inform...

[3]: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-re...


> Cameras are owned by councils and used to catch minor crimes like dog fouling.[1]

I hate to say it but "not all cameras." A relatively small number of CCTV cameras focused on areas intended to catch and deter specific petty crimes is hardly Enemy of the State stuff, is it? In fact, if you polled the populace, you'd probably not get a lot of objections here.

> Your local police own several hundred cameras directly. [2]

Again, not all cameras. This is really only strengthening my argument about the patchwork nature of CCTV and the practical reality that it's not the all-encompassing Bourne Identity live feed tracking you round the country.

> Your police can request access to any public facing camera at any time.

I already addressed this: this would usually be an after-the-fact tape, not a live feed. They can't just take the tape because that's not their property. They don't necessarily have to tell you because the tape isn't your property either. Does the general public necessarily object to the police being able to use available evidence to solve and prosecute crimes?


More than 100,000 live public CCTV systems.

https://securityjournaluk.com/public-cctv-cameras-tops-10000...

Boroughs alone operate over 20,000 cameras.

https://clarionuk.com/resources/how-many-cctv-cameras-are-in...

How many more cameras until you feel safe?

Even if one property owner declines to hand over cctv footage and somehow the search warrant is denied (which basically never happens) then there are probably several other camera owners who will provide footage.

Everywhere you go, everything you do is on film. The police have access to it either in real time or by request form.

And yes, those cameras are used to stop crimes.. like not picking up after your dog. Obviously this intrusion is necessary for your safety and security. How else can the police provide such valuable services?


The first article does not say that councils own cameras nor that cameras are used for catching minor crimes. It does state that powers intended for serious crimes were, incorrectly, being used for prosecuting minor crimes and that it's possible that CCTV cameras were involved (though could have been park wardens).

The third article says pretty much the opposite of what you state: there is absolutely a right to decline a law enforcement request (under GDPR) and that there is a warrant process ie a court can oblige you to.

I can't access the second article but I don't live in Wales so they aren't my local force.


I don't understand how you get

> Your police can request access to any public facing camera at any time. There does not exist a right to decline. No warrant process exists. [3]

from

[3]: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-re...

That page is about companies sharing CCTV with law enforcement by choice, not about being forced to do so?


I'm sure regular folks (local cops and lawyers) have to get proper documents. I highly suspect the police state just has to click their mouse a few times to pull up any camera they want.


“They spy on them because they care”

Secret services got their start in Elizabethan (I) England, and it does have something to do with British history: the bloody civil war between the factions. Spying on citizens as a matter of state security came into being during her reign, as a natural product of a society divided against itself, sometimes even within the same (extended) family.

> European feudal societies .. government is expected to care for citizen

Feudal societies had no notion of “citizen”. Nor was the “government” ever tasked with ‘caring’ for the poor. That, as a matter of fact, was the domain of the church and in England the monasteries (that were destroyed during the above mentioned bloody civil war that raged for a long time). After the demise of the monastic system of caring for the poor, and after the lands of the church were confiscated and parceled out to the winning nobility, which gave rise to the British “manor”, for the first time the notion of poor houses became central. And that is when the post-feudal (new order) British Capitalist system gave rise to charities, poor houses, debtors prisons, etc.

The other historic “British” factor here is the propensity of the English for intrigue. They spy the heck on each other because they know themselves.


> “They spy on them because they care”

I didn't say this at all, so please don't put words in my mouth.

> Secret services got their start in Elizabethan (I) England and it does have something to do with British history

This simply isn't true. Spying goes back a long way for various reasons, both state-security related and religiously-motivated.

> Feudal societies had no notion of “citizen”.

You're correct. I should have said "subject."

> Nor was the “government” ever tasked with ‘caring’ for the poor.

Churches were part of this but the monarch was either the head of the church (post-Tudor) or were at least God's anointed Prince of that domain. There are many recorded cases of people suing their local lords for abusing their position, sometimes winning, sometimes losing. The fact is, there were clearly laws on the books to allow the big government to protect people from the small government. These cases predate Tudor England.

> which gave rise to the British “manor”, for the first time the notion of poor houses became central.

So you're saying that the state did take a responsibility to care for people.

> The other historic “British” factor here is the propensity of the English for intrigue.

This isn't exclusive to the English at all. The later British Empire became famously good at spying (all the way up to WW2 and beyond) but it's not unique to the British. Besides, the Byzantine Greeks and the Russian Empire are historically more renowned for intrigue and internal plotting. But again, they weren't the only ones doing it.


> “They spy on them because they care”

> I didn't say this at all, so please don't put words in my mouth.

Third party here, but this reads in line with what you wrote imo.


Then quote the exact words I used instead of insinuating.


Spying is not “secret services”. Just like a “police force” is a fairly modern matter even though armed guards, etc. have been around for a long time.

You also said feudal governments were responsible for caring for the poor. No one is putting words in your mouth - just correcting your fictional “history”.


> Spying is not “secret services”. Just like a “police force” is a fairly modern matter even though armed guards, etc. have been around for a long time.

Are specific words important or not? You seemed quite blasé about it at first (insinuating something I clearly did not say) but now you find the exact words very important.

You're correct in that they're not direct analogues, as is the case when you go back in history, but I don't see as much difference between secret services and spying, or local guards and police forces, bar the legal and societal structures they operate within. The purposes of the organisations tend to map onto one another.

> You also said feudal governments were responsible for caring for the poor.

And now, suddenly, words are no longer important. I said there was an "expectation" that this happened and gave examples. This is not a legal responsibility but a societal expectation. I could point out that church leaders were exhorting monarchs to fall in line with Papal doctrine to save the souls of their subjects. Again, expectation to care for your subjects in a spiritual sense.

At this point, however, we're arguing specifics, of which you seem to have little idea, given you're nitpicking without any context in your attempt to "correct" me.

We've gone far from the initial point which was that British people do not have an inherent distrust of the state. The expansion of CCTV must be seen in that context.


>CCTV's were prevalent there much earlier than most other countries.

Cheaper than paying off-duty cops to watch your customers.

Calling private CCTV Orwellian seems like a confused metaphor. The CCTV in 1984 was run by the totalitarian government.


> The UK is for some reason very surveillance-friendly.

Yet in the free very surveillance-unfriendly haven of the world, everyone has a camera recording their front porch and uploading the video to amazon... (and then posting the video of any person who dares walk past their front door on the internet)


Quick note: the UK's current governing party has been in power for 13 years. They are completely out of ideas. The main opposition is similarly short on anything to actually offer beyond "the same as those guys, but better!". All the major issues facing the UK (debt, the economy, housing, education, brexit, democratic collapse) are issues where people are not willing to actually support action. So all there is left for parties is either dumb shit like this or fake wedge issues (like pretending trans people are pedophiles).

That is why this keeps coming back.


Do you honestly believe Labour won't be pushing the same agenda once they're back in?

Sure, their last term was a while ago now, but still, Starmer is exactly the kind of person that pushed the anti-privacy, pro-surveillance agenda that Labour was also very fond of.


It seems the UK is good at self-inflicting damage. First Brexit, now this online safety bill. Brexit had half the population in support, while the safety bill didn't garner much attention from the public (or I've missed it). The pattern seems to be using a generic solution for specific problems while the blast radius of the generic solution is asymmetrically larger than the intended specific problems.


It seems like the online safety bill is such low-hanging fruit compared to building affordable housing, so they concentrate on "achievable" results, even if these results are not really that important at all.

PS: I am against the online safety bill.


IMO the UK has so much potential to be economically and technologically relevant. It is English-speaking, has top universities, healthcare that is decent compared to the US (not saying it is without problems), abundant capital, and a birth rate among the healthiest in Europe (12, after Sweden and France at 12.1, Norway at 12.2, Iceland at 13.6, Ireland at 13.8, and Greenland at 14.3) [1]. It pains me to see the UK giving up on its potential.

[1] https://worldpopulationreview.com/country-rankings/birth-rat...


To say nothing of the Commonwealth: London is a more natural capital harbor for India, Australia and South Africa than New York or even Singapore.


Don't forget the Snooper's Charter too.


> It seems the UK is good at self inflicting damage. First the brexit...

I voted remain, however I'm still a Eurosceptic. The two issues aren't comparable.


Apple is not to be trusted in terms of privacy. Nobody knows what their software is actually doing. It's all closed source. Their focus on privacy in their advertisements does not mean that their software actually protects users' privacy.


Proof appears to be in the pudding: the feds & law enforcement routinely complain that they can’t access Apple users' stuff.

Hell, they sue over it.

So it seems to work oreeeeetty well


Of course, it depends where you live. If you are an Apple customer in mainland China, your iCloud data just gets uploaded to government servers from the start. The Chinese "feds & law enforcement" just get carte-blanche access to all uploaded user data.

So what stops them from doing the same thing where you live? If you're relying on absolute honesty from your jurisdiction, you're bound to be disappointed the next time a Snowden type hits the news.


The situation in China is not a secret. Apple's entire existence is/was beholden to factories and resources in China. It was an ultimatum of "put your users' data on our servers or have your entire business crippled". Refusing might have even been illegal in the U.S., since it would've violated shareholder trust to destroy possibly years of efficiency and supply chain performance over user privacy in China (only to have your products banned in China anyway) while they tried to move supply chains.

In the U.S., these types of bills aim to mimic the stranglehold over tech companies that China has. China has the manufacturing chops, but the US still has a large and wealthy consumer market and is considered a fairly politically stable location to run your business out of (thanks to military spending). Both the executive and legislative branches would love to spy on their citizens more to make their jobs easier and gain more power over the everyday lives of citizens. So while the US has the potential to follow in China's footsteps, we still have the opportunity to fight against these proposed laws.


> It was an ultimatum of "put our users' data in our servers or have your entire business crippled",

And the rest of FAANG chose option B, because violating user trust in the way China wanted was absolutely insane. The only company forced into kissing the boot was Apple.

> which might have also been illegal to do in the U.S. since it would've violated shareholder trust to destroy possibly years of efficiency

Yeah, I don't think any jury would indict Apple for leaving China. It was a controversial decision when they decided to bend the knee, and shareholders are breathing a sigh of relief with Apple's iPhone assembly experiments in other countries. Shareholders want to stay in China for the cheap labor - that's it. Once they can exploit workers with similar efficiency elsewhere, China will be unnecessary.

> So while the US has the potential to follow in China's footsteps

The way I see things, China is following in the US' footsteps. They know everything we manufacture is backdoored, so they have their own companies with their own government backdoors. They make huge pushes to remove American systems from their infrastructure like we do with them, and treat every American-made piece of technology as if it was a direct line to Langley. They subjugate services that operate in the mainland, or ban them entirely.

Mimicking China's stranglehold over tech companies would just resemble closing the loop of CIA employees who are on-call at FAANG. Their entire strategy here is embarrassingly simple, accelerated by authoritarianism and abetted by capitalist interests. The veneer of "political stability" is enough to make me laugh, even as an apologist for the American government.


> And the rest of FAANG chose option B

If you believe this I have a bridge in Brooklyn to sell.


How can you describe Microsoft and Google's ongoing business in China as anything but "crippled"?


How is Microsoft's business in China crippled?


What does that have to do with their opposition to this proposal?


That they build a huge dam to block natural waterflow, and sell urine to the masses.


I think you need to unpack this metaphor a bit more, I can't see what point you're trying to make.


Sure, we can't audit their software, but we can't audit anyone else's software either, so I'm not sure how that's a ding against Apple specifically?

Also, we need to consider that Apple widely advertises their devotion to privacy, has done nothing to indicate they aren't being honest, doesn't stand to gain anything by lying to us, and would do untold damage to their brand if they were caught lying.


This is the same argument, dating from the early 2000s, that ISPs should be liable for piracy.. and they're not, because of common carrier status. Tech always finds a way to remain separate from its delivered content.

I bet these revealing encryption policies will become implemented laws, 5 minutes after some research paper shows a better secret messaging concept is technically possible.


Big tech already knows more about kids than parents do.(1) If they want to protect kids on their platforms we don't even need to talk about end-to-end encryption yet. What would help is big tech sharing daily summaries of child on-device activity with parents. There are lots of parental control apps that do this, and yet they have had to fight Apple for permission to operate.(2) Parents don't want their kids abused either. We can definitely find a win-win situation here if we work at it.

References: (1) https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ... (2) https://www.kaspersky.com/blog/apple-fas-complaint/26017/


I thought that it was universally understood that spying on your kid's activity is bad parenting?

Limiting what your child has access to in the first place, and setting time limits, is a good way to set barriers on viewing activity that you can break down over time as you continue to educate and talk to your child about the dangers of the real world accessible over the internet: https://support.apple.com/en-us/HT201304

And Apple is continuing to introduce privacy-preserving (and safe-from-russian-companies) solutions to kids seeing inappropriate content https://www.apple.com/ios/ios-17-preview/#privacy-security:~...


If you want to spy on your kids then install software and do it. I don't think normalizing corporations spying on kids is a great idea. Sure they do it, but normalizing it only makes it worse and more acceptable.


What has changed since they dismissed those who opposed their intention to introduce CSAM scanning last year as the screeching voice of the minority?


Probably everyone yelling at them not to do it changed their minds


Now if only they would listen to everyone who's yelling at them to make the App store fees reasonable and open up the in-app purchases ecosystem.


I resent as immoral the argument that if one is for e2ee message and data encryption then one is in favor of CSAM. Why I want my messages private is my own business, but for GOVERNMENT OFFICIALS to lump me in with child predators who need to be summarily executed when caught is beyond wrong.

This is also a symptom of the laziness of police: rather than getting out and properly investigating crimes they would rather sit on their asses, eat doughnuts, drink coffee, and grep the personal data of all citizens.


> for GOVERNMENT OFFICIALS to lump me in with child predators

So ... sue them individually and personally. It's libel. Make the cost of such generalisations too high to sustain them.


tangential, but interesting how they choose to be loud about this, yet don't say or do anything about imessage downgrading to unencrypted sms with anything outside imessage. no warning about a less secure message being sent, no byline that'd be like 'unencrypted text' or like a tiny little open padlock or something, no effort to make messages encrypted where it could be possible, not even a 'snarky ad campaign' or something that'd point out that 'green bubbles? sms? that's insecure'. it's almost like... "encryptedness" and "privacy" isn't really the point. it's more of a 'marketing point' and 'ecosystem dynamics' - lock-in, network effect, and all that. actually implementing outside encrypted messages for their users? mmmeh.


My question is, if we all know now by now how privacy invasive these hardware/software vendors are, why do we still buy and support their products?

Is it really so hard for open source hardware to become a thing?

I'd trade in my iPhone for something built using RISC-V or MIPS ISA with all software written in Rust any day.

Lastly, why aren't open hardware satellites built so that people can pay for internet that doesn't route through U.S. Datacenters?

Very lastly, why not build completely decentralized web apps where all your data is encrypted and distributed so that censorship isn't possible?

"Ohhhh no but then all the bad guys will use it." Yeah well you can't censor and monitor an entire planet just because of a couple bad seeds. The bad guys also use language as a form of communication. Will we also make words illegal in the future?


As long as they don't try to bring back on-device scanning "for the children" we're good.


You prefer the Google model where nobody knows what they scan for, it’s easy to scan for different things in different countries, and there is no auditing of sources providing hashes to be scanned?


You prefer false dichotomies that derail meaningful discussions?


Huh? It’s a real dichotomy. We have one model today that all companies use. Apple proposed an alternative that people hated, so we still have the secret cloud scanning with no auditability or transparency.

How in the world is that a false dichotomy? It is literally a choice between two options that people were very adamant about.


It is always the implication when people point out false dichotomies that there are more than two options. The rest is left as an exercise for the reader. Flex those critical thinking skills.


I don't use any public cloud, so they can scan to their heart's content in their cloud for whatever they want.

As long as they keep their grubby paws off the device we're all good.


Apple's proposal only scanned content just before it was uploaded to the cloud. Anything you keep locally (or at least don't use their servers for) was not scanned.


Don't want to reopen that can of worms since a lot of people were triggered by the distinction but in my case I simply don't want that functionality on my device at all (regardless of when they initially planned on using it) - the only way to never have it "accidentally" scan other content is for it to not exist locally at all.

Imagine Microsoft Defender (which is enabled by default on 99%+ of Windows machines) suddenly starts scanning all your local content against some opaque government-provided databases of "bad" hashes. But they pinky swear they won't ever report their findings unless you upload your files to OneDrive. Would you trust them?


You are missing the third option: no scanning anywhere.


Well how about the fourth option: nobody ever does anything bad?

Seems about as realistic and well-grounded as your proposal.


>You are missing the third option: no scanning anywhere.

What a take. Thankfully the vast majority of Americans don't agree with you in the slightest. Also private companies can do whatever they please.


I don't think "pervasive digital surveillance is bad" is an unpopular opinion here.


HN is not America (thank god)

Private companies catching child predators is actually pretty popular.


And IBM compiling databases of ethnic backgrounds was pretty popular with a particular Teutonic political movement. You guys should look it up; you'll find lots of handy strategies to copy.


You're quite right. It's a reductive, simplistic opinion, but extremely popular.


Apple introduced optional end to end encryption for photos after withdrawing the on device scanning plan. Do you claim a back door?


It's crazy, because Apple already maintains a backdoor in the e2ee crypto of iMessage which permits them to read all iMessages and attachments and scan them serverside for CSAM (if they desire).

iMessage endpoint sync keys (for "Messages in iCloud") are backed up in the non-e2ee iCloud Backup (on by default), allowing Apple to use those sync keys from the backup to decrypt iMessages and attachments in realtime.

Note that enabling e2ee iCloud Backups (a new feature) on your device won't close this hole as the conversation partner for each iMessage is also escrowing keys.

I assume this is just brand marketing for Apple.
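A toy model (my own illustration, not Apple's actual protocol, and the participant names are hypothetical) of the third-paragraph point: a shared conversation key is useless to protect if either participant escrows it in a backup the server can read, so one party enabling e2ee backups doesn't close the hole.

```python
# Toy model: the server can read any non-e2ee backup, and a shared
# conversation key escrowed by *either* participant is enough for the
# server to decrypt the whole thread.

CONVERSATION_KEY = "K-shared"  # symmetric key both endpoints hold

def server_can_read_thread(backups: dict) -> bool:
    """backups maps participant -> {'e2ee': bool, 'escrowed_keys': set}."""
    return any(
        CONVERSATION_KEY in b["escrowed_keys"]
        for b in backups.values()
        if not b["e2ee"]  # server only sees keys inside non-e2ee backups
    )

# Alice enables e2ee backups; Bob keeps the (default) non-e2ee backup.
backups = {
    "alice": {"e2ee": True, "escrowed_keys": {CONVERSATION_KEY}},
    "bob": {"e2ee": False, "escrowed_keys": {CONVERSATION_KEY}},
}
```

In this sketch the thread stays readable to the server until every participant protects their backup, which matches the "~100% of iMessages" observation above.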


"Advanced Data Protection" is an optional feature you can enable for iCloud that makes that not true. See the table at https://support.apple.com/en-us/HT202303 and note that iCloud Backup (including device and Messages backup) stores the keys on your trusted devices only when Advanced Data Protection is enabled:

"Advanced Data Protection: iCloud Backup and everything inside it is end-to-end encrypted, including the Messages in iCloud encryption key."

These sorts of laws make Apple not able to support Advanced Data Protection in jurisdictions that pass them.


No, please see my third paragraph. Your conversation partners are still escrowing their endpoint keys, allowing Apple to continue reading your iMessages.

Approximately nobody has this on because it's not default, so even if you enable it, ~100% of your iMessages are still readable by Apple (and by extension FBI/DHS without a warrant).


What is a "conversation partner"?


The person to whom you are sending your iMessages.


Except Apple can remotely execute any code they want on your phone at any time. That is how proprietary software works.

If Apple is ordered to ask a given device to return all keys in plain text, it can. So can any Apple employees also on payroll for state actors.


> Except Apple can remotely execute any code they want on your phone at any time. That is how proprietary software works.

No, that's not how any code works, proprietary or open source.

I've written proprietary software for 16 years, and none of it could remotely execute any code that I want.


Someone has the signing keys for Apple os/apps, baseband drivers, oma-dm toolkits, etc.

Whoever that is can sign whatever they want, send an update notification targeting a single user, and the phone will then execute anything as instructed.

A court order could tell whoever holds those signing keys to sign something to decrypt all messages, or install a dummy encryption key, or make the random number generator always return 42.

Just because Apple has successfully gaslighted the courts by claiming they cannot decrypt phones, does not make it true.

Supply chain attacks on a proprietary system lacking reproducible builds and public accountability is indistinguishable from remote code execution as a service.
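
The trust model being described can be sketched in a few lines. This is a toy, assuming nothing about Apple's actual pipeline: HMAC stands in for real asymmetric code signing, and every name is made up. The point survives the simplification: the device verifies who signed an update, never what the code does.

```python
import hashlib
import hmac

# Hypothetical vendor key; in reality this would be an asymmetric keypair
# held in the vendor's release infrastructure.
VENDOR_KEY = b"held-by-whoever-controls-the-release-process"

def sign_update(payload: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Whoever holds the key can produce a valid tag for any payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def device_accepts(payload: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """The device checks only the signature, not the payload's intent."""
    return hmac.compare_digest(sign_update(payload, key), tag)

# A bug fix and a key-exfiltration payload verify identically.
patch = b"fix a bug"
backdoor = b"dump message keys"
assert device_accepts(patch, sign_update(patch))
assert device_accepts(backdoor, sign_update(backdoor))
```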


> Whoever that is can sign whatever they want, send an update notification targeting a single user, and the phone will then execute anything as instructed.

That wasn't the claim. The claim was "Apple can remotely execute any code they want on your phone at any time." In other words, Apple can do it right now, on your phone, without any software update. And there's zero evidence for this claim.

Software updates are not remote code execution. Moreover, users can choose not to install software updates.


> Software updates are not remote code execution. Moreover, users can choose not to install software updates.

However, automatic software updates are, as SolarWinds learned, and this is on by default on iOS, meaning that Apple can indeed execute code on all iPhones in the default configuration without user intervention.


Executing code remotely via default-enabled automatic updates with some delay, or via an ssh command with less delay, the result is the same. Someone chooses to run code on your client, and it runs.

The semantics do not impact the threat model here so who cares?


As I already said, "users can choose not to install software updates". So no, Apple can't "remotely" execute any code they want on your phone at any time, when you've made the simple, easy choice to change your Settings. This is not just "semantics".

This discussion is getting pretty tiresome to me. You and sneak want to redefine all software updates as "remote code execution". That's complete BS IMO, and I'm never going to agree to that redefinition.


If you fail to apply updates, then you fail to apply actual security patches to stop RCEs -anyone- can use. In exchange you agree to allow Apple to execute any code they, or anyone coercing the key holders, want.

You can play semantic games, but the end result is the same under a threat model that does not trust any single entity to run code of their choosing on your device.

A supply chain attack is the correct term for this, but a supply chain attack is, in effect, an RCE path available to anyone in the supply chain.

Saying supply chain attacks are not a form of RCE because you can disable software updates is like saying no RCEs exist because you have the choice to disconnect from the internet. Security discussions should be grounded in threat models over semantics.


> Security discussions should be grounded in threat models over semantics.

There's no evidence that Apple is a threat.

You claimed that Apple could already remotely execute any code at any time. That sounded very sinister, as if Apple had added a secret back door to the OS, but it was extremely misleading at best, completely false at worst. Yes, Apple can ship software updates, as we all know. But that's not a threat in itself.

I don't want to hear what Apple could theoretically do. Your lover could theoretically kill you while you sleep. How's that for a threat model? Are you going to sleep alone for the rest of your life? I suspect it's actually more likely, though, than Apple shipping the malicious OS update that you fear.


> A court order could tell whoever holds those signing keys to sign something to decrypt all messages

No, I'm not a lawyer but my understanding is that this is compelled speech and is not permissible.


You do not have to incriminate yourself under US law, but corporate property is not the same as personal property. The NSA very likely could compel Apple to give up their signing keys if it came with an executive order citing national defense objectives... or more probably they will just manipulate the RNGs of the HSMs Apple uses, or get a special contract with their supplier, or bribe a release engineer. I know people who have been party to such arrangements first hand.

Any system that relies on a single private key with SPOFs in the supply chain should be considered compromised, if the targets it governs are high value.

The CCP did not need to do anything cloak-and-dagger; they simply mandated overt access to Apple's HSMs for Chinese users.


iOS has automatic updates enabled by default. All Apple has to do is make an update available for the device in question that has whatever backdoor in it they want. They can easily make updates available only to specific devices.

Any iPhone in a default configuration will happily download and install it.

This is RCE (with a slight delay).

Automatic unattended upgrades are on by default in iOS. The configuration wizard flow still has the screen that used to prompt you about them, asking for consent, but now it doesn't give you the choice, it just tells you it's going to update automatically. It's up to you to go into Settings after setup and disable it if you don't want Apple to be able to push new and unknown code to your device arbitrarily.


> This is RCE (with a slight delay).

Eh, that's really a stretch of the definition. Hardly anyone installs software via physical media anymore, so does that mean all code execution is remote code execution?

"Software updates are RCE" is not the greatest take.

> It's up to you to go into Settings after setup and disable it if you don't want Apple to be able to push new and unknown code to your device arbitrarily.

Which you can do very easily.

Regardless, every user notices when Apple has shipped an OS update, because your device rebooted.

Furthermore, software updates can still be reverse engineered after they're released.


They notice there was an update, but they have no clue if that update just decrypted all their messages then reverted itself along with all evidence. If this is just done on select targets, capture and detection is highly unlikely.

A court order on whoever controls the Apple signing keys could do this. I would wager that this is already happening in China since the CCP took control of the signing keys.

If a system is not open, accountable, with distributed reproducible builds each adding their own signature, then you are in fact granting a central party access to execute any code they wish on your device if you accept updates without reverse engineering every single CPU instruction, driver, firmware bundle, and microcode change in every update bundle and sideloading them by hand somehow... which Apple does not permit.


> I would wager that this is already happening in China since the CCP took control of the signing keys.

Which signing keys are you talking about?


The various ones for CPU microcode, drivers, OS images, firmware, firstparty and third party app store updates. Any of these has the power to sign code that will compromise your messages.


Where in the world did you get the idea that the CCP took control of these?


> Regardless, every user notices when Apple has shipped an OS update, because your device rebooted.

I would venture to say that these users notice because of the notification that says "Your device has been updated to iOS <version>", not because of the 2 AM reboot.

That notification, of course, being entirely within Apple's discretion.

Apple can effectively silently run whatever code they want on most iPhones and iPads.


> I would venture to say that these users notice because of the notification that says "Your device has been updated to iOS <version>"

That's what I meant.

> Apple can effectively silently run whatever code they want on most iPhones and iPads.

You may feel that this is a powerful argument, but I'm completely unimpressed by it when anyone can easily go into Settings and turn off automatic updates. ¯\_(ツ)_/¯

I'm not a fan of automatic updates, especially as the default setting, but that's not because I think Apple is going to send down some secretly evil update. Rather, it's just because updates can break stuff that the user relies on, which is no secret.


So the defense you suggest, is for everyone to never accept updates to any software again? Are you serious?


I'm not suggesting any "defense", because I'm not suggesting any attack. You're the one suggesting an attack. I'm worried about Apple making the OS crappier, but I'm not worried about Apple attacking me, or anyone else for that matter.

Once again, you said, "Apple can remotely execute any code they want on your phone at any time", which is what I replied to. Until you're willing to admit that you misspoke, we're done here.


Their OS may give them that capability, or it may not. There is no way to know.


There actually is a way to know. People can and do reverse engineer the code on device. I've done it myself many times.

I would also note that unless you compile an open source project yourself, there's no guarantee that the compiled product you download from an open source project actually came from the publicly available source. If you want to get all conspiratorial, I suggest that you're not conspiratorial enough. ;-)


If Apple is somehow dumb enough to leave an unscrubbed/unfuzzed backdoor in their shipping version of iOS, I'd almost be inclined to call it a red herring.

It stands as a reasonably safe assumption that Apple has been compelled to add obfuscated adversarial code to iOS, just as you can assume BitLocker is about as secure as a Masterlock made out of Swiss Cheese.


No, it's not a safe assumption, and it's not clear that you even know what you're talking about, technically.

You use terms like "unscrubbed", "unfuzzed", "obfuscated", but what do you think they mean exactly? For example, how does "unfuzzed" relate, if at all, to https://en.wikipedia.org/wiki/Fuzzing


I think the core conclusion is perfectly sound. Apple is a capable company - they would not let the user discover malicious code unless they wanted you to find it. They intend for people to reverse-engineer their systems and run it through IDA, Ghidra and Cutter. Tools exist to mitigate that discoverability, and I think it's foolish to assume they don't use them.

The only safe assumption seems to be that they are hiding something. Proving the contrary requires tools we may never have.


> Apple is a capable company - they would not let the user discover malicious code unless they wanted you to find it.

Apple is not capable of magic, or the impossible.

> Tools exist to mitigate that discoverability

Citation needed.

> The only safe assumption seems to be that they are hiding something. Proving the contrary requires tools we may never have.

You can assume whatever you want, but with no evidence whatsoever, I wouldn't call it "safe", I'd call it "irrational".

Also, you didn't answer my question, which suggests that I was right about your not knowing what you were talking about.


> Citation needed.

There are hundreds of commercial and open methods for obfuscating a program during compilation. Since Apple controls Ring 0 they can also obfuscate syscalls and network traffic if they want. I'd ask you if I'm wrong about this, but you already seem convinced.

> You can assume whatever you want, but with no evidence whatsoever, I wouldn't call it "safe", I'd call it "irrational".

It's perfectly rational. Apple already does this in China. We know the CIA, FBI and NSA have a common interest in weakening all domestic cryptography. Taking Apple at face value when so many other FAANG companies have been embarrassed by PRISM revelations is a fool's decision.

It will be irrational once we know for certain that Apple doesn't aid spying on their users. Until then, I figure I'm a safe skeptic. Pending evidence, both of our claims are equally substantiated. Only the uninvested parties have nothing to lose here, though.


> There are hundreds of commercial and open methods for obfuscating a program during compilation.

Obfuscation doesn't make reverse engineering impossible, only a little more difficult.

> It's perfectly rational. Apple already does this in China.

Does what exactly?

Remember, the original claim was "Apple can remotely execute any code they want on your phone at any time."

> Taking Apple at face value

I never said to take Apple at face value.

> It will be irrational once we know for certain that Apple doesn't aid spying on their users.

Which you claim is impossible to know forever, making your belief totally unfalsifiable. How convenient.


> Obfuscation doesn't make reverse engineering impossible, only a little more difficult.

Sometimes it works pretty well. It's easy money to wager the community doesn't fully understand iOS from a reverse-engineered best-guess. Especially if their adversary is a multi-trillion dollar company who anticipates reverse engineers.

> Does what exactly?

Violate their venerable "Commitment to Privacy".

> the original claim was "Apple can remotely execute any code they want on your phone at any time."

It is undeniably feasible, yes. They control the modem, have complete ring-zero knowledge and can send arbitrary data to and from your device. That qualifies pretty much everything needed for remote code execution. If they send data to the iPhone that exploits a decoding bug like NSO Group, then feasibly yes you could execute arbitrary programs on the device, using an unmodified OS image. Never mind just a regularly installed backdoor.

> Which you claim is impossible to know forever, making your belief totally unfalsifiable. How convenient.

I've been wrong before! Let's both hope I've got their character wrong and we get to see the truth someday. Until then we're squabbling over unknowns. It's not my habit to put faith into the unknown, but to each their own.


> Let's both hope I've got their character wrong and we get to see the truth someday.

How would we ever "get to see the truth someday" on your conspiracy theory, if the truth happens to be that there's not actually an intentional remote code exploit?

That's my point: you've made yourself immune from the facts. You can't ever be proved wrong, by your own criteria of "proof", which posits that Apple always has the ability to avoid detection.


> We know the CIA, FBI and NSA have a common interest in weakening all domestic cryptography.

To the extent that the CIA, NSA, or FBI have an interest in weakening cryptography, it applies at least as strongly (more strongly for most of them) to foreign cryptography, so I’d probably drop the modifier “domestic” from that sentence.


You're not wrong about that. The history of their interference with domestic corporations is why I add that in, though. They do have an interest in weakening all cryptography, but the ones they're best-suited to compromise are the ones relying on domestic trust.


You could reverse engineer everything on -one- phone and one version of software and be confident every single CPU instruction involving crypto is sufficiently random, runs in constant time, and is not being exfiltrated. This is a -massive- effort that would also need to rule out hardware backdoors and use some cutting edge techniques, but say you did it.

Now a dissident is targeted. Their phone is sent a special OS hotfix no one in the world has reviewed and that compromises the random number generator at a CPU microcode level that would take months of targeted reverse engineering to detect, then relevant messages are decrypted over the wire with the now weakened keys. It then reverts itself when done, making detection that much less likely.

Whoever has the signing keys makes the rules... or anyone who can give secret orders to those holding the signing keys.


> Now a dissident is targeted. Their phone is sent a special OS hotfix

You're moving the goalposts. What you originally said, which I objected to, was "Apple can remotely execute any code they want on your phone at any time."

In other words, Apple can remotely execute any code they want on my phone and your phone and everyone's phone now.


Anyone that has automatic software updates enabled, the default, yes.

And if you disable those you will not get security updates, thus allowing everyone access to execute code on your phone.


You do not need to compile it yourself. The build just needs to be deterministic and compiled by multiple independent parties you trust. This is the entire point of the reproducible builds project. Participating projects make themselves immune from a single release engineer executing whatever code they want on end clients.
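
The acceptance rule the reproducible-builds project uses can be sketched in a few lines (builder names and artifacts here are illustrative; real verification compares signed artifact hashes the same way):

```python
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def accept_release(builds: dict[str, bytes]) -> bool:
    # Accept only if every independent builder produced bit-identical
    # output; one rogue release engineer can't ship a divergent binary.
    return len({digest(artifact) for artifact in builds.values()}) == 1

honest = {"builder-a": b"release-1.0", "builder-b": b"release-1.0"}
tampered = {"builder-a": b"release-1.0", "builder-b": b"release-1.0+backdoor"}
assert accept_release(honest)
assert not accept_release(tampered)
```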


Yep, same for Google/Android and Microsoft devices


Actually most people are stuck on the free 5GB iCloud plan, and the backup will never complete because there isn't enough space.


This is only the case if you’re using iCloud Backup for your phone, although yes, that’s kind of annoying - presumably this is a usability issue on their end.

Happily there’s now the Advanced Data Protection option - though the docs go into great detail on the “if you forget everything, all your data is gone forever” usability problem.


Apple has done a good job balancing security and usability here.

I can see e2ee being default/recommended for people with iCloud family accounts though as every family member can be part of the account recovery process.


The amount of stuff you can accomplish with "save the children"...


Curious. They develop a feature, receive tons of opposition, go quiet, now they oppose it?

Waiting for this to inevitably pass and for them to get to play the good guy card.


Isn't "encrypted messages" implies the impossibility of any scan?

Oh, could it be that they store all the messages as plain text on the server and "encrypted" is just a meme for a TLS connection?

Rhetorical question, of course.


E2EE messages aren't encrypted until they're sent and are decrypted on the receiving end. Scanning can be mandated to happen at either or both of those points.
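
The ordering matters because the scanner sees plaintext on either side of the encrypted pipe. A toy sketch of a pre-encryption hook (the blocklist entry and XOR "cipher" are placeholders, not any real scheme):

```python
import hashlib

# Hypothetical blocklist entry; real systems use perceptual hashes,
# not exact SHA-256 digests.
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Placeholder cipher standing in for the real E2EE layer.
    return bytes(p ^ k for p, k in zip(plaintext, key))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    # The scan runs before encryption, so E2EE itself is untouched.
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        raise ValueError("match found; reported before encryption")
    return encrypt(plaintext, key)
```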


Just ban privacy, because some criminals benefit from it ...


Many governments are really trying, in the "freest" part of the world.


Computers are the modern equivalent of "my papers" for historical reasons. British cops can't come in and rifle through your papers at home without a warrant, there is simply no reason they should be able to come in and rifle through your phone, computer, or anything else without a warrant. Are MPs completely daft? Do they all look to Xi and Putin as examples of "the way things should be" ?


I don't really see the problem with this. Scanning happens before the message is encrypted or after the message is decrypted (thus preserving the end-to-end encryption itself from compromise), occurs only on the device itself, and will report back only if known illegal content is detected.

It seems to be the best possible method of implementing such scanning. Shouldn't we be supporting this as a proportionate safety measure?


What's been implemented is a warrantless search on every file. The only real restriction is that you have to know what you're looking for. Right now it's looking for this one thing 'for the children', but it could be used for anything. What if a law is passed requiring it to look for copyrighted materials, or unauthorized chemistry books? What if Apple starts searching for, say, leaked source code, or an app they have a patent dispute with? What if Apple does nothing else and keeps it restricted to what they say it will be restricted to, but it becomes more widely adopted by even less ethical manufacturers?

Any hole is a wide hole.


And to think that wide hole isn't already being utilized is folly, despite whatever is said.


Known illegal content will inevitably become possibly illegal content under that scheme given the progress of AI. It will lead to all kinds of negative (and even fatal) consequences given the tendency for parents to have photos of their toddlers bathing and involvement of LEO. Parenting is hard enough without the risk of being murdered by a cop.


Once it is possible to scan the messages, you can bet it won't be limited to CSAM, that's just to get their foot in the door.


As of today every tech platform scans every message, photo, email, etc. for CSAM, yet there isn't misuse of the system. Why would that change under the proposed change?


Is it just CSAM? Really? They don’t search things for anything else? Not at all? And what if Iran tells them they must scan for something else? Obviously they’ll refuse.


1) Iran? Did you forget about sanctions? Apple does not operate at all in Iran. No offices, no employees, no data centers. Nothing. Apple doesn't care what Iran tells them to do lol.

2) I mean they search for spam, malware, copyrighted material (as required by the DMCA), etc.


Define tech platform.


It's proportionate to give partisan governments fond of the nastiest behaviors the ability to read every word that you type to anyone?

Even now, iOS and MacOS have scanning ability that should be assumed is used well beyond the proposed scope.

Anyone who uses these OS's can only come away with the sense that literally everything is logged.


> It's proportionate to give partisan governments fond of the nastiest behaviors the ability to read every word that you type to anyone?

This argument is so absurdly dumb. Today, every tech company will give EVERY single message, text, IP address, photo, email etc. to law enforcement. Yet you rarely see misuse of this system. Why would that change under the proposed rules?


If your argument is "trust them bro", then I'm not the one who is making "absurdly dumb" statements.


N-gate summed it up very well: "if a guy in Cupertino can put a U2 album on your phone, he can just as easily put CSAM on it".

If you are using a device managed by Apple/Google/Microsoft, E2E is merely a roadbump - it means that your device owner needs to push a backdoored update to it, rather than simply reading your data server-side at their leisure.

Which is still valuable, to be clear: it's a pretty large roadbump. Either you're such an important target that they're shipping a specific update to you alone (in which case, frankly, you didn't stand a chance), or they risk exposure.

But if you install a little snitch that reads your messages and reports you if they match some complicated, necessarily hashed pattern, then that roadbump is mostly gone. All the guy in Cupertino now needs to do is update the patterns to match what they want them to match, without anyone noticing.


I think I see your argument: are you saying that because the scanning involves transforming the content into a hash and comparing it to a set of hashes, the problem is that this is resistant to third-party analysis? Not because of the overall behaviour of the scanner or indeed any other software on the device, which can be reverse-engineered using standard techniques, but because a database of hashes is inherently opaque as to the content it will match?


Correct. Is that hashed fingerprint you just downloaded associated with the face of a known victim of abuse, or with that of a whistleblower? Impossible to tell.

(The N-Gate scenario is a little different: someone who doesn't like you controls a random app that is allowed to write to your /Downloads or /Pictures folder. Puts compromising material in it, then lets the well-meaning OS scanner pick it up and report you. You spend the next X years wrangling with law enforcement, trying to convince them that you were "hacked", the same thing every criminal claims.)

If the scanner program merely refused to transmit the potential match, it would be acceptable. No avenue for abuse (without shipping a malicious update, which is always possible for cloud-controlled devices anyway), and at worst an annoyance in the event of a false positive.

But I very much doubt that the parties pushing for this legislation would settle for that.
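
The opacity problem is easy to see in miniature: the scanner code is identical whatever the database is hunting for, and both databases below are just fixed-length hex digests to the device and to any auditor (all entries hypothetical):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def scanner(content: bytes, database: set[str]) -> bool:
    # Same code for any database; nothing reveals what is being targeted.
    return fingerprint(content) in database

csam_db = {fingerprint(b"known-abuse-image")}
whistleblower_db = {fingerprint(b"leaked-memo.pdf")}

# An auditor sees only 64-character hex strings in either case.
assert all(len(h) == 64 for h in csam_db | whistleblower_db)
assert scanner(b"leaked-memo.pdf", whistleblower_db)
```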



