cross-posted from: https://lemmy.ml/post/6469594
How to contact your MEP.
Terrorists will have no problem writing their own encryption program, and more ordinary citizens will install malicious apps from unofficial app stores.
Writing your own is hard. They won’t have a problem illegally using Signal
Ah… terrorist, the magic word. That’s why, since 2015, you can’t have a SIM card in the EU that isn’t tied to your ID or passport. Terrorist actions allowing a state entity to drop 4,000 t of explosives on civilians in a weekend… yep yep…
More seriously (though I wasn’t totally kidding), your non-tech relatives and friends are all on WhatsApp/Insta/Messenger; good luck moving them.
I have helped a little with some ongoing research on the subject of client-side-scanning in a European research center. Only some low-level stuff, but I possess a solid background in IT security and I can explain a little what the proposition made to the EU is. I am by no means condoning what is proposed here. I myself, based on what experts have explained, am against the whole idea because of the slippery slope it creates for authoritarian governments and how easily it can be abused.
The idea is to use perceptual hashing to create a local or remote database of known abuse material (Basically creating an approximation of already known CP content and hashing it) and then comparing all images accessible to the messaging app against this database by using the same perceptual hashing process on them.
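To make the hash-and-compare step concrete, here’s a toy sketch of one simple perceptual-hashing scheme (“average hash”) in Python. Real systems like PhotoDNA or Apple’s NeuralHash are far more sophisticated; this only illustrates the fingerprint-and-compare idea. The 8×8 grayscale input and the distance threshold are assumptions for illustration, not anything from the actual proposal.

```python
# Toy "average hash": a crude stand-in for real perceptual hashing.
# Assumes the image has already been shrunk to an 8x8 grayscale grid.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255) -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # each bit records whether the pixel is brighter than average
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(img_hash, database, threshold=5):
    """Flag the image if it is 'close enough' to any known hash."""
    return any(hamming(img_hash, h) <= threshold for h in database)
```

Because matching is by Hamming distance rather than exact equality, a re-encoded or slightly edited copy can still hit the database — which is the whole point of using perceptual rather than cryptographic hashes, and also where the false positives come from.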
It’s called Client-Side-Scanning because it simply circumvents the encryption process. Circumvention in this case means that the process happens outside of the communication protocol, either before or after the images, media, etc., are sent. It does not matter that you use end-to-end encryption if the scanning is happening on your data at rest on your device and not in transit. In this sense it wouldn’t directly have an adverse effect on end-to-end encryption.
Some of the most obvious issues with this idea, outside of the blatant privacy violation are:
- Performance: how big is the database going to get? Do we ever stop including stuff?
- Ethical: Who is responsible for including hashes in the database? Once a hash is in there it’s probably impossible to tell what it represents; this can obviously be abused by unscrupulous governments.
- Personal: There is heavy social stigma associated with CP and child abuse. Because of how they work, perceptual hashes are going to create false positives. How are these false positives going to be addressed by the authorities? Because when the police come knocking on your door looking for CP, your neighbors might not care or understand that it was a false positive.
- False positives: the false-positive rate for single hashes is going to stay roughly the same, but the bigger the database gets, the more false positives there are going to be. This will quickly lead to problems managing false positives.
- Authorities: local authorities are generally stretched thin and have limited resources. Who is going to deal with the influx of reports coming from this system?
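To put a rough number on the false-positive and authorities points above: if every image is compared against every hash, the expected number of false alarms scales with both the database size and the message volume. All figures below are made-up assumptions purely to show the scaling, not real statistics.

```python
# Back-of-envelope sketch of false-positive scaling.
# Every number here is an assumption for illustration only.

per_comparison_fp = 1e-10        # assumed false-positive rate per hash comparison
db_size = 1_000_000              # assumed number of hashes in the database
images_per_day = 5_000_000_000   # assumed EU-wide daily image volume

comparisons = db_size * images_per_day
expected_false_positives = comparisons * per_comparison_fp
print(f"{expected_false_positives:,.0f} false reports per day")
```

Even with a per-comparison rate that tiny, these assumed volumes yield hundreds of thousands of false reports per day. Real deployments would use smarter matching than brute-force comparison, but the pressure of scale on whoever reviews the reports remains.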
This is a really nice summary of the practical issues surrounding this.
There is one more that I would like to call out: how does this client-scanning code end up running on your phone? i.e. who pushes it there and keeps it up to date (and, by consequence, the database)?
I can think of a few options:
- The messaging app owner includes this as part of their code, and checks every msg/image/etc. before send (/receive?)
- The phone OS vendor puts it there, bakes it as part of the image store/retrieval API - in a sense it works more on your gallery than your messaging app
- The phone vendor puts it there, just like they already do for their branded apps.
- Your mobile operator puts it there, just like they already do for their stuff
Each of these has its own problems/challenges. How to compel them to insert this (ahem “backdoor”), and the different risks with each of them.
Another problem: legislation like this cements the status quo. It’s easy enough for large incumbents to add features like this, but to a handful of programmers trying to launch an app from their garage, this adds another hurdle into the process. Remember: Signal and Telegram are only about a decade old, we’ve seen new (and better) apps launch recently. Is that going to stop?
It’s easy to say “this is just a simple hash lookup, it’s not that big a deal!”, but (1) it opens the door to client-side requirements in legislation, and it’s unlikely to stop here, (2) if other countries follow suit, devs will need to implement a bunch of geo-dependent (?) lookups, and (3) someone is going to have to monitor compliance and make sure images are actually being verified, which also opens small companies up to difficult legal actions. How do you prove your client is complying? How can you monitor to make sure it’s working without violating user privacy?
Also: doesn’t this close the door on open software? How can you allow users to install open-source messaging apps, or (if the lookup is OS-level) Linux or a free version of Android that they’re able to build themselves? If they can, what’s to stop pedophiles from just doing that and disabling the checks?
If you don’t ban user-modifiable software on phones, you’ve just added an extra hurdle for creeps: they just need to install a new version. If you do, you’ve handed total control of phones to corporations, and especially big established corporations.
I get the concept, but this doesn’t really offer any advantage over just not encrypting anything at all. The database being checked against can still include a hash of something the government doesn’t like, and boom, you have a complete tool for absolute censorship of everything.
I’m deeply against this ridiculous proposal.
But scanning of messages already happens, tbf, for spell checking, emoji replacement, links to known infectious sites.
Photocopiers do client-side scanning to prevent copying of money.
There are precedents.
I hate this proposal. But let’s be straight about the facts: The phone has full access to everything you send and receive already. This isn’t the same as having an encryption back door.
Thanks for the explanation. Do you know how they’re planning to implement this client-side scanning? Take an iPhone, for example, where Apple has already ditched its plans to do the same device-wide. Is it planned for WhatsApp, Signal, etc. to be updated to force perpetual scanning of the iPhone’s photo album? Because that can be turned off quite easily at the OS level.
The only way I could see them doing it is by scanning any image that is selected to be sent, before the actual message itself is sent, i.e. after it’s selected but before the send button is pressed. Otherwise it’s breaking the E2E encryption.
Is that the plan?
Client-Side-Scanning is going to be implemented by the messaging app vendor. This means that it’s limited by OS or browser sandboxing, and therefore it’s definitely limited to what the messaging app has access to. However, I’m not sure what the actual scope would be, meaning whether all accessible images are going to be scanned or only the ones being transmitted to someone.
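A purely hypothetical sketch of where such a hook could sit in an app’s send path, assuming the scan runs on the plaintext after the image is selected but before encryption. Every name here is invented for illustration; no real app exposes this API.

```python
# Illustrative only: the scan runs on the plaintext image *before*
# it enters the end-to-end-encrypted channel, so the E2EE protocol
# itself is never modified. All names are hypothetical.

def send_image(image_bytes, hash_fn, blocklist, encrypt_and_send, report):
    h = hash_fn(image_bytes)           # perceptual hash of the plaintext
    if h in blocklist:                 # in practice: fuzzy match, not exact lookup
        report(h)                      # e.g. forwarded to some authority
    encrypt_and_send(image_bytes)      # encryption happens only after the scan
```

This is why “it doesn’t break E2EE” and “it circumvents E2EE” can both be true: the ciphertext is untouched, but the plaintext is inspected before it ever gets encrypted.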
People on Reddit, and sometimes here, always praise the EU as some bastion of privacy, and I always got downvoted when I said that this isn’t always true. And now here we are. I hope people don’t forget this after a month, like they always do.
They will, and you’re screaming into the wind sadly.
What you can do is never forget, and make this a priority in your voting decisions going forward. Endorse and support companies that protect privacy.
It’s a long uphill battle and every little thing can help no matter how small.
citizens have the right to private communication.
Not in places where constitutions are not the ultimate authority AND are written such that they form negative rights by only limiting the government’s power. That’s in all those places whose immigrants to America get on TV and call America’s constitution anachronistic.
You forgot to mention: a constitution that is written (and properly commented) in such a way that it doesn’t require any interpretation; that receives periodic review and updating according to cultural and historical developments; and that carries actual punishment for lawmakers who violate it. Not that I know of any such thing.
I sometimes wonder about this. I hugely value my private communication, and I grew up in a world with that ideal. But with the rise of more cleverly invasive apps and tracking, and ease of someone else putting a video of you online, and so on, I sometimes think about a world where non face-to-face communication isn’t private any more.
I don’t know what I think of that world.
After all, we haven’t always had private, at-a-distance communication, especially for all people.
We always have. Many people wrote personal notes/letters in cryptic ways to prevent unwanted readers from deciphering them.
Imagine a world where we would teach children not to make their own cypher because it’s illegal. What a dystopian society.
Kind of, but written communication for everyone hasn’t even always been a thing. And cryptic letters perhaps aren’t reliable secrecy for ordinary people against trained spying. And anonymity… not without other layers to your communication. And all of that not for your ordinary postcard home: it’s something you do in special situations.
I don’t think the new law would outlaw encrypting messages to your friend with PGP; nor having a second phone that you leave at the library for anonymity.
Benjamin Franklin once said: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”
This still applies.
But what liberty is essential? Provably secret postcards to people on the other side of the world?
That’s also quite a harsh quote to bring into the context of the many hidden erosions of privacy. Would you say the TikTokers don’t deserve privacy or safety because they chose that social ability over a privacy they little understand?
Tangential, but Lemmy is filled with smart people so I’m going to ask: is it possible to legally make it impossible for wireless signals to work within your own home? That is, how would one dampen access to wireless networks? Would this require illegal use of signal jamming devices as I imagine a Faraday cage would be too difficult to make in a room.
Edit: where else on Lemmy could I ask this sort of question?
The FCC has a lot of regulations on it. From what I remember, active jamming within the home is still wildly illegal. Depending on the size of your house/room, a Faraday cage wouldn’t be too difficult, especially if you did it during construction. If you’re on a budget and don’t mind looking crazy, you can line a closet with tinfoil and connect it to ground.
What is wrong with the eu? Why do they need to always ban end to end encryption?
Lobbyists are probably the one reason they haven’t passed such anti-privacy laws, actually.
As I remember, at the moment it’s partly von der Leyen, the current Commission president. She is a German Christian Democrat, and apparently with a capital C, meaning she has a bit of a moral-panic streak of the “won’t you think of the children” variety. As I understand it, this current proposal is very much driven by her.
However, her driving it doesn’t mean it will sail through to pass as legislation. Some whole member-state governments are against the encryption-busting idea.
I’m sure they will tell you it’s weighing the security (against terrorists, criminals, etc) of the many against the security (from seeing dick pics or messaging a mistress) of the few.