I recently shared some thoughts on LinkedIn about proposed CSAM regulations, and decided to share them here as well, since, unfortunately, this topic is quite personal for me. I will also expand on them a bit.
Importance
While I am not a victim of sexual abuse (at least not that I consciously remember), I am a victim of physical abuse connected to sexuality. What's worse, and I may regret ever mentioning this publicly, my teen years happened to fall during the WAP internet era, and I saw and heard things that I would really love to forget but can't. In addition, a victim of such abuse once shared a personal story with me (I still don't understand how that person trusted me enough to do so).
Being empathic by nature and feeling some things very deeply, I was affected greatly; the realization that such cruelty exists nearly broke me, until I wrote one specific story that helped me straighten out my mind. It can be read here, but please do not read it unless you have the stomach for it: allegedly it has even made some people vomit.
Non-existent tech
The gist of the proposal is that chat(-like) platforms should monitor chats for CSAM, flag the people and chats where it was detected, decrypt the chats, and deanonymize the people involved. The main idea is to use a "hash database" storing hashes of known CSAM media files. My guess is that this has been proposed by someone who has no idea how hashes work and how easy they are to circumvent. Or maybe by someone who intentionally wants to leave a huge loophole to allow themselves to distribute such content. Hashes can certainly help, but on their own they are pretty much useless.
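To show just how fragile exact-match hashing is, here is a minimal sketch assuming the database stores ordinary cryptographic hashes (SHA-256 here). Perceptual hashes tolerate more modification, but they too can be evaded with enough re-encoding; the point stands either way. The file contents are a placeholder.

```python
import hashlib

# Pretend this is the raw content of a known, flagged media file whose
# hash is already in the "hash database".
original = b"...raw bytes of a flagged media file..."
known_hashes = {hashlib.sha256(original).hexdigest()}

# Flip a single bit (re-encoding, cropping, or resizing the file would have
# the same effect): the new hash shares nothing with the original one,
# so the database lookup silently misses.
modified = original[:-1] + bytes([original[-1] ^ 0x01])

print(hashlib.sha256(original).hexdigest() in known_hashes)  # True
print(hashlib.sha256(modified).hexdigest() in known_hashes)  # False
```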
The only way to automatically monitor such content would be AI with some sort of computer vision (or image recognition in the more general sense) that scans media files. But even then, just put the files in a password-protected zip (let alone a properly encrypted one) and the scanning becomes pointless.
But even if we assume there are ways to make this work, no such system exists yet, and you need one before you can enforce anything. You would need a centralized system, probably with decentralized P2P-like storage and extremely high security standards, that allows communication from different platforms. Probably through an open API (with access for platform holders) and a simple registration flow, so it can be used by any platform, not just big ones like WhatsApp, Viber, Telegram, Signal, or whatever else.
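To make the "any platform can plug in" idea a bit more concrete, here is a rough sketch of what a client for such an open API might look like. Every name here, the base URL, the endpoints, the fields, is my own invention for illustration; nothing like this exists yet, which is exactly the point.

```python
# Hypothetical client-side flow: a platform registers with the central
# system, then submits a flagged chat. All endpoints are invented.
import requests

BASE = "https://central-registry.example.org/api/v1"  # hypothetical URL


def register_platform(name: str, contact_email: str) -> str:
    """Simple registration flow: any platform, big or small, gets an API key."""
    resp = requests.post(f"{BASE}/platforms", json={
        "name": name,
        "contact_email": contact_email,
    })
    resp.raise_for_status()
    return resp.json()["api_key"]


def submit_flagged_chat(api_key: str, payload: bytes) -> str:
    """Submit a copy of a flagged chat, already re-encrypted for the central system."""
    resp = requests.post(
        f"{BASE}/submissions",
        headers={"Authorization": f"Bearer {api_key}"},
        data=payload,
    )
    resp.raise_for_status()
    return resp.json()["case_id"]
```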
If there is an AI component, it should run on the device to minimize potential data leaks. That would imply that for web-based communications and exchanges you may need to implement something in browsers and/or OSs as well. Not impossible, but definitely not something that can be done right now, because it simply does not exist. Oh, and the system needs to be universal, as in covering all countries, not just one or two or a dozen.
Watchful eye
Then there is the question of how things should be monitored. Let's say a local AI detects something suspicious, whether a media file or some text - what then? Simply stripping encryption from the whole chat is not a good idea, because of false positives. Flagging everyone in a chat or group because someone sent a suspicious media file is also not a good idea (it can be abused to harass people). So what, then?
I would think that if the local system finds something suspicious, it should submit a copy of the text messages and suspicious media files to the central system instead of the platform holders, probably without clear identification of the participants. The copy is "decrypted" in the sense that it is no longer encrypted with keys from the submitting platform, but it still needs to be encrypted at rest on the centralized system processing it.
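One way to do the "without clear identification" part, while still allowing the repeat-offender metadata mentioned below, would be keyed pseudonyms instead of raw user IDs. This is purely my own sketch of the idea, with an invented key and scheme:

```python
# The platform replaces each participant ID with a keyed hash (HMAC).
# The central system can correlate repeat appearances of the same pseudonym
# across cases without ever learning the real ID; only the platform, under
# a court order, could map a pseudonym back to a user.
import hashlib
import hmac

PLATFORM_SECRET = b"kept-by-the-platform-never-shared"  # hypothetical key


def pseudonymize(user_id: str) -> str:
    return hmac.new(PLATFORM_SECRET, user_id.encode(), hashlib.sha256).hexdigest()


# The same user always maps to the same pseudonym, which enables metadata
# like "many false positives from the same people" without deanonymization.
assert pseudonymize("user-42") == pseudonymize("user-42")
```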
The chat received by the system should be time-limited, say a week by default. During that week it needs to be reviewed and clearly marked as a false or true positive (by two people, maker-checker style). If it is not marked in time, it is not removed but "canned" and automatically escalated as a potential miss. Misses are not to be removed; they are to be publicly reported on a weekly basis (via some statistics website, I guess - obviously not full chat disclosure).
If the chat is a false positive, it is removed a week after being marked as such; only logs remain, with some metadata for reference in future cases involving the same people (which is why obfuscation is probably the way to go, rather than removing IDs entirely). The metadata may also be used to highlight chats with many false positives, indicating that maybe something is being missed.
If the chat is an actual positive, a report is submitted to the platform holder and to the respective local authorities where they are known; if not, to some escalation point in the organization handling this system. These reports need to result in an actual court order that allows deanonymization of the suspects, followed by the respective process of arrest and prosecution.
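Pulling the last three paragraphs together, the review lifecycle could look roughly like the sketch below. The retention and escalation rules are the ones just described; the names, structure, and everything else are my own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=7)  # "say a week, by default"


@dataclass
class Case:
    received_at: datetime
    verdicts: list[str] = field(default_factory=list)  # "false" / "true"
    state: str = "pending"  # pending / false_positive / true_positive / canned
    marked_at: datetime | None = None

    def review(self, verdict: str, now: datetime) -> None:
        """Maker-checker: two matching verdicts are needed to mark the case."""
        self.verdicts.append(verdict)
        if len(self.verdicts) >= 2 and self.verdicts[-1] == self.verdicts[-2]:
            self.state = f"{verdict}_positive"
            self.marked_at = now

    def tick(self, now: datetime) -> str | None:
        """Apply the time-based rules described above."""
        if self.state == "pending" and now - self.received_at > REVIEW_WINDOW:
            self.state = "canned"  # publicly counted as a potential miss
            return "escalate"
        if self.state == "false_positive" and now - self.marked_at > REVIEW_WINDOW:
            return "purge"  # remove content; keep logs and pseudonymous metadata
        if self.state == "true_positive":
            return "report"  # report to platform holder and local authorities
        return None
```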
Oh, and of course, access to the decrypted copy of the data should be allowed only with proper MFA (2FA at least), possibly with multiple levels of approval, and in a time-limited manner. With extensive non-purgeable logs (or logs stored for something like 100 years), for any person who may have access to the data. Ideally with ways to limit potential copies, screenshots, photos, and whatnot.
For me, the biggest question is whether the affected users should be notified that the chat and/or media were flagged by the system. In some ways it may be good for transparency's sake, but if it is an actual positive hit, the perpetrator may flee. It would also require a feedback channel between this system and the platform holders, which could risk deanonymization when it is not needed. I am leaning towards not letting anyone know, but I am open to debate.
Rehabilitation
This will not be a complete solution. Even if you add ways to prevent CSAM (I am not even sure what those would be, to be honest), a big question remains: what to do with the offenders? The emotional response is to kill them. I get it. Every time I read news about CSAM, it triggers me. But the pragmatist in me thinks it is a waste.
The thing a lot of people forget is that with any crime, "the why" is important. Depending on it, some criminals can be rehabilitated. I do not have success-rate figures, but I am pretty sure it is not zero. From what I hear, these particular offenders can be rehabilitated as well, since what at least some of them suffer from is a mental disorder. And that is where a big problem lies: the general population does not want any resources spent on these offenders. We need to find ways to change that perception.
This is not to say that victims, or their parents or friends, should necessarily "forgive" the offenders who harmed them, no. But offering help to control and overcome the urges that prompted them to violence is… humane. And if not that, then from a practical perspective, they can still become useful to society by doing something. Potentially in some controlled environment, yes, but still. There are even prevention organizations that work with people who have not yet become offenders (like the German Prevention Project Dunkelfeld), and that can be an even better tool for the long term.
Unfortunately, I do not know how to shift society's thinking from bloody rage (justifiable at that!) to something this constructive. Even in my own thoughts, I still have this nagging bias telling me that no, they should just be purged. But perhaps the monitoring system could be used to help diagnose people beforehand? Perhaps the false positives could be used to send back something like: "Our system found that you may have these tendencies; we recommend contacting organization X in your region to seek help." It could be used for some other forms of abuse, and for monitoring certain mental disorders, too.
But anyway, the current proposal is just undercooked. It feels haphazard, as if made for the sake of ticking a checkbox rather than actually trying to solve the problem, and, in turn, it can cause a lot of other problems on top of that. We can and should do better.