Not a perfect CP detection system (it might flag kids playing in a room with an R-rated movie on a TV in the background), but it would be a good first-pass filter.
Of course, if you've already uploaded a lot of files to Google Drive, running a sanity check like this on them comes too late to save you from Google.
Avoiding putting anything with any risk potential on Google Drive seems like an important precaution against the growing tyranny of automated, irreversible judges and juries.
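For what it's worth, the check-before-you-upload idea is easy to wire up locally. A minimal sketch, assuming you have some locally run classifier that maps an image path to a 0-1 risk score; the classifier, the threshold value, and the function names here are placeholders, not any specific tool:

```python
import os
from typing import Callable

FLAG_THRESHOLD = 0.3  # deliberately conservative: anything remotely suspect gets a manual look

def review_before_upload(folder: str, score_image: Callable[[str], float]) -> list[str]:
    """Walk `folder` and return image paths whose risk score crosses the threshold.

    `score_image` is whatever locally run classifier you trust; it only needs
    to map an image path to a 0-1 score. Nothing here ever talks to a cloud API.
    """
    flagged = []
    for root, _dirs, files in os.walk(os.path.expanduser(folder)):
        for name in files:
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                path = os.path.join(root, name)
                if score_image(path) >= FLAG_THRESHOLD:
                    flagged.append(path)
    return flagged
```

Anything that comes back flagged gets a human look before it leaves your machine.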
Never mind the importance of context, such as distinguishing a partially clothed child playing on a beach from a partially clothed child in a sexual situation.
Do you, though?
Some children look like adults (17 vs. 18, etc.). Some adults look younger than they actually are. How do we tell the difference between porn and art, such as nude scenes in movies or even ancient sculptures? It doesn't seem like an agent would be able to make these determinations without a significant amount of training, and likely added context about any images it processes.
A scanning system will never be perfect. But there is a better approach: what the FTC now requires Pornhub to do. Before an image is uploaded, the platform scans it; if it’s flagged as CSAM, it simply never enters the system. Platforms can set a low confidence threshold and block the upload entirely. If that creates too many false positives, you add an appeals process.
The key difference here is that upload-scanning stops distribution before it starts.
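To make the difference concrete, here is a rough sketch of upload-time gating; the detector, the threshold value, and the appeals-queue interface are illustrative assumptions, not Pornhub's or any platform's actual implementation:

```python
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.2  # low on purpose: err toward blocking, rely on appeals for the rest

@dataclass
class UploadDecision:
    accepted: bool
    appeal_id: str | None = None

def handle_upload(image_bytes: bytes, scorer, appeals_queue) -> UploadDecision:
    """Score the image before it is ever written to storage.

    `scorer` is a placeholder detector returning a 0-1 confidence that the
    image is CSAM; `appeals_queue` is any object with an `open_ticket` method.
    """
    score = scorer(image_bytes)
    if score >= BLOCK_THRESHOLD:
        # The file never enters the system; the uploader gets a way to contest it.
        ticket = appeals_queue.open_ticket(reason="automated pre-upload block", score=score)
        return UploadDecision(accepted=False, appeal_id=ticket)
    # Only below-threshold images proceed to storage and distribution.
    return UploadDecision(accepted=True)
```

The point of the low threshold is that the cost of a false positive is an appeal, not a destroyed account.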
What Google is doing is scanning private cloud storage after upload and then destroying accounts when their AI misfires. That doesn’t prevent distribution — it just creates collateral damage.
It also floods NCMEC with automated false reports. Millions of photos get flagged, but only a tiny fraction lead to actual prosecutions. The system as it exists today isn’t working for platforms, law enforcement, or innocent users caught in the blast radius.
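The "tiny fraction" point is essentially base-rate arithmetic. A quick sketch with made-up, order-of-magnitude numbers (illustrative assumptions, not Google's or NCMEC's actual figures):

```python
# Illustrative base-rate arithmetic (made-up numbers, not real statistics).
prevalence = 1e-6            # assume 1 in a million scanned photos is actually CSAM
true_positive_rate = 0.99    # assume the classifier catches 99% of real cases
false_positive_rate = 0.001  # and wrongly flags 0.1% of innocent photos

photos_scanned = 1_000_000_000  # a billion photos

true_hits = photos_scanned * prevalence * true_positive_rate
false_hits = photos_scanned * (1 - prevalence) * false_positive_rate
precision = true_hits / (true_hits + false_hits)

print(f"true reports:  {true_hits:,.0f}")
print(f"false reports: {false_hits:,.0f}")
print(f"share of reports that are real: {precision:.2%}")
# With these assumptions: roughly 990 real cases against about 1,000,000 false
# reports, so only around 0.1% of automated reports point at actual material.
```

Even with a classifier this accurate, nearly every report generated at scale is a false one, which is exactly the flood described above.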