https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
The screenshots/videos of it are pretty wild, and it's insane that they're editing creators' uploads without consent!
edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre...
The result is a smoothing of skin, and applied heavily to video (as YouTube does; just look at any old video that was HD years ago) it would look this way.
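As a toy illustration of why coarse quantization reads as "smoothing": codecs in the JPEG/H.264 family transform each block, then quantize the frequency coefficients, and an aggressive quantizer rounds the small high-frequency coefficients (fine skin texture) to zero while the DC/low-frequency terms survive. This is not YouTube's actual pipeline, just a minimal sketch of the standard DCT-quantize-reconstruct step with made-up parameters:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def quantize_block(block, qstep):
    # Transform an 8x8 block, coarsely quantize the coefficients
    # (the lossy step), then inverse-transform.
    D = dct_matrix(8)
    coeffs = D @ block @ D.T
    coeffs = np.round(coeffs / qstep) * qstep
    return D.T @ coeffs @ D

rng = np.random.default_rng(0)
# A flat patch plus fine "skin texture" noise.
block = 128 + 4 * rng.standard_normal((8, 8))

mild = quantize_block(block, qstep=2)    # gentle quantization
harsh = quantize_block(block, qstep=32)  # aggressive quantization

# The harsh quantizer zeroes the high-frequency coefficients,
# leaving a nearly flat, waxy-looking block.
print("texture std:", np.std(block), "->", np.std(harsh))
```

Adaptive quantization just varies `qstep` per region based on a perceptual model, so "flat-looking" areas like skin get the harsh treatment.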
That being said, I don't believe they should be doing anything like this without the creator's explicit consent. I do personally think there's probably a good use case for machine learning / neural network tech applied to cleaning up low-quality sources (for better transcoding that doesn't accumulate errors and therefore waste bitrate), in the same way that RTX Video Super Resolution can do some impressive deblocking & upscaling magic[2] on Windows. But clearly they are completely missing the mark with whatever experiment they were running there.
[1] https://www.ynetnews.com/tech-and-digital/article/bj1qbwcklg
[2] compare https://i.imgur.com/U6vzssS.png & https://i.imgur.com/x63o8WQ.jpeg (upscaled 360p)
Here's some other creators also talking about it happening in youtube shorts: https://www.reddit.com/r/BeautyGuruChatter/comments/1notyzo/...
another example: https://www.youtube.com/watch?v=tjnQ-s7LW-g
https://www.reddit.com/r/youtube/comments/1mw0tuz/youtube_is...
https://www.bbc.com/future/article/20250822-youtube-is-using...
Google has already matched H.266. And this was over a year ago.
They've probably developed some really good models for this and are silently testing how people perceive them.
https://blog.metaphysic.ai/what-is-neural-compression/
Instead of artifacts in pixels, you'll see artifacts in larger features.
https://arxiv.org/abs/2412.11379
Look at figure 5 and beyond.
https://www.theguardian.com/society/2025/dec/05/ai-deepfakes...
The key section:
> Rene Ritchie, YouTube’s creator liaison, acknowledged in a post on X that the company was running “a small experiment on select Shorts, using traditional machine learning to clarify, reduce noise and improve overall video clarity—similar to what modern smartphones do when shooting video.”
So the "AI edits" are just a compression algorithm that is not that great.
It's difficult for me to read this as anything other than dismissing this person's views as unworthy of discussion because they are "non-technical," a characterization you objected to. If you feel this shouldn't be the top-level comment, I'd suggest you submit a better one.
Here's a more detailed breakdown I found after about 15 minutes of searching; I imagine there are better sources out there if you or anyone else cares to look harder: https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...
To me it's fairly subtle, but there's a waxy texture to the second screenshot. This video presents some more examples, some of them more textured: https://www.youtube.com/watch?v=86nhP8tvbLY
Edit: The changes made by the AI are a lot more visible in the higher-quality video uploaded to Patreon: https://www.patreon.com/posts/136994036 (this was also linked in the pinned comment on the YouTube video)
https://en.wikipedia.org/wiki/YouTube_Poop
Here's one such Google paper: