This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ: the New York Times may not like the results, but these techniques work for the vast majority of users on any service with too much content to curate manually.
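For context, here's a minimal sketch of what item-based collaborative filtering amounts to. The ratings matrix is made-up toy data, and real systems (YouTube's included) are vastly more elaborate, but the core idea of "recommend items similar to what you've already watched" is this simple:

```python
import numpy as np

# Toy user-item matrix: rows are users, columns are videos,
# entries are implicit feedback (1 = watched, 0 = not watched).
# Entirely fabricated data for illustration.
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
], dtype=float)

def item_similarity(r):
    """Cosine similarity between item (column) vectors."""
    norms = np.linalg.norm(r, axis=0)
    norms[norms == 0] = 1.0           # avoid division by zero
    normalized = r / norms
    return normalized.T @ normalized

def recommend(r, user, k=2):
    """Score unseen items by similarity to what the user watched."""
    sim = item_similarity(r)
    scores = sim @ r[user]            # aggregate similarity to watched items
    scores[r[user] > 0] = -np.inf     # don't re-recommend already-seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(ratings, user=1))     # top-2 video indices for user 1
```

Note that nothing in this objective says anything about whether the recommended content is "healthy"; it only optimizes for similarity to past behavior, which is exactly the property the article is criticizing.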
YouTube is a _disastrously_ unhealthy recommender system, and they've let it go completely out of control.
Can you explain in more detail?
I use YouTube as a crowdsourced "MOOC"[0], and the algorithm usually recommends excellent follow-up videos for most topics.
(On the other hand, their attempt at matching "relevant" advertising to the video is often terrible, e.g. Sephora makeup ads shown to the male-dominated audience of audiophile gear videos. Leaving aside the weird ads, the algorithm works very well for educational videos that interest me.)
[0] https://en.wikipedia.org/wiki/Massive_open_online_course
Yes, I was aware of Elsagate.[0] I don't play games, so I didn't realize every gaming video ends up recommending unwanted far-right and Ben Shapiro videos.
I guess I should have clarified my question. I thought gp's "unhealthy" meant YouTube's algorithm was bad for somebody like me who views mainstream, non-controversial videos. (An analogy might be gp (rspeer) warning me that asbestos and lead paint are actually carcinogenic but the public doesn't know it.)
It could be the type of games involved, since I usually watch strategy, 4X, city-building, and military sims. I usually get History Channel-style documentaries or "here's how urban planning works in the real world" videos recommended, which suits me fine. Somebody whose gaming preferences involve killing Nazis in a WW2-era FPS might be more likely to get videos that have neo-Nazis suggesting we kill people.