Netflix, Amazon, YouTube, PornHub, etc... they're all accomplishing little more than "similar to the one, and only one, item you last saw", with dramatic shifts in "profiling" after just one or two videos.
Actually, Netflix acknowledges this and splits the recommendations into "Because you watched X..." rows, so at least it covers a greater range (e.g. the last 5 things seen).
I'm damned sure they could be much more useful if they let me tell them what I like, by implementing rating systems that are worth using (e.g. the ability to browse and edit previous ratings in a sane fashion).
but user-useful recommendation is not the actual goal, so really it's just that our metrics are wrong. It's probably great according to view counts.
That is, the user is capable of efficiently informing the engine of their taste, and there's significant incentive for the user to consistently re-evaluate their ratings (playlists), so the data can be trusted to be up to date.
Another very important aspect is that playlists are useful enough to the user that they actually want to maintain them.
For example, Amazon, Netflix and PornHub all have rating systems, but they're not at all useful. The interface isn't useful enough for reviewing and reflecting on, it's not comprehensive enough to keep as a primary list (because it only covers what they offer, which is very limited), and there's of course no impact on the recommendation engine (because the rating systems are not worth using; chicken and egg). No sane person would touch the things (beyond "upvoting", which isn't significantly related to taste).
IMO ratings are absolutely vital to useful recommendation, but they've been totally neglected.
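To make that concrete, here's a minimal sketch (Python, with an invented ratings matrix, not any platform's actual engine) of item-item collaborative filtering driven purely by explicit ratings: exactly the kind of model that a browsable, editable rating history would feed directly.

```python
# Minimal sketch, not any platform's real engine: item-item collaborative
# filtering driven purely by explicit user ratings.
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = items,
# values = 1-5 stars, 0 = unrated. All data here is made up.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 3],
    [1, 0, 4, 5, 4],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns, using only users who rated both."""
    rated = R > 0
    n_items = R.shape[1]
    sims = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            both = rated[:, i] & rated[:, j]
            if not both.any():
                continue
            a, b = R[both, i], R[both, j]
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            sims[i, j] = (a @ b) / denom if denom else 0.0
    return sims

def recommend(R, user, k=2):
    """Score each unrated item for one user as a similarity-weighted
    average of that user's own explicit ratings, best first."""
    sims = item_similarity(R)
    already_rated = np.where(R[user] > 0)[0]
    scores = {}
    for item in np.where(R[user] == 0)[0]:
        w = sims[item, already_rated]
        if w.sum() > 0:
            scores[item] = float(w @ R[user, already_rated] / w.sum())
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(recommend(ratings, user=0))  # items user 0 hasn't rated, best first
```

The point of the sketch is that the quality of the output is bounded by the quality of the explicit ratings going in, which is why a rating interface worth maintaining matters.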
That said, you can indeed tell them what you like, but by way of negative space. When you get bombarded with obviously horrible recommendations, do the two-step process of clicking 'Not Interested' (if possible without even watching the video; you can check it in incognito mode, assuming they're not watching that even more closely) and then 'Tell us why', and respond 'I'm not interested in this channel: "undesirable video maker"'.
That assumes you can be sure you want to nuke the channels and subjects in question, but when it's clickbait channels and/or alt-right propaganda it's generally easy to identify and hard to get wrong. I'm sure the same would be true for left-wing propaganda, but the stuff I don't want pushed on me has a whole language and lexicon that's easily recognizable by video title, channel title and attempted clickbait image. If stuff trips my sensors on those grounds, I'm generally comfortable nuking it unseen.
I think quite a lot has to do with machine learning working out that if you panic the human animal, they pay attention to threats, and therefore to maximize engagement, 'if it bleeds, it leads' (an old newspaper maxim). Newspaper editors can (though they might not) apply a social-benefit heuristic or a sense of social shame (not wanting to be a 'muckraker' or troublemaker); machine learning may not even start with such a concept.
If engagement were maximized by turning viewers into cannibalistic humanoid underground dwellers (CHUDs), machine learning would simply note that and run as hard as it could in that direction, since it has no larger context in mind unless programmed to do so.
(Such a larger context is actually sort of controversial: a lot of people demonize the very concept of social justice, and without it you get these hacks that maximize engagement by tapping into really unmanageable human/animal behaviors.)
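As a toy illustration of that point (invented numbers and a hypothetical "harm" score, not any real platform's code): a ranker that only maximizes predicted engagement will happily surface the panic-inducing items, and the larger context only shows up if it's explicitly added to the objective.

```python
# Toy sketch: ranking candidates by predicted engagement, with an optional
# penalty term standing in for the "larger context". All numbers invented.
candidates = [
    # (title, predicted_watch_minutes, estimated_harm_score 0..1)
    ("calm explainer",        4.0, 0.0),
    ("outrage clickbait",     9.0, 0.8),
    ("apocalyptic thumbnail", 8.5, 0.9),
    ("niche hobby video",     5.5, 0.1),
]

def rank(items, harm_weight=0.0):
    """Sort by engagement minus a weighted harm penalty.
    harm_weight=0 is pure engagement maximization."""
    return sorted(items, key=lambda c: -(c[1] - harm_weight * 10 * c[2]))

print([t for t, *_ in rank(candidates)])                   # engagement only
print([t for t, *_ in rank(candidates, harm_weight=1.0)])  # with larger context
```

With the penalty at zero, the clickbait and the apocalyptic thumbnail come out on top; give the harm term weight and the ordering flips, which is the whole "unless programmed to do so" caveat in miniature.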