Sounds again like hyperbole from the NYT.
I find it more interesting to consider what would actually be a good outcome for viewers. I suppose originally all these recommender algorithms simply optimized for viewer engagement, which obviously isn't always the best outcome for the consumer: enraging content, for example, may keep people on a platform longer. It would be "better" for a viewer to see more educational content, and even to log off after a while.
But how would you even quantify that, for the algorithm to be able to train for it?
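For what it's worth, here's a toy sketch of what "training for it" could look like once you did pick some proxies. Every signal name and weight below is made up for illustration; none of it reflects any platform's actual metrics, and the hard part the question points at remains: choosing the proxies and weights is a value judgment, and every proxy can be gamed.

```python
def engagement_only_reward(watch_seconds: float) -> float:
    """The naive objective: longer sessions score higher, full stop."""
    return watch_seconds

def blended_reward(watch_seconds: float,
                   survey_satisfaction: float,  # hypothetical 0..1, "was this worth your time?" surveys
                   educational_score: float,    # hypothetical 0..1 from a content classifier
                   late_night_minutes: float,   # hypothetical proxy for unhealthy overuse
                   w_engage: float = 0.1,       # engagement deliberately down-weighted
                   w_satisfy: float = 50.0,
                   w_educate: float = 30.0,
                   w_overuse: float = 2.0) -> float:
    """Blend raw engagement with proxies for 'good for the viewer'
    and penalize overuse. The relative scaling of the weights is the
    whole debate, compressed into four numbers."""
    return (w_engage * watch_seconds
            + w_satisfy * survey_satisfaction
            + w_educate * educational_score
            - w_overuse * late_night_minutes)

# With these (arbitrary) weights, a short, satisfying educational session
# can outscore a long, unsatisfying late-night one:
short_good = blended_reward(600, survey_satisfaction=0.9,
                            educational_score=0.8, late_night_minutes=0)
long_bad = blended_reward(1800, survey_satisfaction=0.1,
                          educational_score=0.0, late_night_minutes=60)
```

Under the engagement-only objective the ranking flips, which is exactly the misalignment the comment above describes.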
The son of a friend of mine taught himself programming from YouTube videos that YouTube had recommended to him. I wouldn't complain about a result like that.