zlacker

[return to "YouTube’s Algorithm Incentivizes the Wrong Behavior"]
1. strike+jh 2019-06-14 18:23:48
>>furcyd+(OP)
"If YouTube won’t remove the algorithm, it must, at the very least, make significant changes, and have greater human involvement in the recommendation process." — does this person know how many videos and how many users YouTube has? They cannot use anything except an algorithm to recommend videos, and they cannot use anything except an algorithm to detect videos inappropriate for children. YouTube seems to be working on this, and this opinion reads like an ill-thought-out fluff piece meant to enrage readers and sell the author's book.
2. andrew+mi 2019-06-14 18:30:58
>>strike+jh
Maybe they can't make editorial recommendations for the long tail, but they absolutely could do so for the top few thousand videos each week.

Would that yield an improvement? I don't know, but it would have an impact.

3. scj+9o 2019-06-14 19:08:04
>>andrew+mi
I'm kind of wondering if a "Ned Flanders" user-detector is possible.

Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.

Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if just to evaluate the Flanders users rather than the video). But if they stop the video at about the same point, that should be treated as a red flag.

Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.

The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.
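To make the idea above concrete, here's a minimal sketch of the proposed "Flanders rating" as a classifier. The names, thresholds, and three-way outcome (safe / unclear / red flag) are all my own illustrative choices, not anything YouTube actually does; the input is just the fraction of the video each test user had watched when they stopped:

```python
from statistics import pstdev

def flanders_rating(stop_fractions, watch_through=0.95,
                    safe_share=0.8, cluster_std=0.05):
    """Classify a video from where hypothetical 'Flanders' test users stopped it.

    stop_fractions: fraction of the video each user had watched when they
    stopped (1.0 = watched to the end). All thresholds are made-up guesses.
    """
    finished = [f for f in stop_fractions if f >= watch_through]
    # If nearly everyone watched it through, it's probably safe.
    if len(finished) / len(stop_fractions) >= safe_share:
        return "safe"
    stops = [f for f in stop_fractions if f < watch_through]
    # Stops bunched around the same moment suggest one specific offensive scene.
    if pstdev(stops) <= cluster_std:
        return "red_flag"
    # Scattered stops: ambiguous, hand it off to manual review / another metric.
    return "unclear"
```

For example, `flanders_rating([1.0] * 9 + [0.4])` comes back `"safe"`, while five users all bailing around the 42% mark comes back `"red_flag"`.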

4. Nasrud+wx 2019-06-14 20:22:53
>>scj+9o
The question I have is: how can they tell "Flanders" viewers from "bored" or "out of time" ones, short of users flagging it themselves, without a lot of manual review and guesswork?

Reviewing viewers on that level sounds even more intensive than filtering every channel and video.

5. scj+cC 2019-06-14 21:02:19
>>Nasrud+wx
In the system I've proposed, if enough test Flanders users are thrown at the content, their stop times should differ enough to trigger an unclear Flanders rating. That would indicate some other metric should be used.

I don't see this test working in isolation. Given its nature, its value is in rejection rather than acceptance (or "okilly-dokillies," in this case).

To echo what others on this thread have said, there's a lot of content on YouTube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.
