Would that yield an improvement? I don't know, but it would have an impact.
Humans are involved in the process. To suggest otherwise is to be willfully ignorant.
Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.
Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag.
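As a purely hypothetical sketch of how that last check might work, here's a few lines of Python; the bin size, the 30% share threshold, and the "last 5% counts as watched through" rule are all assumptions I'm making up for illustration:

    from collections import Counter

    def flanders_check(stop_times, video_length, bin_secs=10,
                       share_threshold=0.3, min_stops=20):
        # stop_times: seconds into the video where Flanders users hit stop
        # video_length: total length of the video in seconds
        if len(stop_times) < min_stops:
            return "not_enough_data"

        # Stops in the last 5% of the video count as "watched it through".
        early = [t for t in stop_times if t < 0.95 * video_length]
        if len(early) / len(stop_times) < 0.1:
            return "probably_safe"      # almost everyone finished it

        # Bucket early stops into fixed windows and look for a dominant one.
        bins = Counter(int(t // bin_secs) for t in early)
        window, count = bins.most_common(1)[0]
        if count / len(early) >= share_threshold:
            # e.g. 30%+ of early stops land in the same 10-second window
            return ("red_flag", window * bin_secs)

        return "maybe_safe"             # stops look random -> manual review

A real version would obviously need to weight by how "Flanders-like" each user actually is, but the shape of the signal is the same.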
Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.
The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.
BTW, you could do some simple math to figure out how many employees it'd take to have a human watch every video that comes in: 3,600 secs/hour * 20 hours of video uploaded per second = 72,000 seconds of video arriving every second, so 72,000 people watching at any given moment; * 3 to cover 8-hour shifts = 216,000 employees; * $30K/year ≈ $6.5B/year. It's theoretically doable, but you wouldn't get the product for free anymore.
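If you want to sanity-check that arithmetic, it's a few lines of Python; the 20 hours/second upload rate and the $30K salary are the assumptions from the comment above, not official figures:

    # Back-of-envelope check of the numbers above.
    seconds_of_video_per_sec = 20 * 3600     # 20 hours uploaded per second -> 72,000
    shifts_per_day = 24 / 8                  # reviewers work 8-hour shifts
    reviewers = seconds_of_video_per_sec * shifts_per_day    # 216,000
    annual_cost = reviewers * 30_000         # $30K/year per reviewer
    print(f"{reviewers:,.0f} reviewers, ${annual_cost / 1e9:.2f}B/year")
    # -> 216,000 reviewers, $6.48B/year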
$2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.
Of course, the system doesn’t expose these kinds of outputs, because no-one has any interest in designing such a system and taking responsibility for the content.
Just start banning certain creators from showing up in recommendations if their content crosses the line. Not that hard if you are willing to do it.
Should we filter all the Santa-is-fake videos or the Santa-is-real videos?
Do you agree with Flanders?
YouTube could probably outsource it internationally, but that'd just spark a new round of outrage: "Why are global community standards set by an American technology company outsourced to poor workers in the Philippines? Are these the people we want deciding our values?"
While that might be true, 99% of views go to a very small subset of the videos posted. It's completely doable, or at the very least the problem can be greatly mitigated, by putting more humans into the process and not letting the algos recommend videos that haven't been viewed by someone in Youtube's equivalent of "standards and practices". All that being said, I fear the primary reason this is not done is that such actions would reduce the number of hours of viewed videos and ad revenues. In fact, I've read articles supporting this theory.
Google under Pichai is basically like Exxon under Lee Raymond--solely focused on revenue growth and completely blind to any number that doesn't show up on the current and next quarter's income statement.
Reviewing viewers on that level sounds even more intensive than filtering every channel and video.
Whatever they do is going to have to be evaluated in terms of best effort / sincerity.
Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks.
I agree that with the current business model it is not possible for YouTube to sort it manually.
When I was a kid, a long, long time ago, it would have been inconceivable for a TV channel to show that kind of content regularly and stay on the air. If their answer had been that they could not fix it because it costs money, there would have been an outraged response.
If YouTube cannot keep things legal, cannot respect people's rights, and cannot be a good, responsible part of society because it is not cost-effective, then for me the way to go is clear. And that is true for YouTube, Facebook, or any other business, digital or not.
Every algorithm is an editorial decision.
I don't see this test working in isolation. Given its nature, its value is in the rejections rather than the acceptances (or "okilly-dokillies" in this case).
To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.
The big square in front of the congress was split in half: the pro-choice "green" group was on one side and the pro-life "sky-blue" group on the other. Each group had strong opinions, but the mobilization was quite civilized; I don't remember anyone getting hurt. Anyway, there were small kids on both sides wearing the handkerchief of their respective color.
Also, what is your definition of kid: 6? 12? 17?
Just imagine that the Church releases a video on YouTube where Santa visits a lot of children to give them presents, including an unborn child in the eighth month of pregnancy, and gives Santa a "sky-blue" handkerchief in case someone didn't notice the hidden message. Do you think it should be censored for kids?
If we want to have a "free" (as in no subscription and no money paid for the service) video sharing/uploading site, what model would make that work and still have human review? I consider the fact that there may be undesirable videos the cost of having such a site, similar to how the "cost" of having a free Internet is that there's going to be lots of hate online and free access to tutorials for making bombs and whatnot. It's part of the deal and I'm happy with that; YMMV. If you worry about what kids might access, then don't let them access Youtube, but please don't create laws that would make free video sharing sites illegal/impossible to run.
This is true for pretty much any free Internet service that allows user content. If all Internet content production goes back to just "official" creators (because they are the only ones for whom the cost/benefit math makes sense), I think that would be a huge loss/regression compared to what we have gained since the dawn of the Internet.
That’s assuming recommendations need to be personalized. They could recommend at a higher level to groups of people using attributes like age range or region.
I’m not a fan of their personalized recommendations. Its algorithm overfits my views, recommending videos extremely similar to videos I’ve recently watched, which isn’t really aligned with my interests.
If they took a completely different approach (not personalized) it could really impact the UX in a positive way.
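To make that concrete, here's a toy sketch of what cohort-level recommendations could look like (the field names and bucketing are entirely made up for illustration):

    from collections import defaultdict, Counter

    def cohort_recommendations(watch_events, top_n=10):
        # Rank videos by watch count within a coarse (age_range, region)
        # bucket instead of by per-user history.
        by_cohort = defaultdict(Counter)
        for event in watch_events:
            cohort = (event["age_range"], event["region"])
            by_cohort[cohort][event["video_id"]] += 1
        return {cohort: [vid for vid, _ in counts.most_common(top_n)]
                for cohort, counts in by_cohort.items()}

The trade-off is that everyone in a bucket sees the same list, which also makes it far easier for a human to review that list before it ships.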
In this case, I'd suggest the upper bound doesn't matter, as the criteria for filtering should be "a semi-unattended 5 year old could view it without concern."
All your examples are of topics where it's probably best for parents to initiate their child's education on the topic rather than Youtube randomly putting it in some kid's feed.
Also, detecting videos that are inappropriate for children is a lot harder than determining which content creators can be trusted to post appropriate videos (and to tag them correctly). That can be learned from the user's history, how many times their stuff has been flagged, upvotes from users that are themselves deemed credible, and so on. The more layers of indirection, the better, a la PageRank.
So even without analyzing the video itself, it would have a much smaller set of videos it can recommend from, but still potentially millions of videos. You still need some level of staff to train the algorithm, but you don't have to have paid staff look at every single video to have a good set of videos it can recommend. The staff might spend most of their time looking at videos that are anomalous, such as they were posted by a user the algorithm trusted but then flagged by a user that the algorithm considered credible. Then they would tag that video with some rich information that will help the algorithm in the future, beyond just removing that video or reducing the trust of the poster or the credibility of the flagger.
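For what it's worth, here's a toy version of that trust-propagation idea; the damping factor, the upvote/flag weighting, and the normalization are all my own invention, just to show the shape of it:

    def propagate_trust(endorsements, flags, n_rounds=20, damping=0.85):
        # endorsements: list of (user, creator) upvote pairs
        # flags:        list of (user, creator) flag pairs
        users = {u for u, _ in endorsements} | {u for u, _ in flags}
        creators = {c for _, c in endorsements} | {c for _, c in flags}
        trust = {c: 1.0 for c in creators}
        cred = {u: 1.0 for u in users}

        for _ in range(n_rounds):
            # A creator's trust depends on who endorses vs. flags them.
            new_trust = {}
            for c in creators:
                score = (sum(cred[u] for u, c2 in endorsements if c2 == c)
                         - sum(cred[u] for u, c2 in flags if c2 == c))
                new_trust[c] = (1 - damping) + damping * max(score, 0.0)
            # A user's credibility depends on the trust of creators they endorse.
            new_cred = {}
            for u in users:
                score = sum(new_trust[c] for u2, c in endorsements if u2 == u)
                new_cred[u] = (1 - damping) + damping * score
            # Normalize so the scores stay comparable between rounds.
            t_max = max(new_trust.values())
            c_max = max(new_cred.values())
            trust = {c: v / t_max for c, v in new_trust.items()}
            cred = {u: v / c_max for u, v in new_cred.items()}

        return trust, cred

The point isn't this particular formula; it's that flags from low-credibility users barely move a creator's score, while a flag from a high-credibility user on an otherwise-trusted creator is exactly the kind of anomaly you'd route to the paid staff.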
Kids < 4 really shouldn't have access to YouTube though.
Is that an option?
I'm not sure heavy automation is needed here; people jump from content creator to content creator by word of mouth. In contrast, most algorithmic suggestions seem highly biased towards what is popular in general. I click on one wrong video in a news article, and for the next two days my recommendations are pop music, Jimmy Kimmel, Ben Shapiro, and animal videos.
There are probably a ton of situations like that in YouTube, where certain kinds of mistakes are hardly noticed (it shows you a video you weren't remotely interested in), but others can be really bad and need special training to avoid (such as where it shows violent or sexual content to someone who likes nursery rhymes and Peppa Pig).
https://www.newstatesman.com/science-tech/social-media/2019/...
>Just start banning certain creators from showing up in recommendations if their content crosses the line.
also won't help, because it's not the creators that have content crossing the line, it's the commenters.
https://www.theverge.com/2019/2/11/18220032/youtube-copystri...
https://redwoodbark.org/46876/culture/redwood-students-view-...
2019:
In response, the principal of the high school sent a note to students and parents Thursday night regarding the "hate-based video and text posts attributed to one of our students":
https://www.kron4.com/news/bay-area/bay-area-girl-says-she-l...
One is an investment/one time purchase and the other is a long-term annual liability, slated to grow.
See the current Pinterest scandal and YouTube's banning of any video that mentions it.
All three things I just mentioned are fairly niche, comparatively, yet it knows that I've been watching a lot of them lately and is giving me more of it.