This always sounds good, but decentralized alternatives are nearly impossible to commoditize or make appealing to the general public. Outside of evangelism and word-of-mouth, how are people going to escape the YouTube advertising budget and instead choose, en masse, the product that is better for their privacy?
There's just so much money and inertia to fight.
Someone watching lots of DIY home repair videos will start seeing more. In that case it seems like it's incentivizing good behavior. Likewise, someone watching lots of soft porn on YouTube will be recommended more soft porn.
The algorithm isn't responsible for helping you make good life choices. The algorithm is responsible for recommending videos that you would like, and it seems like it does a good job of that, generally.
Unfortunately, some people like bad things and that's an age old problem that is hard to fix.
That said, it would be nice if users could CTOA (choose their own algorithm) instead of letting Google be the sole gatekeeper.
This article suggests that machine learning and collaborative filtering are incapable of producing healthy recommendations. I beg to differ: the New York Times may not like the result, but these systems work for the vast majority of users on any service with too much content to manually curate.
Also, as a parent of 4 children myself, the idea of letting my kids loose on the internet without any supervision is ridiculous. When did it become YouTube's responsibility to parent the children in its audience? Should we also ban HBO, Amazon, and Netflix from providing recommendations because it might be a child in front of the screen?
This is just another pointed attempt to censor free speech via the abuse of technology companies, the idea being that the platform will become more restrictive if it is constantly badgered about it.
YouTube is a _disastrously_ unhealthy recommender system, and they've let it go completely out of control.
Would that yield an improvement? I don't know, but it would have an impact.
My entire sidebar is now just a random assortment of irrelevant interests. For instance, I wanted to learn to play denser piano chords; I learned them ages ago, but I still get like 20 videos explaining how to add extensions to a 7th chord, even when I'm watching a video about an F-35 fighter pilot.
YouTube is a paperclip maximizer (where paperclips correspond to eyeball-hours spent watching YouTube) and at some point optimizing paperclips becomes orthogonal to human existence, and then anticorrelated with it.
I think it's a perfectly fair thing to say that maybe the negatives outweigh the positives at the present.
(This argument doesn't apply solely to YouTube, of course)
It's better to compare Spotify's recommendations to Netflix's, which also deal mostly with professional content. Those two systems have comparable performance, in my opinion.
The parents are the best placed to know at an individual level. But invoking responsibility is a cop-out if you are just dropping it on someone.
Granted, I agree it is a hard problem. Not even sure it is solvable. :(
https://www.youtube.com/watch?v=fHsa9DqmId8 for his theory.
Yes, but you _must_ understand that most (no, ALL) of the millennial generation grew up with public content over the airwaves that was curated and had to pass certain guidelines. So many parents think that the YouTube Kids app is the same thing. It's not!
If YouTube wants to be the next television, they're going to have to assume the responsibilities and expectations surrounding the appliances they intend to replace. Pulling a Pontius Pilate and tossing the issue to another algorithm to fail at figuring out is not going to fix the problem.
Thankfully, there's much more out there than YouTube when it comes to children's entertainment, actually curated by human beings with eyeballs and brains, and not algorithms. The problem is that parents don't know these apps even exist, because YouTube has that much of a foothold as "place to see things that shut my kid up, so I can see straight."
But I guess they could charge money to get to the head of the line?
Can you explain with more details?
I use YouTube as a crowdsourced "MOOC"[0] and the algorithm usually recommends excellent follow-up videos for most topics.
(On the other hand, their attempt at matching "relevant" advertising to the video is often terrible. (E.g. Sephora makeup videos for women shown to the male-dominated audience of audiophile gear.) Leaving aside the weird ads, the algorithm works very well for educational vids that interest me.)
[0] https://en.wikipedia.org/wiki/Massive_open_online_course
Society has had laws in place to prevent children from viewing things they should not be viewing (inappropriate movies, magazines, etc.).
It doesn't gradually introduce broader material, it gradually introduces more "engaging" material.
> Human intuition can recognize motives in people’s viewing decisions, and can step in to discourage that — which most likely would have happened if videos were being recommended by humans, and not a computer. But to YouTube’s nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist’s appetite is a job well done.
So this person is advocating that a human (i.e., another human besides oneself, an employee at YouTube) have access to the click stream of individual users? This proposal, in 2019??? Of course this would have to be compulsory to be effective. Why would I want a megacorp to be making moral decisions for me? I'm OK with them making amoral algorithmic decisions.
The author is generalizing the problem of YT Kids, which should be human-curated, to all of YouTube.
OTOH, yeah feeding our worst impulses is kind of a problem. But, tweaking the algorithm isn't the solution. The machine itself is designed to thrive on attention.
They need to stop showing people the upvote and view COUNTS. Behind the scenes they can still use it to make recommendations.
Those numbers are pseudo signals of quality to people who encounter content they have never encountered before.
Even when they have doubts that they are watching something unhealthy, the mind goes "well, if the rest of the world thinks this dumbass is important, I better pay attention..."
If a dumbass hurting people on video gets 10 million views, other dumbasses worldwide automatically get triggered looking at the count: "hey, I can do this, maybe I should run for President..."
Remove the counts and you remove the pseudo signal of quality.
I don't think the recommendation system is broken at all; in fact it works astonishingly well for the vast majority of people. The fact that there are a few bad actors is also true of the banking industry (Wells Fargo, for instance), to use your own bad comparison.
But in the case of YouTube, there is absolutely no way they can curate it while keeping it as open as it is.
I think there’s a ton of ideas to be tried.
Edit: the Instagram motivation is admittedly a bit different, but a good path regardless
Yes, I was aware of Elsagate.[0] I don't play games, so I didn't realize every gaming video ends up with unwanted far-right and Ben Shapiro videos.
I guess I should have clarified my question. I thought gp's "unhealthy" meant YouTube's algorithm was bad for somebody like me who views mainstream, non-controversial videos. (An analogy might be gp (rspeer) warning me that asbestos and lead paint are actually carcinogenic but the public doesn't know it.)
I want to see the counts. I feel it is far more transparent to see the counts than for things to just be surfaced or not opaquely. Youtube is not a discussion site and it does not work as one. How popular things are is a part of the context of pop culture, and most youtube content is pop culture.
So?
If YouTube exits the space and allows oxygen back into the video sharing market, we might actually get some different video sharing services that do different things (a la NicoNicoDouga).
It must? No, it doesn't have to do a damn thing. It's a product from a publicly traded company, therefore it "must" return value for stockholders. That means more behavior that increases ad revenue. The author is out of touch with reality. Stop feeding your kids YouTube if you don't want them exposed to YouTube. It's a private service (YouTube), not a public park.
[1] https://www.quora.com/How-many-videos-are-uploaded-on-YouTub...
Humans are involved in the process. To suggest otherwise is to be willfully ignorant.
I do not see any Nazi far-right videos in 1.8% of my recommendations ever.
They don't. That's confirmation bias at work.
Search for users who stop videos at "offensive" moments, then evaluate their habits. It wouldn't be foolproof, but the "Flanders rating" of a video might be a starting metric.
Before putting something on YouTube for kids, run it by Flanders users first. If Flanders users en masse watch it the whole way through, it's probably safe. If they stop it at random points, it may be safe (this is where manual filtering might be desirable, even if it is just to evaluate Flanders Users rather than the video). But if they stop videos at about the same time, that should be treated as a red flag.
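A toy sketch of that red-flag check (the data shape, thresholds, and function name are all made up for illustration; a real system would need many more signals):

    import statistics

    def flanders_red_flag(stop_times_secs, min_stops=30, cluster_window_secs=15.0):
        """Flag a video if Flanders users who bailed out did so at roughly
        the same moment, i.e. their stop times cluster tightly."""
        if len(stop_times_secs) < min_stops:
            return None  # not enough signal to judge either way
        # Random stops spread out; stops caused by one offensive moment cluster.
        return statistics.stdev(stop_times_secs) < cluster_window_secs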
Of course, people have contextual viewing habits that aren't captured (I hope). Most relevantly, they probably watch different things depending on who is in the room. This is likely the highest vector for false positives.
The big negative is showing people content they obviously don't want for the sake of collecting imperfect data.
Engagement on the internet is also being driven by neural networks that are learning to adapt to users' brain chemistry to statistically modify behavior for maximum engagement/profit. Perhaps it is time to realize that these services are analogous to a random stranger offering your kid candy for their own twisted goals, which are unlikely to be compatible with a child's well-being. If you consider a service like YouTube an untrusted source of interaction, perhaps you'll be as likely to block or monitor it the same as random chat rooms.
A very large percentage of those people are former FAANG employees, employees of companies trying to get acquired by FAANG, or employees of companies doing exactly the same things that FAANG are getting criticized for. (Some are all three!)
I doubt any of them are getting paid to AstroTurf, but they know what side their bread is getting buttered on.
But I will unapologetically and forthrightly say that, yes, if we're going to assert that YouTube has certain responsibilities for the nature of the videos that it hosts, and it turns out that the nature of those responsibilities is such that YouTube can't possibly meet them, then, yes, YouTube as we know it should be essentially shut down, at least going forward.
I am NOT going to say we should deliberately craft the responsibilities in such a way that YouTube is deliberately shut down. However, if it turns out that they are incapable of applying even the bare minimum effort that we as a society deem it necessary for them to apply, then, yes, it is absolutely a consequence that YouTube as we know it today may have to be so radically altered as to be a different site entirely.
In the general case, when the law requires certain obligations of you as a business, and you as a business cannot meet them, that does not mean that suddenly those obligations no longer apply to you. It means that your business is not legally viable, and needs to change until it is. It may be the case that there is no solution that is both legally viable and profitable, in which case your business will cease to exist. Just as there is, for instance, no solution to being a business built around selling torrent files containing unlicensed commercial content to people. You can't defend yourself by saying you can't afford to get the licenses; your suitable legal remedy was to never have started this business in the first place. There are some concerns around grandfathering here to deal with, certainly, but they can still be addressed going forward.
There is no guarantee that there is a solution where a company exerting whatever minimal control they are obligated to assert by society is capable of growing to the size of YouTube. If that is the case, so be it. The solution is not to just let them go because they happened to grow fast first.
(My solution to freedom of expression is an explosion of video sites, where each of them has ways of holding the videos to the societally-mandated minimum standard, and no one site can do it all because they simply can't muster the resources to be The One Site, because as they grow larger they encounter anti-scaling effects. Given how increasingly censorious Silicon Valley is becoming, as we are now into censoring the discussions about censoring discussions like the recent removal of Project Veritas from Twitter for its discussion of Pinterest censoring pro-life films, I expect this to increase the range of expression, not diminish it.)
I can't think of any traditional medium that tells you the popularity of something before you consume it. Movie theaters, TV stations, radio stations, etc. have no concept of "view counts" telling you whether or not to consume something.
We believe that man is essentially good.
It’s only his behavior that lets him down.
This is the fault of society.
Society is the fault of conditions.
Conditions are the fault of society.
If you ask me, "YouTube's algorithm" is simply exposing the way humanity is. And trying to get an algorithm to "shepherd" humanity to be better is simply Orwellian.

I've definitely seen this with comics. I watched a few videos criticizing Avengers: Infinity War, and now I see mostly Ben Shapiro recs. It makes no sense. I never have (and never plan to) seek out political content on YouTube.
He gets an average of 10-15 views per day.
The value this guy adds to my day is literally measurable in $$$.
If I could find more people like him, that would be great, but instead these are my recommendations:
- 5 ways to do X
- Bill Gates breaks down blah blah blah
- Something about Tesla
- One video by a guy I discovered outside of YouTube who is similar to the guy I watch every day. I don't watch this one that much though.
YouTube's algorithm is not designed for discovery. It's designed for engagement. So I keep separate accounts:

1. Account for actually useful stuff, where YT's recommendations are useless

2. Account where YT's recommendations are OK: white-noise-like things, Howard Stern interviews, etc.

I wish you could configure the algorithm for discovery somehow.

Second, allow users to blacklist, or whitelist, different kinds of content. If someone is struggling with sexual attraction to minors, let them blacklist content with minors in it. If I don't want to see the latest anti- (or pro-) <insert political figure here> videos, I should be able to filter them out. I have no interest in Minecraft, so why should I have to keep scrolling past Minecraft videos just because I watch a lot of game-related videos?
That said, all the calls for regulation or censorship concern me. I haven't seen the video, but Steven Crowder saying mean things isn't exactly something that should be censored. Any more than all the videos calling President Trump names. What I'm seeing signs of is a society that silences any speech that doesn't fit in a specific, politically correct, box. And that box is being defined by advertising companies who don't want to be associated with topics that their potential customers find uncomfortable. That's not a direction any of us should support...
YouTube is optimizing for the underlying psychological mechanisms that put people in that mood, because that makes them suggestible; and because none of this stuff has substance or meaning, they can graze on it endlessly, just as junk food producers intend.
On the internet it is much more difficult, of course, and we can't realistically expect some shady offshore site to implement age checks, let alone recommendation algorithms. But Google is a public, respected company from a first-world country that claims to be promoting social good (which, of course, is marketing BS, and even if it weren't, I would not want their idea of social good, but still). You'd think that they would invest some effort into not showing inappropriate content to kids, at least. But no, they throw up their hands and go on ideological witch hunts instead.
BTW, you could do some simple math to figure out how many employees it'd take to have a human watch every video that comes in: 3,600 secs/hour * 20 hours of video uploaded/sec = 72,000 seconds of video arriving per second, * 3 to cover 8-hour shifts = 216,000 employees, * $30K/year ≈ $6.5B/year. It's theoretically doable, but you wouldn't get the product for free anymore.
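Spelled out in Python (same assumed inputs: 20 hours of video uploaded per second, three 8-hour shifts, $30K/year per reviewer):

    upload_hours_per_sec = 20                          # assumed upload rate
    video_secs_per_sec = upload_hours_per_sec * 3600   # 72,000 seconds of video per wall-clock second
    reviewers = video_secs_per_sec * 3                 # three 8-hour shifts -> 216,000 people
    annual_cost = reviewers * 30_000                   # $6,480,000,000 per year
    print(f"{reviewers:,} reviewers, ${annual_cost:,}/year")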
I interpret 4 upvotes and 1 downvote much differently than 4000 upvotes and 1000 downvotes.
Well, information IS available beforehand, in Nielsen ratings and films' grossing numbers, but you're essentially right.
That's the problem: opaqueness leaves us vulnerable to being misled. Some PR company calls it "the hottest ticket of the season," and we have no way of corroborating this claim.
https://news.ycombinator.com/newsguidelines.html
There's a vast history about this for anyone who wants more explanation: https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Data: https://ourworldindata.org/uploads/2019/05/Causes-of-death-i...
This isn't a knock against the NYTimes so much as against humanity: we're all fascinated by the lurid and sensational (note that the Google searches are similarly off), and this permeates all levels of life.
That only works for rational people.
View counts ~= box office take ~= TV ratings ~= Billboard.
Every type of media you list has gatekeepers, kingmakers, counters, and other things influencing your decision to consume or not.
It could be the type of games involved, since I usually watch strategy, 4x, city-building, and military sims. I usually get history-channel documentaries or "here's how urban planning works in the real world" videos recommended, which suits me fine. Somebody whose gaming preferences involve killing Nazis in a WW2-era FPS might be more likely to get videos that have neo-Nazis suggesting we kill people.
Quite un-so! HN is targeted at one group only: the intellectually curious. So curiosity—keeping HN interesting—is the only thing we care about and optimize for.
As for VC-funded startups, since only about 10% of HN users are located in Silicon Valley, the number working for such startups is certainly far less than that. 50% of HN users are outside the US. This community is orders of magnitude more diverse than your image of it here.
People work for many different employers. Are their views impacted by their work experience? Of course they are—for every user here. There's nothing sinister about that. If someone posts a view that's wrong, others will correct it. As long as they do so thoughtfully, that's great. The cycle of life on HN.
And content creators play a part in this: next time you hear about some pop drama, do a YouTube search and admire how many videos are a single person just reblabbing the same story in front of a mic, cam, or video games. You'll find hundreds. And so many things on YouTube are like this...
$2B is still nothing to sneeze at, but it's less than Microsoft paid for Minecraft.
Of course, the system doesn’t expose these kinds of outputs, because no-one has any interest in designing such a system and taking responsibility for the content.
Just start banning certain creators from showing up in recommendations if their content crosses the line. Not that hard if you are willing to do it.
The article cites actual instances and recurring problems showing that machine learning and collaborative filtering are incapable of producing healthy recommendations: even when YouTube tried to produce child-friendly content, they failed. You can't just say "it's fine" after the article shows it not being fine.
Come to think of it, this is basically the complaint against AdWords and the gradual takeover of the search result page by paid results.
Should we filter all the Santa-is-fake videos or the Santa-is-real videos?
Do you agree with Flanders?
Only with respect to people you know talking about it. Not just arbitrary metrics. Rating systems are part of the context of putting valuations on ads, not part of culture. Whatever impact they do have is based on advertisers trying to reel you in by applying the bandwagon fallacy and stoking your sense of FOMO. It's not something edifying.
Watching hours of YouTube - obesity of the mind. Kind of.
Subject to the laws of the jurisdiction in which it operates, of course. We could - if we so wanted - pass laws to regulate this behavior. That is perhaps the best option, in my own opinion.
> It's a product from a publicly traded company, therefore it "must" return value for stockholders.
The dogma that it "must" return value for shareholders is not an absolute rule[1]; rather it's a set of market expectations and some decisions from Delaware (which have an outsize impact on business law) that encourage it. But it's not required. In fact, many states allow a type of corporation that specifically and directly allows directors to pursue non-shareholder-value goals - the benefit corporation[2].
> The author is out of touch with reality.
Please re-read the HN guidelines[3].
> Stop feeding your kids youtube if you don't want them exposed to youtube. It's a private service(youtube), not a public park.
This is the doctrine of "caveat emptor," essentially - that a consumer is ultimately responsible for all behavior. However, a wealth of regulation exists because that's unworkable in practice. The FDA and the EPA come to mind, but we also regulate concepts like "false advertising." Your stance here ignores the realities of life in service of ideological purism.
[1] http://web.archive.org/web/20190327123200/https://www.washin...
Of course, it is incumbent on us individually to behave responsibly. But there is space for public policy and regulation, even of YouTube.
Well, YouTube (or any advertising platform) also wants people clicking on ads and actually buying things, not just graze. AFAIK they already demonetize content that is not advertiser friendly, and thus de-prioritize it. Busy professionals with limited free time are your best bet for people with a lot of disposable income. If anything YouTube optimizes for content that is safe-for-work, and will quickly lead to you opening your wallet. But yes, I think this is a large scale multi-variate problem, and individual simple metrics don't cut it.
> It may be the case that there is no solution to being legally viable and being profitable, in which case, your business will cease to exist.
Or your business will exist illegally.
There's this interesting interplay between law and economics, where law is generally taken as a prerequisite for frictionless commerce, and yet at the same time if activities that large groups of people wish to partake in are made illegal, the market just routes around them and black markets spring up to provide them. Prohibition. The War on Drugs. Filesharing. Gambling. Employing illegal immigrants. Usury. Short-term rentals. Taxi medallions. Large swaths of the economy under communism.
There are a couple other interesting phenomena related to this: the very illegality of the activity tends to create large profits around it (because it creates barriers to entry, such that the market often ends up monopolized by a small cartel), and the existence of widespread black markets erodes respect for rule of law itself. When people see people around them getting very rich or otherwise deriving benefit from flouting the law, why should they follow it?
Switching to editorializing mode, I find this gradual erosion of respect for law quite troubling, and I also think that the solution to it needs to be two-fold: stop trying to outlaw behaviors that are offensive to some but beloved by others, and start enforcing laws that, if neglected, really will result in the destruction of the system.
YouTube could probably outsource it internationally, but that'd just spark a new round of outrage: "Why are global community standards set by an American technology company outsourced to poor workers in the Philippines? Are these the people we want deciding our values?"
I'm not sure I get the difference between suggesting content and telling people what content to watch. Were you trying to make a different point?
That aside, it seems your argument is that YouTube being neutral in recommending videos shelters them from blame, while the article is basically about why being neutral is harmful.
I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity. Just as we don't allow generic tools to kill and save people in the same proportion, we want a clear net positive.
And if the algorithm is producing negative side effects, then, of course, it should be looked at and changed.
I'm no expert myself, but to my understanding: any algorithm is limited by its data set.
Based on its data set, an algorithm comes to conclusions. But one can then, of course, ask: what's the basis for these conclusions?
I recall reading that a certain AI had been fooled into thinking a picture of a banana was showing a toaster or a helicopter, after a few parts of the image were changed to contain tiny bits of those items.
It turned out that the AI used the apparent texture on places in the image to determine what was on the image, rather than doing a shape comparison.
Which sounds like a time-saving measure. Though it may very well have been the method that most consistently produced correct results, for the given dataset.
Frankly, the attitude of "we don't know how it works and we don't care" cannot possibly end well.
Neither can the attitude of "oh well, make a better dataset then".
I get that we're all excited about the amazing abilities we're seeing here, but that doesn't mean we shouldn't look where we're going.
I recall a story of an AI researcher who didn't want to define anything because he was afraid of introducing bias. Upon hearing this, his colleague covered up his eyes. When asked why he did this, he replied: "The world no longer exists". And the other understood.
Because of course the world still exists. And in just the same way, it's impossible to get rid of bias.
Some human intervention is needed. Just like constant checks and comparison against human results.
While that might be true, 99% of the views go to a very small subset of the videos posted. It's completely doable, or at the very least the problem can be greatly mitigated, by putting more humans into the process and not letting the algos recommend videos that haven't been viewed by someone in YouTube's equivalent of "standards and practices". All that being said, I fear the primary reason this is not done is that such actions would reduce the number of hours of videos viewed and ad revenue. In fact, I've read articles supporting this theory.
Google under Pichai is basically like Exxon under Lee Raymond--solely focused on revenue growth and completely blind to any number that doesn't show up on the current and next quarter's income statement.
Reviewing viewers on that level sounds even more intensive than filtering every channel and video.
Whatever they do is going to have to be evaluated in terms of best effort / sincerity.
Semi-related: The fun of Youtube is when the recommendation algo gets it right and shows you something great you wouldn't have searched for. The value is that it can detect elements that would be near impossible for a human to specify. But that means it has to take risks.
>same genre as whatever I just saw, but with a clickbaitier title, flashier thumbnail, and overall poorer informational quality
We already have them, yet FB, IG, Twitter, YT are the social media behemoths.
Are you making a plea for the average internet person to care about the values of the platforms they use over the platform content? You are likely preaching to the choir here on HN, but I would guess that the audience here is only 1% of 1% of the audience you need to message.
Corps make good use of psychological experiments to optimize their utility function. "Evil is efficient." The problem is that companies optimize for money without taking into account any other factor in any significant way.
> In 1970, Nobel Prize–winning economist Milton Friedman published an essay in The New York Times Magazine titled “The Social Responsibility of Business Is to Increase Its Profits.” [1]
Arguably this quote incentivized the destruction of "good corporate citizenship" (although I admit it's possible that concept never existed in a broad sense).
[1] https://www.newsweek.com/2017/04/14/harvard-business-school-...
This is a great point that I was going to phrase slightly differently: if YouTube is too large to be able to prevent harm, YouTube needs to be regulated. YouTube get the benefit of being so large, so they should also get the cost.
If nobody gives a fuck enough to affect the business, you can give the complete SAW series to 3-year-olds and all the offended can do is yelp indignantly.
I walk up to you on the street and suggest you give me a dollar.
vs
I walk up to you on the street and take a dollar from you by force.
YouTube is a platform; in order to remain a platform it MUST remain neutral. You cannot have an open forum with bias. There are certain mutually agreed-upon rules (no nudity, extreme violence, etc.), and those limitations are more than enough to handle the vast majority of "negative" content.
I wholeheartedly disagree that we need a bias towards positivity. Who determines what that definition is? Something you see as negative, I might happen to enjoy. If YouTube begins to censor itself in that way, it is no longer a platform and is now responsible for ALL of its content.
I agree that with the current business model it is not possible for YouTube to sort it manually.
When I was a kid, a long, long time ago, it would have been inconceivable for a TV channel to show that kind of content regularly and stay on the air. If their answer had been that they couldn't fix it because it costs money, there would have been an outraged response.
If YouTube cannot keep things legal, cannot respect people's rights, cannot be a good, responsible part of society because it is not cost-effective, then for me the way to go is clear. And that is true for YouTube, Facebook, or any other business, digital or not.
The vague "do something!" regulation push has all the marks of a moral panic, and all participants should slap themselves hard enough to leave a mark and repeat "It is never too important to be rational."
Every algorithm is an editorial decision.
Most of these discussion posts seem to miss the point that 'engagement' or 'upvotes' does NOT equal value.
Also missing is the concept that a company with a massive platform has any social responsibility to at least not poison the well of society.
And claiming "it's the parent's responsibility" may have some truth value, but it does not and should not be an excuse to absolve the platform owner of responsibility.
The key to longer-term success of the platforms is to abandon the low-hanging fruit of "engagement" as a measure of value and develop more substantive metrics that actually relate to value delivered, both to the individual watcher and to society as a whole.
As one audience member, I find their recommendations to be basically crap, nearly never leading me to something more valuable than what I just watched (sure, they'll occasionally put up a recommendation with enough entertainment value to watch, but much of the time I want my 5 minutes back). To find any real value, I need to search again. That already tells us that their "engagement"-based algos are insufficient to serve users' needs.
The problem of the dataset is that you're not in control of who populates the dataset and what their intentions are. There's no understanding of an adversarial model and threat handling.
Also, when dang contradicts me about HN, that means I'm wrong, lol.
Also they are the default view, I’d argue suggestions are a lot more than just “suggestions”. It would be akin to a restaurant “suggesting” their menu, and you’d need to interrogate the waiter to explore what else you could be served. For most people the menu is effectively the representation of the food of the restaurant.
For the neutrality: if you recognize there are agreed-upon rules, as you point out, the next question becomes who agreed on these rules, and who made them?
Who agreed nudity should be banned? Which country? What nudity? What about art? And educational content? And documentaries? At what point does it become nudity? The more we dig into it, the fuzzier it becomes; everyone's boundary is different, and all the rules are like that.
Any rule in place is positive to one group and negative to another; for a rule to stay in place it needs to have more supporters than detractors, or, put another way, more positive impact than negative.
The current set of rules are the ones that were deemed worthwhile. I think it's healthy to challenge them, or to push for other rules that could garner enough agreement to stay in place.
I don't see this test working in isolation. Given its nature, its value is in obscure rejections rather than acceptances (or "okilly-dokillies" in this case).
To echo what others on this thread have said, there's a lot of content on Youtube. This means that even if they are cautious about which content passes through the filter for kids, there's still a lot available.
Bear in mind that YouTube does not operate only in the US with unhinged free speech laws. Many countries have stricter laws and YouTube definitely needs to comply with them.
Other than that, adpocalypse happened because of bad videos being surfaced by the algorithm so another responsibility is to the creators. (And shareholders)
There is nothing to be gained by having crap in your backyard.
You can very easily turn auto-play off. There is plenty of opportunity to switch videos. It would be different if youtube forced you to watch the next video in order to use the site.
>For the neutrality, if you recognize there are agreed upon rules, as you point out, the next question becomes who agreed on these rules, and who made them ?
Youtube made them. Those are pre-conditions for uploading videos. They don't have to have any reason why they made them, those are conditions that must be met in order to upload a video. So by uploading a video you are agreeing to them.
>Any rule in place is positive to a group and negative to another
I don't agree with this generality. However, this discussion is not about the legitimacy of the rules for using YouTube; it is about whether or not YouTube should censor videos (that meet the basic rules of use). My opinion is no; yours, as you stated above, was:
>I personally think anything dealing with human content can't be left neutral, as we need a bias towards positivity.
I agree with you that YouTube should routinely challenge their own rule sets. That is not the same as censoring their content, or in this case modifying their recommendation algorithm.
I figure that they probably don't give a damn about users like me, the algorithm is designed to steer traffic to a pyramid of monetized content and I don't seem to have any options to fight the trend but to disengage.
There are some channels/users that I started following a long time ago but after I watch one of their videos I land back on the crapflood.
The big square in front of the congress was split at the half, the pro-choice "green" group was on one side and the pro-life "sky-blue" group was in the other side. Each group had a strong opinion, but the mobilization was quite civilized, I don't remember that anyone get hurt. Anyway, there were small kids on both sides with the handkerchief of the respective color.
Also, what is your definition of kid: 6? 12? 17?
Just imagine that the Church released a video on YouTube where Santa visits a lot of children to give them presents, including an unborn child in the eighth month of pregnancy, and added a "sky-blue" handkerchief to Santa in case someone didn't notice the hidden message. Do you think it should be censored for kids?
Also, I'd say people turn rational or irrational on their own choices.
Is it 50% of registered users or 50% of active commentators? And I think the image HN has been trying to cultivate is vastly different from the image of what HN actually is, at least from some of the other sites and circles I post on. The rather negative reaction to the Katie Bouman news as well as the summer programming for women incident show that somewhere down the line there is a serious breakdown in what culture HN is trying to create.
If we want a "free" (as in no subscription and no money required to be paid for the service) video sharing/uploading site, what model would make it work and still have human reviewing? I consider the fact that there may be undesirable videos the cost of having such a site, similar to how the "cost" of having a free Internet is that there's going to be lots of hate online and free access to tutorials for making bombs and whatnot. It's part of the deal and I'm happy with that; YMMV. If you worry about what kids might access, then don't let them access YouTube, but please don't create laws that would make free video sharing sites illegal/impossible to run.
This is true for pretty much any free Internet service that allows user content. If all Internet content production goes back to just "official" creators (because they are the only ones where the cost/benefit math would make sense), I think that would be a huge loss/regression compared to what we have gained since the age of the Internet.
That’s assuming recommendations need to be personalized. They could recommend at a higher level to groups of people using attributes like age range or region.
I'm not a fan of their personalized recommendations. Its algorithm overfits to my views, recommending videos extremely similar to ones I've recently watched, which isn't really aligned with my interests.
If they took a completely different approach (not personalized) it could really impact the UX in a positive way.
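As a rough sketch of what that could look like (cohort keys and field names are hypothetical), recommendations could be computed per group instead of per user:

    from collections import Counter, defaultdict

    def cohort_top_n(view_events, n=10):
        """view_events: iterable of (cohort, video_id) pairs, where a cohort
        might be an (age_range, region) tuple. Returns each cohort's n
        most-watched videos: the same list for everyone in the group,
        with no individual profile involved."""
        counts = defaultdict(Counter)
        for cohort, video_id in view_events:
            counts[cohort][video_id] += 1
        return {cohort: [vid for vid, _ in c.most_common(n)]
                for cohort, c in counts.items()}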
In this case, I'd suggest the upper bound doesn't matter, as the criteria for filtering should be "a semi-unattended 5 year old could view it without concern."
All your examples are of topics where it's probably best for parents to initiate their child's education on the topic rather than Youtube randomly putting it in some kid's feed.
But then I try to not let my mom on YouTube either. Or myself, for that matter.
I'm glad you have a sense of the culture HN is trying to cultivate. Even getting just that across is astonishingly hard. By far the likeliest default is that nobody has any sense of it.
Does it fall short? Sure. The question is how much is possible on the internet—specifically on a large, public, optionally-anonymous internet forum, the bucket HN falls in. We're happy to do our utmost, but only along the axis of the possible. We can't come close to delivering everything people imagine and demand. Your comment doesn't allow for the constraints we're up against, how few degrees of freedom we have, or how close we come to getting crushed between the gears.
HN is a large enough population sample (5M readers a month) that it is divided wherever society is divided. That means you're inevitably going to see posts which represent the worst of society relative to your own views. Societies, actually, because whether you doubt it or not, HN is a highly international site. People post here relative to their respective local norms, but mostly in mutual ignorance of that fact. This accentuates how bad the worst posts seem.
So yes you see awful posts, but it doesn't follow that they're representative either of the community or the culture. Jumping to that conclusion is an error nearly everyone makes, because painful impressions (such as nasty comments) are vastly more memorable than pleasurable ones (such as sensible comments). This has been studied and there's a name for it: https://en.wikipedia.org/wiki/Hostile_media_effect. The phenomenon was established about news coverage (https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/978019...), but internet comments are no different.
What are these "other sites and circles" you mention that do so much better culturally than HN does? Are they public forums where anybody can create an account? Are they optionally anonymous? How large are they? In other words, do they face the same problems that we do? If so, and they're better than we are at solving them, please point us there so we can learn from them. Nothing would make me happier. Usually, though, when people make this claim, they're talking about much smaller communities and/or ones that are not fully open.
Also, detecting videos that are inappropriate for children is a lot harder than determining which content creators can be trusted to post videos that are appropriate (and to tag them correctly). That can be learned from the user's history, how many times their stuff has been flagged, whether they get upvotes from users that are themselves deemed credible, and so on. The more layers of indirection, the better, a la PageRank.
So even without analyzing the video itself, it would have a much smaller set of videos it can recommend from, but still potentially millions of videos. You still need some level of staff to train the algorithm, but you don't have to have paid staff look at every single video to have a good set of videos it can recommend. The staff might spend most of their time looking at videos that are anomalous, such as they were posted by a user the algorithm trusted but then flagged by a user that the algorithm considered credible. Then they would tag that video with some rich information that will help the algorithm in the future, beyond just removing that video or reducing the trust of the poster or the credibility of the flagger.
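A minimal sketch of that layered-trust gating (the weights, thresholds, and field names are all invented for illustration):

    def uploader_trust(history, flag_events, flagger_credibility):
        """Score an uploader from their track record, discounted by flags
        weighted by the flagger's own credibility (one layer of indirection)."""
        base = min(history["years_active"] * 0.1 + history["upvote_ratio"], 1.0)
        penalty = sum(flagger_credibility.get(f, 0.0) for f in flag_events)
        return max(base - 0.05 * penalty, 0.0)

    def kid_safe_pool(videos, trust_scores, threshold=0.8):
        """Only videos from sufficiently trusted uploaders are even eligible
        for kids' recommendations; anomalies go to the human review queue."""
        return [v for v in videos if trust_scores.get(v["uploader"], 0.0) >= threshold]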
Kids < 4 really shouldn't have access to YouTube though.
Is that an option?
Is he really fast-talking? He seems kind of slow talking to me, when I watch his videos I use 2x speed.
I'm not sure heavy automation is needed here; people jump from content creator to content creator by word of mouth. In contrast, most algorithmic suggestions seem highly biased towards what is popular in general. I click on one wrong video in a news article, and for the next two days my recommendations are pop music, Jimmy Kimmel, Ben Shapiro, and animal videos.
There are probably a ton of situations like that in YouTube, where certain kinds of mistakes are hardly noticed (it shows you a video you weren't remotely interested in), but others can be really bad and need special training to avoid (such as where it shows violent or sexual content to someone who likes nursery rhymes and Peppa Pig).
https://www.newstatesman.com/science-tech/social-media/2019/...
>Just start banning certain creators from showing up in recommendations if their content crosses the line.
That also won't help, because it's not the creators whose content crosses the line, it's the commenters.
Sounds again like hyperbole from the NYT.
I find it more interesting to consider what would actually be a good outcome for the viewers. I suppose originally all those recommender algorithms simply optimized for viewer engagement. Obviously that may not be the best outcome for consumers. Perhaps enraging content makes people stick on a platform longer, for example. But it would be "better" for a viewer to see more educational content and even to disconnect after a certain while.
But how would you even quantify that, for the algorithm to be able to train for it?
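Purely as an illustration of how hard that is, any trainable objective would have to bolt together proxies that are themselves gameable; the weights and signal names below are invented:

    def session_value(watch_minutes, educational_minutes, rage_flags,
                      w_watch=1.0, w_edu=2.0, w_rage=5.0):
        """Toy 'healthy engagement' objective: reward time spent and
        educational content, penalize sessions that end in reports or
        rage-quits. Every input here is a proxy, and proxies get gamed."""
        return (w_watch * watch_minutes
                + w_edu * educational_minutes
                - w_rage * rage_flags)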
The son of a friend of mine taught himself programming from YouTube videos, which YouTube had recommended to him. I wouldn't complain about a result like that.
Likewise the YouTube algorithm helps many people, but criminals or unwanted people (like pedophiles) can also use it.
It's ok to think about ways to prevent it, but I don't think it should be the first concern. Imagine outlawing telephony, because criminals could benefit from its use.
Many people here maybe have or hope to have their own company one day. It seems normal to me to defend the concept of companies.
Personally I think blaming big companies is just lazy thinking that prepares the path to socialism. Of course companies, or the economy in general, aren't perfect. People are still trying to figure out how to do things better. That's normal. There is no need to assume evil greed behind everything.
Telephone service providers were monopolies, sometimes government monopolies. There was only one type of telephone you could use, and that was supplied by that same monopoly. It was illegal to attach anything else to the line either directly or indirectly. There were even (outside the US) laws on what you could say when talking on the phone.
Here's an article from 1994 about a modem maker who had products that were not officially licensed to connect to the network. https://www.newscientist.com/article/mg14219263-000-technolo...
https://www.theverge.com/2019/2/11/18220032/youtube-copystri...
It's not necessarily the case that people sympathise with companies because they're involved with running them. It might be that they involve themselves with running companies because they sympathise with them.
This is quite an insightful observation that I hadn't considered before.
https://redwoodbark.org/46876/culture/redwood-students-view-...
2019:
In response, the principal of the high school sent a note to students and parents Thursday night regarding the "hate-based video and text posts attributed to one of our students":
https://www.kron4.com/news/bay-area/bay-area-girl-says-she-l...
One is an investment/one time purchase and the other is a long-term annual liability, slated to grow.
See current Pinterest scandal and banning from Youtube of any video mentioning this.
https://idioms.thefreedictionary.com/fast+talker
https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFalla...
Cf: fast and loose:
https://idioms.thefreedictionary.com/play+fast+and+loose+wit...
Specifically, I don't see how HN nowadays is cultivating that sort of intellectual curiosity when I've seen how horrible the culture here can be to certain groups of people. Not only that, but the posting culture encouraged here is weighted in such a way as to be anti-intellectual: specifically, the idea that you're supposed to flag or downvote egregious posts rather than respond in kind. Now, I understand that you're not supposed to feed the trolls, but I believe this has resulted in a sort of 'tragedy of the commons', where nonsense is allowed to expand and grow without proper response. Some of the incredibly toxic things I've seen I've flagged and downvoted in kind, but the net result is that they go unchallenged.
I see HN encountering the same problems as Reddit, where HN is not prepared to deal with growth and bad actors. I see people being shadowbanned (and with good reason), but I don't think HN can remain a completely open platform without falling apart like many other platforms before it.
And as for the other sites/circles: I mostly post on semi-private, controlled forums, because that's where I can find the highest quality of users and debates. My point was that the overall impression of HN among people on those platforms is more starkly negative. And it's getting harder for me personally to see any real cultural difference between HN and a more tech-oriented subreddit.
One thing that did make it through, though, was the ruling that mediums which lack said limitation, like cable and internet, don't have the rationale for that restriction, and thus the censorship that weak minds had become accustomed to vanished in a puff of logic. This has been the case since cable porn channels were a thing.
By regulating YouTube you effectively regulate what /all/ platforms may push. It isn't simply that YouTube decides "you know what, we don't want to post that" (an exercise of their collective freedom of association), but "the government doesn't want us to post that, so we can't." You can't just deputize tasks to third parties and expect the limits on exercises of power to vanish. Otherwise we'd see hordes of private detectives as a workaround to Fourth Amendment rights.
Said regulations on YouTube would be a major infringement upon freedom of the press and speech. Not to mention it is logically equivalent to the government censoring the press whenever it fits whatever criteria they dislike.
The idea that large public internet forums inevitably degrade has been the default understanding of internet forums since before PG started HN—in fact he started HN as an experiment in escaping that fate (https://news.ycombinator.com/newswelcome.html). So another way of putting this is that you think HN's experiment has failed. That's fine, but I cling to a different view for the time being; out of cognitive dissonance if nothing else, since I spend days and nights working on it.
The guideline about flagging egregious comments is there to prevent obviously awful comments from generating off-topic, repetitive flamewars. If a comment like that is flagged and downvoted, that is challenging it. It means the community has rejected it. Better still, it minimizes its influence by stopping it at the root. Responding by pouring fuel on the flames is what causes it to expand and grow. Since you refer to not feeding trolls, you obviously know this. Beyond that, I'd have to see specific examples.
Since I don't know what semi-private controlled forums you're referring to and can't look at the criticisms of HN people are making there, it's impossible for me to evaluate them. That's a pity, because we might be missing opportunities for improvement. But the fact that they're starkly negative doesn't say much by itself. Smaller communities always have a negative view of larger communities—that's how community identity gets created. And cohesive communities always have a negative view of non-cohesive communities, because divisive topics inevitably produce responses that fall outside their acceptable spectrum. Sharing an acceptable spectrum is part of what makes a community cohesive. We don't have that on HN, certainly as a function of size, and probably also for other reasons.
And to call back to my original examples: The Katie Bouman thread(s) [1] [2] and the Women: Learn To Program [3] threads should give you quite a lot of pause. The fact that such benign incidents resulted in large flamewars is specifically an issue because it indicates a greater rot growing inside HN's culture.
There's obviously more examples (such as anything politically related almost immediately devolving into flamewars or whataboutism) which indicates that the mission statement simply isn't working. And the greater problem of flagging only works if the community as a whole agrees in a positive direction; if suddenly tomorrow HN was filled with people who held highly negative beliefs, then the flagging system fails.
When I look at other forums such as SomethingAwful, Penny Arcade, etc., I see them surviving because they have much stronger moderation while maintaining a sustainable community size. And right now I don't see HN outlasting either of those communities. Without some sort of cohesion guiding the community, the end result is that the site will eventually be pulled away from its original purpose.
To sum up what I think would be necessary:
1. Long time contributors would need to be emphasized more. Especially the high quality contributors, because they serve as a way of keeping a community united.
2. The mission statement of HN needs to be less vague and more to the point. Keep a focus solely on things that happen with the tech community and issuing harsher but smaller punishments to people that cause issues. You issued a warning to me a while ago because I was being an ass, and on other sites a warning like that would've resulted in a harsher punishment like a temporary probation.
3. Politics is inescapable as was found out during the 'political detox' week. But other forums can help moderate and control political debates and inflammation by keeping them solely inline with the site's mission statement (ie: a gaming site focuses on politics as it relates to games).
That said, implementing a lot of this might be almost impossible at this point, because people would decry censorship almost immediately, resulting in a large reactionary wave. Which unfortunately I think also says a lot about the overall laissez-faire moderation style HN employs for all but the most egregious and repeat offenses.
[1] https://news.ycombinator.com/item?id=19632086
All three things I just mentioned are fairly niche, comparatively, yet it knows that I've been watching a lot of them lately and is giving me more of it.
> The fact that such benign incidents resulted in large flamewars [...] indicates a greater rot growing inside HN's culture.
Such incidents result in flamewars because society is polarized on these topics and getting more so. Is there a single place on the open internet at HN's scale or greater that is any different, or indeed isn't worse? HN can't be immune from macro trends. (For example, there have lately been more nationalistic flamewars, especially about China. That's plainly related to shifts in geopolitics.) If HN is a ship, the sea is stormy. We can't control the waves, or how much vomiting the passengers do. If we focus on what we're actually able to affect, maybe we can prevent the ship from sinking.
I took another look at the threads you linked to and don't see what you see. The balance of the community there is clearly supportive. Most of the indignant comments are from people protesting against the negative reactions, which were clearly in the minority. Those don't represent the community, although (as always) the community is divided. So I come back to what I said in my first reply to you: if you're judging the community by the worst things that appear here, that's a fallacy. (Actually, I'm talking about your links #1 and #3. #2 was worse.)
Perhaps my standards are lower than yours? That's possible. On the other hand, sometimes when people post complaints like yours I have the impression that what they really want is for us to take their side on every issue and ban everyone on the opposite side. We can't do that. The community would not allow it, and trying to force it would destroy it—what good would that do? There's a deeper reason too: enforcing homogeneity would be incompatible with intellectual curiosity, and we're optimizing for the latter. The price of that is a certain turmoil on divisive topics—enough to convince ideologically committed users that the site is dominated by the other side (see "hostile media effect" above). If you don't think people on the opposite side of the issues have just as "starkly negative" a view of HN as people in your circles do, I have a long list of links I can share. In fact I almost unearthed them to post here, but decided to spare you.
True.
In the context of this particular case, I was assuming that nothing the current size of YouTube could exist illegally, as that would imply that whatever authority declared it illegal, yet was incapable of doing anything about it despite YouTube nominally living in its jurisdiction, must be anemic and impotent to the point of being nearly non-existent.
There's already an underground proliferation of video sites, spreading copyrighted content out of the bounds of what the rightsholders want, so it's pretty much assured we'd end up with illegal alternatives. :)
But hey they're a corporation and thus have no accountability to the public good.
This is just not true. A massive part of views originate from recommended/up next. Ask pretty much any creator. Only the core audience of a channel will have its notification bell on. Many users don't check the Subscriptions section, and either link in from an external source, know beforehand what they want to search for, or just watch what pops up in recommended.