In the same way parents can be blamed for not keeping their children safe around guns/alcohol/drugs, they should also be blamed for not keeping their children out of digital dangers, and mandatory age verification should stay out of it.
It almost sounds like multiple parents from a large number of households need to act collectively, in unison, to address the problem effectively. Hmm, collective action, that sounds familiar. I wonder if there's a way to enforce such collective action?
To be clear, I do agree that putting the ban on the software/platform side is the wrong approach. The ban should be on the physical hardware, similar to how guns/alcohol/tobacco, which are all physical objects, are regulated. But I don't have the luxury of letting perfect be the enemy of close enough.
Have these parents tried to not let the salesman in?
The actual problem is not that kids are using group communications technology, it's that the network effect in public interaction has been captured by private companies with a perverse incentive to maximize engagement.
That's just as much of a problem for adults as for teenagers, and the solution doesn't look anything like "ban people from using this category of thing"; it looks more like "require interoperability/federation", so there isn't a central middleman sitting on the chokepoint who makes more money the more time people waste using the service.
Humans survived well before the internet, the telephone, the telegraph, or even international post.
The "just say no" argument, basically.
It also assumes that we're willing to abandon a technological capability (not having to personally travel to someone's location to communicate with them) that humans have had since before Moses came down from the mountain, which seems like a fairly silly constraint to impose when there are obviously better alternatives available.
There is also the education part, which for some reason we are ignoring. Kids are going to be able to access drugs in places where they are unsupervised, they are going to be subject to peer pressure, and so on. The job of the parents is to prepare them for that, just as they should prepare them for the negative effects of social media.
I don't think that is the case any more since social media isn't social like it used to be?
IDK where to begin with this, because we clearly do have physical public spaces for interaction, whether free like parks or not free like coffee shops. People also hang out at each others' homes. Moreover, supply of public spaces increases when there's demand, much of which is being soaked up by social media.
You're also acting like we can't meaningfully distinguish between social media and other forms of communication, and that we have to be all-or-nothing about it, which is a bewildering take. Even social media can be meaningfully distinguished in terms of design features. Facebook, back when it was just posting on friends' walls, with no likes, comments, shares, friend/follower counts, or feeds, was fun and mostly harmless. LinkedIn was genuinely useful when the feed was nothing more than professional updates. They've all since morphed into toxic cesspools of social comparison, parasociality, polarization, disinformation, and other problems. Interoperability/federation doesn't solve those problems: most of the interoperable and federated solutions actually perpetuate them, because the problematic design features are part of the spec.
In my first message I was not targeting the parents who try to block this but can't; I was targeting the parents who have used YouTube to distract their kids since they were babies, those who give unrestricted access with no controls at all, those who don't care. We all know people like that.
This is just a hypothesis, but if parents were fined every time their kid accessed social media, I'm sure most kids wouldn't be on it.
The premise that parenting is wholly on the parents and society at large doesn't need to play any role in raising kids is a manifestation of the kind of libertarianism that appeals to techies on the spectrum who want to find the simplest possible ruleset for everything, but it just doesn't work that way in reality.
I didn't say that "parenting is wholly on the parents", that's a straw man argument. I said that parents who don't keep their children away from digital dangers should be blamed.
Parents have a huge radius of action, they can:
- Avoid using YouTube to entertain their babies/toddlers.
- Avoid buying tablets for their children.
- If they buy them a phone, use parental controls and restrict app usage.
- Monitor what their kids do on the internet.
- And the most important: educate their children to identify dangers.
Do you think a parent who does none of this shouldn't be blamed?
I want parents to embrace responsibility and act as parents. Delegating this kind of education to government is dangerous and has many negative collateral effects we will pay sooner or later.
Do you think a crack dealer should be allowed to hang around on the playground and every kid has to talk to him too (and it's up to parents to make sure the kids know not to buy his stuff)?
The government can and does already track whatever it wants about you. Businesses already track you unless you are extremely thorough about erasing your footprint. Adding a zero-knowledge proof through a trusted system that you are 18+ doesn't seem like the mountain people are claiming it is. You already have to provide ID and a credit card to get ISP access, and the byte patterns are traced back to your household. They already have a unique fingerprint on your browser and computer. The real harm is just the obvious encroachment that we can all see and have known about since the early 2000s. They don't need a "backdoor"; it feels like alarmism over a possible problem, when there is very real harm to children and teens (suicide rates, depression, bullying, mental health, etc.).
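For what it's worth, a rough sketch of what a minimal-disclosure age check could look like is below. To be clear, this is a plain signed attestation, not an actual zero-knowledge proof, and the issuer, token format, and key handling are all made up for illustration; it only shows the basic idea that a site can verify "over 18" without ever seeing who you are.

```python
# Toy sketch of a minimal-disclosure age attestation. NOT a real zero-knowledge
# proof, just a signed claim; the issuer, token format, and key handling are
# hypothetical. The verifying site only ever sees "over_18:true" plus a
# signature, never a name or birthdate.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical trusted issuer (a DMV-like authority) with a long-term key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_age_token() -> bytes:
    """The issuer has checked your ID out-of-band; the token carries no identity.
    A real scheme would blind the nonce so tokens can't be linked across sites."""
    nonce = os.urandom(16)
    claim = b"over_18:true:" + nonce
    return claim + issuer_key.sign(claim)

def verify_age_token(token: bytes) -> bool:
    """The site checks the issuer's signature on the bare claim."""
    claim, sig = token[:-64], token[-64:]  # Ed25519 signatures are 64 bytes
    try:
        issuer_pub.verify(sig, claim)
    except InvalidSignature:
        return False
    return claim.startswith(b"over_18:true:")

print(verify_age_token(issue_age_token()))  # True
```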
To go back to smoking/alcohol/guns: one could argue it is an infringement, but ultimately it does seem to have been the right choice for society at large, and the increased "invasion of privacy" has been pretty minor. If anything, the opt-in stuff like credit cards, cell phones, GPS, car apps, and streaming services have all been far larger invasions of privacy that people willingly embrace.
Also, the fact that gov and companies are already tracking people doesn't mean we should consent to more ways of tracking.
I see nothing in their comments to suggest that.
They argued against the government tracking people, that's it.
How many public discussions have you participated in at a coffee shop? If you have something to say and you go there and start trying to chat up anyone who walks in the door, what response do you expect from the proprietors?
If you go to a park which is within 10 miles of the median home, how many people do you expect to encounter there at any given time, especially in the heat of summer or cold of winter?
You need indoor spaces that don't have some private commercial operator, like community centers or hackerspaces, but those are the things that get priced out by high real estate costs.
> People also hang out at each others' homes.
You move to a new city and want to meet people. Are you expecting many strangers to invite you into their homes without introduction?
> Moreover, supply of public spaces increases when there's demand, much of which is being soaked up by social media.
Social media costs time. Physical spaces cost even more time (since you need to travel there) and they cost money (to cover the rent). What happens when you then make the rent high?
> Even social media can be meaningfully distinguished in terms of design features.
So is e.g. Usenet social media or not? Does it matter if it provides ordering options other than search by date?
> They've all since morphed into toxic cesspools of social comparison, parasociality, polarization, disinformation, and other problems.
Because those things increase engagement and the central middle man gets paid for increasing engagement.
> Interoperability/federation doesn't solve those problems
It removes the perverse incentive to design things that way.
> most of the interoperable and federated solutions actually perpetuate them, because the problematic design features are part of the spec.
Then why is Neocities or "add a Bluesky comments section to your blog" so much less toxic than Facebook?
The primary thing driving toxicity in certain federated networks is when they get a huge influx of users after some incumbent social network gets into the news over political suppression. Then a mass of the target's partisans try to switch to something else in protest, and partisans are toxic, so if you get inundated with disproportionately partisan exiles you've got a problem. That doesn't happen if you federate the whole main network, containing the majority of the population including moderates and apolitical subjects, rather than disproportionately one side's most excitable militants.
For instance, a simple law like "Companies should take measures, even if it lowers revenue and growth, to reduce addictive behavior. They should do so more emphatically for underage users, and even more so for users under 13." But no. Instead, they will write 40 pages of what companies should implement in their software, and then have the 40 pages be quickly outdated, partially impossible to implement, and hell for developers who try to do the right thing to comply. Total crap from standards and regulatory bodies that helps nothing and slows down all innovation.
The solution will only come from social pressure, movements to delete the apps, and parents actually educating their children to avoid addictive features. It will take time. But government will solve nothing.
"I totally understand that "the salesman" is everywhere and that a single person can't fight against that, but he is everywhere because most parents are not blocking him in the first place, and that's exactly my point. Those are the parents that need to be blamed."
In my state, buying cigarettes requires presenting your driver's license, which is scanned at every purchase. Not sure about alcohol.