zlacker

[return to "AI Usage Policy"]
1. Versio+Qb[view] [source] 2026-01-23 11:29:40
>>mefeng+(OP)
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competence as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

◧◩
2. monega+yf[view] [source] 2026-01-23 11:59:40
>>Versio+Qb
> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying with a screenshot from GPT?

ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - with at least a bit of effort there - a copy/paste of the whole wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why i applaud every code of conduct that has public ridicule as punishment for wasting everybody's time

◧◩◪
3. Sharli+Yg[view] [source] 2026-01-23 12:10:44
>>monega+yf
Problem is, people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.
◧◩◪◨
4. Suzura+Jp[view] [source] 2026-01-23 13:14:16
>>Sharli+Yg
My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.
◧◩◪◨⬒
5. buggy6+Vp[view] [source] 2026-01-23 13:15:47
>>Suzura+Jp
Your boss sounds hilariously naive about how the world works.
◧◩◪◨⬒⬓
6. TeMPOr+Ks[view] [source] 2026-01-23 13:32:15
>>buggy6+Vp
This is a good heuristic, and it's how most things in life operate. It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss and jail time serving as strong incentive for food manufacturers and vendors to not fuck this up. The same is the case with drugs, utilities, car safety and other important aspects of life.

So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.

The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.

So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.

--

[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.

[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but it scales better.

[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!

◧◩◪◨⬒⬓⬔
7. miki12+IE[view] [source] 2026-01-23 14:40:35
>>TeMPOr+Ks
> [1]

Cybersecurity is also an exception here.

"men with guns" only work for cases where the criminal must be in the jurisdiction of the crime for the crime to have occurred.

If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somewhere else, the British police don't care. If you hack a bank in London, though, you may very well be in North Korea.

[go to top]