zlacker

[return to "AI Usage Policy"]
1. Versio+Qb[view] [source] 2026-01-23 11:29:40
>>mefeng+(OP)
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

2. monega+yf[view] [source] 2026-01-23 11:59:40
>>Versio+Qb
> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying to you with a screenshot from GPT?

ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time.

3. Sharli+Yg[view] [source] 2026-01-23 12:10:44
>>monega+yf
Problem is, people seriously believe that whatever GPT tells them must be true, because… I don't even know why. Just because it sounds self-confident and authoritative? Because computers are supposed not to make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.
4. pousad+ik[view] [source] 2026-01-23 12:38:17
>>Sharli+Yg
I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

Unless I have been reading very different science fiction, I think it's definitely not that.

I think it's more the confidence and seeming plausibility of LLM answers.

5. oneeye+tp[view] [source] 2026-01-23 13:13:09
>>pousad+ik
People are literally taking Black Mirror storylines and trying to manifest them. I think they did a `s/dys/u/` and don't know how to undo it...