zlacker

[return to "Be a property owner and not a renter on the internet"]
1. rpcope+9j[view] [source] 2025-01-03 04:07:47
>>dend+(OP)
> Exploiting user-generated content.

You know, if I've noticed anything in the past couple of years, it's that even if you self-host your own site, it's still going to get hoovered up and used/exploited by things like AI training bots. Between everyone's code getting trained on, even when it's AGPLv3 or something similarly restrictive, and everything public on the internet getting "trained" on and "transformed" to basically launder it via "AI", I can absolutely see why someone rational would want to share a whole lot less, anywhere, in an open fashion, regardless of where it's hosted.

I'd honestly rather see and think more about how to segment communities locally and go back to the "fragmented" way things once were. It's easier to want to share with other real people than to inadvertently work for free to enrich companies.

2. alibar+BC[view] [source] 2025-01-03 07:46:40
>>rpcope+9j
In my experience, I like using AI (GitHub Copilot) for things like answering questions about a language that I could easily verify in the documentation: basically 'yes/no' questions. To be honest, if I were writing such documentation for a product/feature, I wouldn't mind the AI hoovering it up.

I've found it to be pretty crap at doing things like actual algorithms or explaining 'science', the kind of interesting work that I find on websites or blogs. It just throws out sensible-looking code and nice-sounding words that don't quite work, or misses out huge chunks of understanding / reasoning.

Despite not having done it in ages, I enjoy writing and publishing online info that I would have found useful when I was trying to build / learn something. If people want to pay a company to mash that up and serve them garbage instead, then more fool them.

3. namari+WO[view] [source] 2025-01-03 10:04:51
>>alibar+BC
I argued years ago, based on how LLMs are built, that they would only ever amount to lossy and very memory-inefficient compression algorithms. The whole 'hallucination' framing misses the mark: LLMs aren't just 'occasionally' wrong or hallucinating; they can only ever return lower-resolution versions of what was in their training data. I was mocked then, but I feel vindicated now.
4. richar+FR[view] [source] 2025-01-03 10:40:19
>>namari+WO
They can combine two things in a way that never appeared together in the source material.
5. namari+q91[view] [source] 2025-01-03 13:39:41
>>richar+FR
YouTube's compression algorithm also produces lots of artifacts that were never filmed by the video producers.