zlacker

[parent] [thread] 3 comments
1. namari+(OP)[view] [source] 2025-01-03 10:04:51
Years ago I argued, based on how LLMs are built, that they would only ever amount to lossy and very memory-inefficient compression algorithms. The whole 'hallucination' framing misses the mark: LLMs are not 'occasionally' wrong; they can only ever return lower-resolution versions of what was in their training data. I was mocked then, but I feel vindicated now.
replies(1): >>richar+J2
2. richar+J2[view] [source] 2025-01-03 10:40:19
>>namari+(OP)
They can combine two things in a way that never appeared together in the source material.
replies(1): >>namari+uk
3. namari+uk[view] [source] [discussion] 2025-01-03 13:39:41
>>richar+J2
YouTube's compression algorithm also produces lots of artifacts that were never filmed by the video producers.
replies(1): >>wizzwi+Ns
4. wizzwi+Ns[view] [source] [discussion] 2025-01-03 14:48:03
>>namari+uk
And datamoshing lets you produce effects that weren't in the source clips.