zlacker

1. BobbyT+(OP)[view] [source] 2025-08-22 01:44:50
I’m curious … So “transformative” is not necessarily “derivative”?

Seems to me the training of AI is not radically different from a compression algorithm building up a dictionary and compressing data.

Yet nobody calls JPEG compression “transformative”.

Could one do lossy compression over billions of copyrighted images to “train” a dictionary?
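
For what it's worth, ordinary compressors already support exactly that "train a dictionary over samples" step. A minimal sketch with the Python zstandard bindings (the sample file names are made up):

    # Sketch only: train a shared compression dictionary from sample files.
    # Assumes the third-party "zstandard" package; file names are hypothetical.
    import zstandard

    samples = [open(path, "rb").read() for path in ["a.txt", "b.txt", "c.txt"]]

    # Build a 16 KB dictionary from the samples.
    dictionary = zstandard.train_dictionary(16 * 1024, samples)

    # Compress and decompress new data with that shared dictionary.
    cctx = zstandard.ZstdCompressor(dict_data=dictionary)
    dctx = zstandard.ZstdDecompressor(dict_data=dictionary)
    payload = b"some new document"
    assert dctx.decompress(cctx.compress(payload)) == payload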

replies(2): >>zahlma+k5 >>ipaddr+mk
2. zahlma+k5[view] [source] 2025-08-22 02:52:17
>>BobbyT+(OP)
> I’m curious … So “transformative” is not necessarily “derivative”?

(not legal advice)

Transformative works are necessarily derivative, but the transformation supports a legal claim of "fair use" even though a derivative work was made.

https://en.wikipedia.org/wiki/Transformative_use

3. ipaddr+mk[view] [source] 2025-08-22 06:11:57
>>BobbyT+(OP)
A compression algorithm doesn't transform the data; it stores it in a different format. Storing a story in a .txt file vs. a Word file doesn't transform the data.
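
For lossless compression that's literally true: the round trip returns the exact original bytes. A tiny Python sketch:

    # Lossless round trip: decompression returns the exact original bytes.
    import zlib

    original = b"Call me Ishmael. Some years ago..."
    compressed = zlib.compress(original)
    assert zlib.decompress(compressed) == original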

An LLM is looking at the shape of words and ideas at scale and using that to provide answers.

replies(1): >>const_+Zc1
4. const_+Zc1[view] [source] [discussion] 2025-08-22 14:07:51
>>ipaddr+mk
No, a compression algorithm does transform the data, particularly a lossy one. The pixels stored in the output are not the input pixels; they're new pixels. That's why you can't recover the original by decompressing a JPEG: it's a new image that just happens to look like the original. But it might not even look like it - some JPEGs are so deep-fried they become their own form of art. This is very popular in meme culture.
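
This is easy to check if Pillow and NumPy are installed - round-trip an image through JPEG and compare pixels; a rough sketch:

    # Lossy round trip: the decoded JPEG pixels are not the original pixels.
    # Assumes Pillow and NumPy are installed.
    import io
    import numpy as np
    from PIL import Image

    rng = np.random.default_rng(0)
    original = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

    # Encode to JPEG in memory, then decode it back.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=50)
    decoded = Image.open(io.BytesIO(buf.getvalue()))

    diff = np.abs(np.asarray(original, int) - np.asarray(decoded, int))
    print("mean absolute pixel error:", diff.mean())  # > 0: information is gone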

The only difference, really, is that we know how the JPEG algorithm works. If I wanted to, I could painstakingly make a JPEG by hand. We don't know how LLMs work.
