zlacker

[parent] [thread] 4 comments
1. const_+(OP)[view] [source] 2025-07-08 04:46:30
Well I mean you're constructing very convoluted and weak examples.

I think, in your example, the obvious answer is no, they're not entitled to any profits of Gravity. How could you possibly prove Gravity has anything to do with someone reading, or not reading, a textbook? You can't.

However, AI participates in the exact same markets it trains from. That's obviously very different. It is INTENDED to DIRECTLY replace the things it trains on.

Meaning, not only does an LLM output directly replace the textbook it was trained on, but that behavior is the sole commercial goal of the company. That's why they're doing it, and that's the only reason they're doing it.

replies(1): >>parlia+me1
2. parlia+me1[view] [source] 2025-07-08 17:03:02
>>const_+(OP)
> It is INTENDED to DIRECTLY replace the things it trains on.

Maybe this is where I'm having trouble. You say "exact same markets" -- how is a print book the exact same market as a web/mobile text-generating human-emulating chat companion? If that holds, why can't I say a textbook is the exact same market as a film?

I could see the argument if someone published a product that was fine-tuned on a specific book, and marketed as "use this AI instead of buying this book!", but that's not the case with any of the current services on the market.

I'm not trying to be combative, just trying to understand... they seem like very different markets to me.

replies(1): >>const_+A92
3. const_+A92[view] [source] [discussion] 2025-07-09 01:09:44
>>parlia+me1
> how is a print book the exact same market as a web/mobile text-generating human-emulating chat companion? If that holds, why can't I say a textbook is the exact same market as a film?

Because the medium is actually the same. The content of a book is not paper, or a cover. It's text, and specifically the information in that text.

LLMs are intended to directly compete with and outright replace that use case. I don't need a textbook on, say, Anatomy, because ChatGPT can structure and explain Anatomy to me, in fact with arguably the exact same content slightly rearranged.

This doesn't really hold for fictional books, nor does it hold for movies.

Watching a movie and reading a book are inherently different experiences, which cannot replace one another. Reading a textbook and asking ChatGPT about topic X is, for all intents and purposes, the same experience. Especially since, remember, most textbooks are online today.

replies(1): >>fragme+xc2
4. fragme+xc2[view] [source] [discussion] 2025-07-09 01:47:18
>>const_+A92
Is it? If a teacher reads a book, then gives a lecture on that topic, that's decidedly not the same experience. Which step of that process makes it not the same experience? Is it the fact that they read the book using their human brain and then formed words in a specific order? Is it the fact that they're saying it out loud that's transformative? If we use ChatGPT's TTS feature, why is that not the same thing as a human talking about a topic after they've read a book, since the content has been rearranged either way?
replies(1): >>const_+Yd2
5. const_+Yd2[view] [source] [discussion] 2025-07-09 02:05:03
>>fragme+xc2
Well, there are multiple reasons why it's not the same experience. It's a different medium, but it's also different content. The textbook may be used as a jumping-off point, supplemented by decades of real-life experience the professor has.

And, I think, the elephant in the room with these discussions: we cannot just compare ChatGPT to a human. That's not a foregone conclusion and, IMO, no, you can't just do that. You have to justify it.

Humans are special. Why? Because we are Human. Humans have different and additional rights which machines, and programs, do not have. If we want to extend our rights to machines, we can do that... but not for free. Oh no, you must justify that, and it's quite hard. Especially when said machines appear to work against Humans.
