zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. mwigda+OQ[view] [source] 2024-05-18 04:13:00
>>fnbr+(OP)
The best approach to circumventing the nondisclosure agreement is for the affected employees to get together, write out everything they want to say about OpenAI, train an LLM on that text, and then release it.
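
Concretely, the scheme is just a small fine-tuning job. Here's a minimal sketch using Hugging Face transformers, where "gpt2" is a stand-in for whatever small open causal LM you like and "anecdotes.txt" is the hypothetical dump of everything the ex-employees wrote down (both names made up for illustration):

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    model_name = "gpt2"  # stand-in; any small open causal LM would do
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "anecdotes.txt": the hypothetical dump of what everyone wrote down,
    # one passage per line.
    dataset = load_dataset("text", data_files={"train": "anecdotes.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True,
                                     remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="laundered-lm",
                               num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        # mlm=False => plain causal-LM objective (next-token prediction)
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("laundered-lm")  # the part you "release"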

Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user rather than of those who produced the model, anyone could then freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine. None of it could be used against the original authors without OpenAI invalidating its own legal stance on its own models.
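
And "generating an infinite number of anecdotes" is just sampling from the result. Continuing the same hypothetical sketch (the prompt is made up):

    from transformers import pipeline

    # Load the hypothetical fine-tune saved above and sample from it.
    gen = pipeline("text-generation", model="laundered-lm")
    for sample in gen("One thing nobody outside the company knows:",
                      max_new_tokens=100, do_sample=True,
                      num_return_sequences=3):
        print(sample["generated_text"])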

2. TeMPOr+0T[view] [source] 2024-05-18 04:55:59
>>mwigda+OQ
Clever, but no.

The argument that LLMs aren't copyright laundromats hinges on the scale and non-specificity of training. There's a difference between "the LLM reproduced this piece of copyrighted work because it memorized it from being fed literally half the internet" and "the LLM was intentionally trained to reproduce variants of this particular work". Whatever one's stance on the former case, the latter is plain copyright infringement, and an admission of it.

In other words: GPT-4 gets away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.

3. dorkwo+FX[view] [source] 2024-05-18 06:16:02
>>TeMPOr+0T
How many sources do you need to steal from for it to no longer be considered stealing? Two? Three? A hundred?
4. TeMPOr+f01[view] [source] 2024-05-18 06:57:40
>>dorkwo+FX
Copyright infringement is not stealing.
5. psycho+W21[view] [source] 2024-05-18 07:41:55
>>TeMPOr+f01
True.

Making people believe that anything but their own body and mind can be considered their property is stealing their lucidity.
