zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. mwigda+OQ[view] [source] 2024-05-18 04:13:00
>>fnbr+(OP)
The best approach to circumventing the nondisclosure agreement is for the affected employees to get together, write out everything they want to say about OpenAI, train an LLM on that text, and then release it.

Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user of the model rather than those who produced it, anyone could freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine, that couldn't be used against the original authors without OpenAI invalidating their own legal stance with respect to their own models.

2. andyjo+q41[view] [source] 2024-05-18 08:02:37
>>mwigda+OQ
Clever, but the law is not a machine or an algorithm. Intent matters.

Training an LLM with the intent of contravening an NDA is just plain <intent to contravene an NDA>. Everyone would get sued anyway.

3. jeffre+961[view] [source] 2024-05-18 08:29:04
>>andyjo+q41
But training a commercial model is done with the intent of not paying the original authors; how is that different?
4. repeek+d81[view] [source] 2024-05-18 08:53:43
>>jeffre+961
> done with the intent to not pay the original authors

No one building this software wants to “steal from creators,” and the legal precedent for using copyrighted works for training is clear from the NYT case against OpenAI.

It’s why deals like the recent one with Reddit to train on its data (which Reddit owns and users surrender when using the platform) are becoming so important; the same goes for Twitter/X.
