zlacker

[parent] [thread] 1 comment
1. pedroc+(OP)[view] [source] 2023-07-15 19:40:32
That's clearly not enough. There's a continuum between producing an exact copy of the input and having genuine creativity because the model actually learned something. A model that just reformats code and changes all the variable names would pass your test and yet be a clear copyright violation. This whole argument requires that the neural network weights do something creative because they learned from the code, rather than just transforming it. We're careful about this even with humans, using things like clean-room reimplementations to make sure.
replies(1): >>hakfoo+dC
2. hakfoo+dC[view] [source] 2023-07-16 01:27:36
>>pedroc+(OP)
The window for genuine creativity may be narrow, and it varies by task.

There are a lot of pretty complex prompts where, if you asked a group of reasonably skilled programmers to implement them, they'd produce code that was "reformatted with changed variable names" but otherwise identical. Many of us learned from the same foundational materials, and there are only a handful of non-pathological ways to implement a linked list of integers, for example.
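To make the linked-list point concrete, here is a minimal sketch (in Python, chosen arbitrarily) of the shape that independent implementations tend to converge on; all names here are illustrative, not from any particular codebase:

```python
# A singly linked list of integers. Independent authors almost always
# land on this structure: a node holding a value and a "next" pointer,
# plus append/traverse helpers. Variable names are the main difference.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, value):
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next is not None:
            cur = cur.next
        cur.next = node

    def to_list(self):
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.value)
            cur = cur.next
        return out

ll = LinkedList()
for n in (1, 2, 3):
    ll.append(n)
print(ll.to_list())  # → [1, 2, 3]
```

Rename `Node` to `Cell` and `cur` to `ptr` and you have a "different" implementation that no plagiarism detector could distinguish from an honest independent one.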

With code this may be more obvious, since you can't easily obfuscate things with synonyms and sentence-structure changes. But even with prose, there is going to be a tendency toward "conventional" language choices, driving you back towards a familiar-looking mean.
