zlacker

[parent] [thread] 2 comments
1. Fergus+(OP)[view] [source] 2023-12-27 17:57:09
At some point (maybe not yet) it becomes hard to argue convincingly that what AI outputs is plagiarism but human output isn't. Not because AI is conscious or whatever, but because it never outputs exactly what was in the training data and, like humans, it combines everything it knows to solve the problem in front of it.
replies(2): >>kiba+X3 >>Xelyne+Bp1
2. kiba+X3[view] [source] 2023-12-27 18:18:10
>>Fergus+(OP)
It sounds like we should just make all these AI open source and freely available to prevent any single individuals or corporations from monopolizing the profit off of it.
3. Xelyne+Bp1[view] [source] 2023-12-28 04:14:47
>>Fergus+(OP)
Can't the same argument be used to say "lossy compression is not plagiarism"?

If I encode a movie with H.264, there is no way to get it to output "exactly what was in the training data", and I can argue that "like humans extract important information from large dumps of data, the algorithm does the same".

I don't have any reservations about calling an H.264-encoded video redistributed with the wrong attribution "plagiarism", so I don't see what's different about Large X Models that they deserve a special pass.
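To make the analogy concrete: here's a toy lossy "codec" in Python (a minimal sketch, not H.264 or any real codec; the function names are illustrative). It quantizes samples to a coarse grid and reconstructs them — the decoder never returns exactly what went in, yet the output is unmistakably derived from the input.

```python
def lossy_encode(samples, step=0.5):
    """Quantize each sample to the nearest multiple of `step` (discards information)."""
    return [round(s / step) for s in samples]

def lossy_decode(codes, step=0.5):
    """Reconstruct an approximation of the original samples."""
    return [c * step for c in codes]

original = [0.12, 0.49, 0.87, 1.31]
restored = lossy_decode(lossy_encode(original))

assert restored != original  # never bit-identical to the source...
assert all(abs(a - b) <= 0.25 for a, b in zip(original, restored))  # ...but clearly derived from it
```

The point being: "doesn't reproduce the input exactly" describes every lossy transform, so on its own it can't distinguish transformation from plagiarism.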
