zlacker

1. datacr+ (OP) 2023-01-15 17:37:08
I get their argument in principle, but I don't think it holds up once you account for the scale of the Stable Diffusion model. They show a simple spiral and point out that the technology can produce a similar-looking spiral, and call that a copy. But when you factor in the billions of training images, the amount of specific information retained from any one source is on the order of 1 byte.
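A quick back-of-envelope sketch of that "1 byte" claim, using rough public figures (roughly a billion parameters for Stable Diffusion v1, and the ~2.3 billion image-text pairs of LAION-2B; both numbers are approximations, not exact):

```python
# Capacity argument: total model size divided by training set size
# gives an upper bound on average bytes retained per training image.
params = 1.0e9           # ~1 billion parameters (approximate)
bytes_per_param = 2      # fp16 storage
model_bytes = params * bytes_per_param

training_images = 2.3e9  # LAION-2B scale (approximate)

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of model capacity per training image")
# prints ~0.87 -- under a byte per image on average
```

Of course this is only an average; it doesn't rule out a model memorizing a few heavily duplicated images while retaining almost nothing from the rest.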

They would have to show that the model copies ALL source images with perfect retention, and they are 100 percent full of shit if they think they can demonstrate that. What you may find is that some models out there are heavily biased toward certain source images and can produce outputs that are too similar to the original works; in that case, there may be an issue.
