zlacker

1. dawner+(OP) 2022-10-16 20:46:29
In theory AI should never return an exact copy of a copyrighted work, or even anything close enough that you could argue it's the original “just changed”. If only the style is the same, I think that's fine, no different from someone else cloning it. But there are definitely outputs from Stable Diffusion that look like the original with some weird artifacts.

We need regulation around it.

replies(3): >>rtkwe+B6 >>XorNot+fa >>orbita+oL
2. rtkwe+B6 2022-10-16 21:48:36
>>dawner+(OP)
With code it's much easier for that to happen, because the avenues for expression are significantly more limited than with image generation. For Copilot to be useful it has to produce code that compiles and is reasonably terse and understandable. The compiler in particular is a big bottleneck on the range of possible output.
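For example (a toy sketch, not actual Copilot output): ask a few people to independently write "sum an array of ints" in C and the results converge on nearly the same code, because the compiler and the task leave so little room for the expression to vary.

    #include <stdio.h>

    /* Hypothetical example: most independent implementations of
       "sum an array of ints" end up almost identical to this one. */
    int sum(const int *xs, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;
    }

    int main(void) {
        int xs[] = {1, 2, 3, 4};
        printf("%d\n", sum(xs, 4)); /* prints 10 */
        return 0;
    }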
3. XorNot+fa 2022-10-16 22:20:43
>>dawner+(OP)
> there are definitely outputs from Stable Diffusion that look like the original with some weird artifacts.

Do you have examples? SD will generate photoreal outputs and then get subtle details (hands, faces) wrong, but unless you have the source image in hand, you have no way of knowing whether it's a "source image" or not.

4. orbita+oL 2022-10-17 05:07:21
>>dawner+(OP)
This is like saying "we need regulation around bugs in software", with similar consequences. ML models are generally too large to guarantee there are no bugs. Same with software.