zlacker

[return to "We’ve filed a lawsuit challenging Stable Diffusion"]
1. Traube+42[view] [source] 2023-01-14 07:18:07
>>zacwes+(OP)
You are literally modern-day Luddites.

If you succeed, you will undo decades of technological progress.

2. klabb3+Z3[view] [source] 2023-01-14 07:43:53
>>Traube+42
How? If you want to distribute a commercial non-research model, simply train it on data sets where people have given consent. I doubt that research would be affected.

At most, I’d expect copyright legislation around training to slightly delay commercial mass-deployment. Given the huge socio-technical transition that is ahead of us, it’s probably a good thing to let people have a chance to form an opinion before opening the floodgates. Judging by our transition into ad-tech social media, I’m not exactly confident that we’ll end up in a good place, even if the tech itself has a lot of potential.

3. astran+8c[view] [source] 2023-01-14 09:09:41
>>klabb3+Z3
> How? If you want to distribute a commercial non-research model, simply train it on data sets where people have given consent. I doubt that research would be affected.

That isn't necessary here: the model was trained in Germany, and the law there explicitly says you don't need consent to train on this data.

4. visarg+Ce[view] [source] 2023-01-14 09:39:57
>>astran+8c
Use the model to generate image variations and filter out anything that looks too similar to the original; those variations can then replace the original artworks in the training set. Also remove artist names from the captions; you can later map them to new style IDs. This makes it harder to duplicate the exact expression of an original work while still learning its ideas and visual styles in a more abstracted way.

For all the non-problematic training images you can use the originals. Some artists might want their names to become popular as style keywords.
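
A minimal sketch of that pipeline, with generate_variations() and embed() as hypothetical stand-ins for whatever image model and feature extractor you actually use; the similarity cutoff and the artist-to-style-ID table are illustrative values, not tested ones:

    import re
    import numpy as np

    SIMILARITY_CUTOFF = 0.92  # assumed threshold; tune per dataset
    ARTIST_TO_STYLE_ID = {"artist name": "style_0001"}  # hypothetical mapping

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def build_replacements(originals, generate_variations, embed, n_variants=4):
        # Keep only generated variants far enough from the original in
        # embedding space to avoid near-duplicates of the source work.
        kept = []
        for image in originals:
            ref = embed(image)
            for variant in generate_variations(image, n=n_variants):
                if cosine(embed(variant), ref) < SIMILARITY_CUTOFF:
                    kept.append(variant)
        return kept

    def scrub_caption(caption):
        # Swap known artist names in a caption for opaque style IDs.
        for name, style_id in ARTIST_TO_STYLE_ID.items():
            caption = re.sub(re.escape(name), style_id, caption, flags=re.IGNORECASE)
        return caption

Passing the generator and embedder in as plain functions keeps the filtering logic independent of any particular model backend.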
