zlacker

[parent] [thread] 3 comments
1. gpdere+(OP)[view] [source] 2022-12-15 12:45:01
Also, if a theoretical purged-dataset SD were released, it would still be easy and cheap for users to extend it to imitate any art style they want. Since they wouldn't be redistributing the model, and presumably they would use art they have already licensed, the copyright issue would be further muddled.

I think attempting to prevent this is a losing battle.

replies(1): >>Gigach+t
2. Gigach+t[view] [source] 2022-12-15 12:46:52
>>gpdere+(OP)
I’m not too sure how it works, but someone commented that you can take the model and “resume training” it on the extra dataset you want to add.

Given most of the heavy lifting is already done, this seems like a pretty easy thing for anyone to do.
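
Something like this, as a rough sketch of what “resume training” means in PyTorch (the model, data, and loss here are toy placeholders; actually fine-tuning Stable Diffusion also involves its UNet, text encoder, VAE, and noise scheduler):

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-in for the already-trained model; a real run would load the released weights instead.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
    # model.load_state_dict(torch.load("pretrained_weights.pt"))  # the heavy lifting already done

    # Stand-in for the extra dataset (e.g. licensed images, preprocessed into tensors).
    extra_data = TensorDataset(torch.randn(64, 512), torch.randn(64, 512))
    extra_loader = DataLoader(extra_data, batch_size=8, shuffle=True)

    optimizer = optim.AdamW(model.parameters(), lr=1e-5)  # small learning rate: nudge the weights, don't overwrite them
    loss_fn = nn.MSELoss()

    model.train()
    for epoch in range(3):  # a few passes over the small extra dataset
        for inputs, targets in extra_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), "resumed_weights.pt")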

replies(2): >>gpdere+I1 >>mejuto+0t
3. gpdere+I1[view] [source] [discussion] 2022-12-15 12:53:04
>>Gigach+t
https://dreambooth.github.io/

edit: the examples are all about objects, but my understanding is that it is capable of style transfers as well.
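
Once a model has been fine-tuned that way, using the learned style is just a prompt containing the bound rare token. A sketch with Hugging Face diffusers, where the checkpoint path and the “sks” token are placeholders for whatever the fine-tune produced:

    import torch
    from diffusers import StableDiffusionPipeline

    # "dreambooth-output" is a placeholder for a locally fine-tuned checkpoint directory.
    pipe = StableDiffusionPipeline.from_pretrained(
        "dreambooth-output", torch_dtype=torch.float16
    ).to("cuda")

    # "sks" stands for the rare identifier DreamBooth bound to the new subject or style.
    image = pipe("a seaside village painted in the style of sks").images[0]
    image.save("village_in_learned_style.png")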

4. mejuto+0t[view] [source] [discussion] 2022-12-15 14:56:34
>>Gigach+t
It is called fine-tuning or transfer learning: you usually freeze the pretrained layers and train only the last layer(s).

Here is an example for Keras (a popular ML framework): https://keras.io/guides/transfer_learning/
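
Roughly what that guide boils down to, as a sketch (ResNet50 and the 10-class head are placeholders for whatever base model and task you actually have):

    from tensorflow import keras

    # Pretrained base; freeze it so the already-learned features stay put.
    base = keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False

    # New last layer, the only part that gets trained on your data.
    inputs = keras.Input(shape=(224, 224, 3))
    x = base(inputs, training=False)  # keep batch-norm layers in inference mode
    outputs = keras.layers.Dense(10, activation="softmax")(x)
    model = keras.Model(inputs, outputs)

    model.compile(optimizer=keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(your_dataset, epochs=5)  # only the Dense head's weights change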
