zlacker

Who knew the first AI battles would be fought by artists?
1. 4bpp+65 2022-12-15 12:25:25
>>dredmo+(OP)
Surely, if the next Stable Diffusion had to be trained on a dataset purged of every image not under a permissive license, that would be at most a minor setback on AI's road to obsoleting the kind of painting that is more craft than art. Do artists not realise this (perhaps because of some conceit along the lines of "it can only produce good-looking images because it is rearranging pieces of Real Artists' works it was trained on"), are they hoping to inspire overshoot legislation (perhaps something following the music industry model in several countries: AI-generated images presumed pirated until proven otherwise, with protection money paid to an artists' guild), or is this just a desperate rearguard action?
2. gpdere+L8 2022-12-15 12:45:01
>>4bpp+65
Also, even if a theoretical purged-dataset SD were released, it would still be easy and cheap for users to extend it to imitate any art style they want. Since they wouldn't be redistributing the model, and presumably would be using art they had already licensed, the copyright issue would be muddled even further.

I think attempting to prevent this is a losing battle.

3. Gigach+e9 2022-12-15 12:46:52
>>gpdere+L8
I’m not too sure how it works, but someone commented that you can take the model and “resume training” it on the extra dataset you want to add.

Given most of the heavy lifting is already done, this seems like a pretty easy thing for anyone to do.
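
If it works anything like an ordinary Keras model, “resume training” would just mean loading the saved weights and calling fit again on the new data. A rough sketch (the checkpoint path, shapes and arrays below are all placeholders, not anything the real Stable Diffusion weights actually ship as):

    import numpy as np
    import keras

    # Load the previously trained model (placeholder checkpoint path).
    model = keras.models.load_model("pretrained_model.keras")

    # Stand-ins for the extra images you want the model to pick up;
    # the shapes have to match whatever the loaded model expects.
    extra_x = np.random.rand(64, 256, 256, 3).astype("float32")
    extra_y = np.random.rand(64, 256, 256, 3).astype("float32")

    # Recompile with a small learning rate so the new data nudges the
    # existing weights instead of wiping out what was already learned,
    # then simply keep training on the new dataset.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss="mse")
    model.fit(extra_x, extra_y, epochs=3, batch_size=8)

    model.save("finetuned_model.keras")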

4. mejuto+LB 2022-12-15 14:56:34
>>Gigach+e9
It is called fine-tuning or transfer learning: you usually freeze the pretrained layers and only train the last one.

Here is an example for Keras (a popular ML framework): https://keras.io/guides/transfer_learning/
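
Condensed from that guide, it boils down to freezing the pretrained base and training a new head on your own data. A minimal sketch, assuming a Keras setup (Xception, the 150x150 input size and the random arrays are just stand-ins for a real base model and dataset):

    import numpy as np
    import keras
    from keras import layers

    # Pretrained base network with ImageNet weights, minus its classifier head.
    base_model = keras.applications.Xception(
        weights="imagenet", input_shape=(150, 150, 3), include_top=False
    )
    base_model.trainable = False  # freeze it: only the new layers get trained

    # New "last layer" stacked on top of the frozen base.
    inputs = keras.Input(shape=(150, 150, 3))
    x = base_model(inputs, training=False)  # keep batch-norm statistics frozen too
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # e.g. a binary label
    model = keras.Model(inputs, outputs)

    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )

    # Placeholder data standing in for your own (already licensed) images.
    images = np.random.rand(32, 150, 150, 3).astype("float32")
    labels = np.random.randint(0, 2, size=(32, 1))
    model.fit(images, labels, epochs=2, batch_size=8)

Once that converges you can unfreeze the base and keep training the whole thing at a very small learning rate, which is the actual "fine-tuning" step the guide describes.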
