>>dredmo (OP)
Surely, if the next Stable Diffusion had to be trained on a dataset purged of images not under a permissive license, that would at most be a minor setback on AI's road to obsoleting the kind of painting that is more craft than art. So what explains the backlash? Do artists not realise this (perhaps because of some conceit along the lines of "it only produces good-looking images because it is rearranging pieces of Real Artists' works it was trained on")? Are they hoping to inspire overshoot legislation (perhaps following the model the music industry obtained in several countries: AI-generated images presumed pirated until proven otherwise, with protection money paid to an artists' guild)? Or is this just a desperate rearguard action?