zlacker

[parent] [thread] 2 comments
1. people+(OP)[view] [source] 2022-12-15 13:06:41
PageRank doesn't reproduce any content claiming it's new.

You can, however, disallow Google from indexing your content using robots.txt, a meta tag in the HTML, or an HTTP header.

Or you can ask Google to remove it from their indexes.
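For reference, the three mechanisms look roughly like this (the meta tag and header apply per page, robots.txt per path):

```
# robots.txt — tell Googlebot not to crawl anything
User-agent: Googlebot
Disallow: /

<!-- meta tag in the page's <head> — keep it out of the index -->
<meta name="robots" content="noindex">

# equivalent HTTP response header (works for non-HTML files too)
X-Robots-Tag: noindex
```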

Your content will disappear from then on.

You can't un-train what's already been trained.

You can't disallow scraping for training.

The damage is already done and it's irreversible.

It's like trying to unbomb Hiroshima.

replies(1): >>CyanBi+c1
2. CyanBi+c1[view] [source] 2022-12-15 13:12:28
>>people+(OP)
That's actually interesting: adding metadata to the images as a check for allowing or disallowing AI usage.

That might be a good way to go about it
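As a toy sketch of the idea: PNG supports embedded text chunks, so an opt-out flag could ride along inside the file. The "ai-usage" keyword here is made up (no scraper currently honors it), and plain metadata like this won't survive screenshots or re-encoding.

```python
import struct
import zlib

def add_text_chunk(png_bytes: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt chunk (e.g. a hypothetical AI opt-out flag) into a PNG.

    The chunk is spliced in just before IEND, the mandatory final chunk.
    """
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # tEXt payload is keyword, NUL separator, then the text (Latin-1 only)
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    # CRC covers the chunk type and data, not the length field
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data))
    # IEND is always the last 12 bytes (zero length + type + CRC)
    return png_bytes[:-12] + chunk + png_bytes[-12:]
```

Reading the flag back is the same chunk walk in reverse; the hard part, as noted below, is making it survive transformations.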

replies(1): >>ben_w+2a
3. ben_w+2a[view] [source] [discussion] 2022-12-15 13:57:50
>>CyanBi+c1
If you can make the metadata survive cropping, format shifts, and screenshots.

You can probably do all that well enough (it probably doesn't need to be perfect) by leaning on FAANG, with or without legislation.

But: opt-in by default, or opt-out by default?
