
[return to "Tell HN: We should start to add “ai.txt” as we do for “robots.txt”"]
1. samwil+H5 2023-05-10 12:56:05
>>Jeanne+(OP)
Using robots.txt as a model for anything doesn't work. A robots.txt is nothing more than a polite request to follow the rules in it; there is no "legal" agreement to abide by those rules, only a moral imperative.

Robots.txt has failed as a system; if it hadn't, we wouldn't have captchas or Cloudflare.
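
For context, here's a rough sketch of what "honoring robots.txt" amounts to on the crawler side, using Python's stdlib urllib.robotparser (the site URL and user-agent name below are made up for illustration). The check happens entirely in the client, so a crawler that skips it faces no technical barrier at all:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # hypothetical site
    rp.read()  # fetch and parse the rules

    # The crawler *chooses* to ask; nothing stops it from ignoring the answer.
    if rp.can_fetch("ExampleBot", "https://example.com/private/page"):
        print("polite crawler fetches the page")
    else:
        print("polite crawler skips the page")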

In the age of AI we need to better understand how copyright applies to it, and we potentially need copyright reform to align legislation with what the public wants. We need test cases.

The thing I somewhat struggle with is that after 20-30 years of calls for shorter copyright terms and fewer restrictions on what you can do with publicly accessible content, we are now in a situation where the arguments are quickly leaning the other way. "We" now want stricter copyright law when it comes to AI, but at the same time shorter copyright duration...

In many ways an ai.txt would be worse than doing nothing as it's a meaningless veneer that would be ignored, but pointed to as the answer.

2. prepen+c9 2023-05-10 13:12:51
>>samwil+H5
> "We" now want stricter copyright law when it comes to AI, but at the same time shorter copyright duration...

While I’m sure others besides you share this opinion, I don’t think it’s as uniform a “we” as the more common “shorten/rationalize copyright terms and fair use” crowd.

I consider myself a knowledge worker and a pretty staunch proponent of FLOSS, and I’m perfectly fine with training AI on everything publicly available. While I create stuff, I don’t make a living off selling particular copies of things I make, so my self-preservation bias isn’t kicking in as much as it would for someone who does want to sell copies of their work.

But I also made some pretty explicit choices in the 90s based on where I thought IP would go, so I was never in a position where I had to sell copies to survive. My decision was pragmatic first and philosophical second.

I think someone entering the workforce now probably wants to align their livelihood with AI training on everything and not go against that. Even if US/Euro law limits training, there’s no way all other countries are going to, so it’s going to happen. And I don’t think it’s worth locking down the world to try to stop AIs from training on text, images, etc.

3. JohnFe+Na 2023-05-10 13:21:54
>>prepen+c9
Fair enough. But there should be some mechanism for people who don't want their works contributing to AI training to prevent that without having to resort to removing their works from the web.
4. prepen+Tf1 2023-05-10 18:10:11
>>JohnFe+Na
I think people who don’t want their content contributing to AI shouldn’t have it on the public web.

There are many ways to restrict access. Use one of them. But if you respond to an anonymous HTTP request with content, then it shouldn’t matter whether it’s a robot looking at it or a human (or a man or a woman or whatever).
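
As an illustration (purely a sketch of my own, not an established pattern; the token, port, and handler name are placeholders), one of the simplest such restrictions is to require a credential on every request, which shuts out humans and robots alike unless they hold the token:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SECRET = "let-me-in"  # hypothetical shared token

    class GatedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The server only sees the request; it can't reliably tell robot
            # from human, so it gates on the credential instead.
            if self.headers.get("Authorization") == "Bearer " + SECRET:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"restricted content\n")
            else:
                self.send_response(401)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), GatedHandler).serve_forever()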

I think this both for simplicity and because I foresee a future where human consciousness is simulated and is basically an AI. I don’t want rules under which biological humans can view content but digital humans can’t.
