Robots.txt has failed as a system; if it hadn't, we wouldn't have captchas or Cloudflare.
In the age of AI we need to better understand where copyright applies, and we may need copyright reform to align legislation with what the public wants. We need test cases.
The thing I somewhat struggle with is that after 20-30 years of calls for shorter copyright terms and fewer restrictions on what you can do with publicly accessible content, the arguments are now quickly leaning the other way. "We" now want stricter copyright law when it comes to AI, but at the same time shorter copyright duration...
In many ways an ai.txt would be worse than doing nothing: it would be a meaningless veneer, ignored in practice but pointed to as the answer.
In general, without a fair use exemption or permission signalled via robots.txt (see the sketch below), saving a copy of a website’s content to your own servers is copyright infringement.
Purely factual information like Amazon’s prices isn’t protected by copyright, but if you want to save artwork or source files to train AI, that’s a copyright issue even before you get into the possibility of your AI being considered a derivative work.
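For context on what "permission from robots.txt" amounts to mechanically, here is a minimal sketch using Python's standard urllib.robotparser; the bot name and URLs are placeholders, not anything from the discussion above. Note that nothing forces a crawler to run this check at all, which is rather the point about robots.txt being voluntary.

```python
from urllib import robotparser

# Hypothetical crawler name and target site, purely for illustration.
USER_AGENT = "ExampleAIBot"
SITE = "https://example.com"

# Fetch and parse the site's robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

# Check whether robots.txt asks this bot to stay away from given paths.
for path in ["/", "/artwork/some-image.png", "/prices.json"]:
    url = f"{SITE}{path}"
    if rp.can_fetch(USER_AGENT, url):
        print(f"robots.txt permits {USER_AGENT} to fetch {url}")
    else:
        print(f"robots.txt disallows {USER_AGENT} from fetching {url}")
```

Even when the check passes, robots.txt is only a request to crawlers, not an access control and not a copyright licence, so it can only ever signal permission rather than grant it.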