Robots.txt has failed as a system; if it hadn't, we wouldn't have captchas or Cloudflare.
In the age of AI we need to better understand where copyright applies, and potentially reform copyright law to align legislation with what the public wants. We need test cases.
The thing I somewhat struggle with is that after 20-30 years of calls for shorter copyright terms and fewer restrictions on what you can do with publicly accessible content, the arguments are now quickly leaning the other way. "We" now want stricter copyright law when it comes to AI, but at the same time shorter copyright duration...
In many ways an ai.txt would be worse than doing nothing: it would be a meaningless veneer, ignored in practice but pointed to as the answer.
Robots.txt has served the simple purpose of directing crawlers like Googlebot to the different parts of your website since the beginning of internet time.
It still serves the same purpose: it tells bots where to go and, most importantly, how to find your sitemap.
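For reference, a minimal robots.txt covering both roles looks something like this (the paths and sitemap URL are purely illustrative):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```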
Robots.txt is not there to prevent malicious crawlers from accessing pages, as you have suggested.
The robots.txt file acts simply like a garden gate. The good and honest bots will honor it, while the more malicious ones just ignore it and hop the fence.
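The "garden gate" point can be sketched in a few lines of Python using the standard library's `urllib.robotparser`. Note what this shows: the *client* checks the rules voluntarily; nothing on the server enforces them. The rules and URLs below are made up for illustration.

```python
# Sketch of how a well-behaved crawler consults robots.txt before fetching.
# Compliance is entirely voluntary -- a malicious crawler simply skips this step.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking one path for all user agents.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A polite crawler asks permission before each fetch:
print(parser.can_fetch("MyBot", "https://example.com/public/page"))   # True
print(parser.can_fetch("MyBot", "https://example.com/private/page"))  # False
```

A bad actor never calls `can_fetch` at all, which is exactly why robots.txt (or an ai.txt) can only ever gate the honest.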