Change for the sake of it?
It's kind of implied: specifying sitemaps, allowances, and copyright for different use cases (search, scraping, republishing, training, etc.), and perhaps standardizing some of the non-standard extensions: Crawl-delay, default host... even Sitemap isn't part of the robots.txt standard. A rough sketch of how those extensions are handled today is below.
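For illustration only, here is a minimal sketch using Python's stdlib robots.txt parser against a hypothetical file that mixes standard directives with the non-standard Crawl-delay, Host, and Sitemap lines. The file content and the "MyCrawler" user agent are made up; the parser calls are real stdlib APIs.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt combining standard directives with the
# non-standard extensions mentioned above (Crawl-delay, Host, Sitemap).
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
Host: example.com
Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Standard part of the protocol: per-user-agent allow/disallow.
print(rp.can_fetch("MyCrawler", "https://example.com/private/page.html"))  # False

# Non-standard, but widely honored and parsed by the stdlib:
print(rp.crawl_delay("MyCrawler"))  # 10
print(rp.site_maps())               # ['https://example.com/sitemap.xml'] (Python 3.8+)

# The Host line (a Yandex extension) is simply ignored by this parser,
# which is part of the point: these extensions have no single standard.
```

None of this expresses use-case-level intent (search vs. training vs. republishing), which seems to be the gap the proposal is aiming at.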
> We believe it’s time for the web and AI communities to explore additional machine-readable means for web publisher choice and control for emerging AI and research use cases.
[0] https://developers.google.com/search/docs/crawling-indexing/...
Maybe they want finer-grained control over page content, e.g. "you can index these pages but not these nodes" or "these nodes are also AI-generated, please ignore them". A sketch of what that could look like on the consumer side is below.
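Purely as a sketch of node-level opt-out from the crawler's side: strip elements carrying an opt-out attribute before the page is indexed or used for training. The "data-noai" attribute here is hypothetical, invented for illustration; the closest real, search-specific analogue is Google's "data-nosnippet".

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

HTML = """
<article>
  <p>Human-written summary that may be indexed.</p>
  <p data-noai="true">AI-generated draft section, please ignore.</p>
</article>
"""

soup = BeautifulSoup(HTML, "html.parser")

# Remove every node that opts out, along with its children.
for node in soup.find_all(attrs={"data-noai": True}):
    node.decompose()

print(soup.get_text(" ", strip=True))
# -> "Human-written summary that may be indexed."
```

Whether anything like this gets standardized is another question; robots.txt only ever speaks in URLs, not in nodes.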
Otherwise I don't know; robots.txt isn't sexy, but it definitely does the job.