Google also introduced XML sitemaps (and noindex), so technically I could see robots.txt being consolidated into sitemapindex.xml with additional attributes. But it's not clear why, or why now. Are there new requirements from Google's point of view, such as tagging content as machine-generated? I'm not sure that opening this discussion and changing the legal status of whether something may be crawled will end well for Google, i.e. from a copyright perspective (which would require explicit, individual consent), and in particular by entering the legal terra incognita of moral rights vs. generative AI. But maybe blocking competitors/AI startups is what they're after?
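
Purely as a sketch of what I mean by "consolidated": something like the snippet below, where the sitemapindex format stays as-is but gains per-sitemap attributes. The crawl, ai-training and machine-generated attributes here are entirely my invention, not anything Google has proposed.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical: a sitemap index carrying crawl-permission and
         content-provenance attributes that robots.txt covers (or can't cover) today -->
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- human-written content: crawlable, but opted out of AI training (made-up attribute) -->
      <sitemap crawl="allow" ai-training="deny">
        <loc>https://example.com/sitemap-articles.xml</loc>
        <lastmod>2023-07-01</lastmod>
      </sitemap>
      <!-- machine-generated content flagged as such (made-up attribute) -->
      <sitemap crawl="deny" machine-generated="true">
        <loc>https://example.com/sitemap-generated.xml</loc>
      </sitemap>
    </sitemapindex>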