You can, however, disallow Google from indexing your content using robots.txt, a meta tag in the HTML, or an HTTP header.
Or you can ask Google to remove it from their indexes.
Your content will disappear from search results from then on.
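Concretely, those three mechanisms look roughly like the sketch below (the directives themselves are standard; the path and user-agent are just placeholders, and note that robots.txt only blocks crawling, while noindex blocks indexing):

```
# robots.txt — tells Googlebot not to crawl these paths
User-agent: Googlebot
Disallow: /private/

<!-- meta tag in the page's <head> — tells Google not to index this page -->
<meta name="robots" content="noindex">

# HTTP response header — same effect as the meta tag, works for non-HTML files like PDFs
X-Robots-Tag: noindex
```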
You can't un-train what's already been trained.
You can't disallow scraping for training.
The damage is already done and it's irreversible.
It's like trying to unbomb Hiroshima.
A tool that catalogues attributed links can't really be evaluated the same way as a pastiche machine.
You'd be much closer using the example of Google's first-page answer snippets, which are pulled from a site's content with minimal attribution.
That might be a good way to go about it.
Can probably do all that well enough (it probably doesn't need to be perfect) by leaning on FAANG, with or without legislation.
But: opt-in by default, or opt-out by default?