All 'adversarial compatibility' from projects like Nitter, Teddit, Invidious, and youtube-dl goes out the window. Any archive site (archive.org, archive.ph, etc.) can be blocked by sites requiring attestation.
And just like the book industry was terrified of piracy and was 'rescued' by the Kindle, so too will journalism outlets that can't find a business model flock to Google to save them.
This is going to be rough.
If such a thing actually happens, the underground market for "trusted device" farms will grow. That's not too different from what's already happening today, just potentially at a far larger scale. Of course, it means the financially motivated scraping services keep going while the honest individuals who want user-agent freedom get screwed, just like with many other forms of DRM...
Uhh... Those two matters are pretty much unrelated. Scraping is becoming nonexistent because the era of static web pages has ended. No need to "scrape" when you have a nice, performant JSON REST API provided for you.
When was the last time you saw a site with a JSON API providing metadata, like the JSON-LD for a product on an e-commerce site? Or an API just for the Open Graph data? How would you even discover these APIs for sites you don't own?
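To make that concrete: "discovering" such metadata in practice means fetching the HTML and digging the JSON-LD out of script tags yourself, because there's no API to ask. A minimal sketch, with a made-up product URL:

```python
# Extract JSON-LD blocks from a page's HTML -- the only way to get this
# metadata when no API exists. The URL is a hypothetical placeholder.
import json
import urllib.request
from html.parser import HTMLParser

class JsonLdParser(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False

    def handle_data(self, data):
        if self._in_ld and data.strip():
            try:
                self.blocks.append(json.loads(data))
            except json.JSONDecodeError:
                pass  # malformed block; a real scraper would log or buffer

html = urllib.request.urlopen("https://shop.example.com/product/123").read().decode("utf-8")
parser = JsonLdParser()
parser.feed(html)
for block in parser.blocks:
    if isinstance(block, dict):
        print(block.get("@type"), "-", block.get("name"))
```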
It's also worth noting that very, very few JSON APIs today are actually REST. They rarely include the hypermedia links and context a client needs to navigate them, and in general JSON is much less useful than XML when you're talking to APIs you don't own, since JSON has no widely adopted equivalent of XML Schema for describing the shape and datatypes of the content.
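A toy illustration of that missing context, using an invented order resource: the first payload is what most "REST" JSON APIs return, the second carries the hypermedia links (HATEOAS) that REST as originally described actually requires.

```python
# Typical JSON API response: the client has to hard-code which URLs exist
# and which actions are valid in each state.
typical = {
    "id": 42,
    "status": "shipped",
}

# Hypermedia-driven response: the representation itself tells the client
# where it can go next, so the server can evolve URLs and workflows.
hypermedia = {
    "id": 42,
    "status": "shipped",
    "_links": {
        "self": {"href": "/orders/42"},
        "tracking": {"href": "/orders/42/tracking"},
        "cancel": {"href": "/orders/42/cancel"},  # present only when allowed
    },
}
```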
There are no performant JSON REST APIs provided these days, though. The days of public APIs are long gone.
In practice, if there is a mobile app, there is an API. Whether its creators object to your usage is mostly their problem.
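As an example, here's roughly what calling such an app-backed endpoint looks like once you've watched the app's traffic through a proxy like mitmproxy. The endpoint, parameters, and headers are hypothetical stand-ins, not any real service's API.

```python
# Replay the kind of JSON request a mobile app makes, mimicking its headers.
# Everything below (URL, User-Agent string) is an invented example.
import json
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v2/feed?page=1",
    headers={
        "User-Agent": "ExampleApp/7.3.1 (Android 14)",  # what the app sends
        "Accept": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    feed = json.load(resp)

print(len(feed.get("items", [])), "items on page 1")
```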