Never mind the other aspect the pat advice doesn’t mention - managing a massive single RDBMS is a goddamn nightmare. At very large scale they are fragile, temperamental beasts. Backups, restores, and upgrades all become hard. Migrations become a dark art, often taking down the db despite your best understanding. Errant queries stall the whole server; tiny subtleties in index semantics do the same. Yes, it’s all solvable with a lot of skill, but it ain’t a free lunch, that’s for sure. And it tends to become a HUGE drag on innovation, as any change to the db becomes risky.
To your other point: yes, replicating data “like for like” into another RDBMS can be cheap. But in my experience this domain data extraction is often taken as an opportunity to move it onto a non-RDBMS data store that gives you specific advantages matching that domain, so you don’t hit scaling problems again. That takes significantly longer. But yes, I am perhaps unfairly including all the domain separation and “datastore flavor change” work in those numbers.
I think this kind of anticipation was part of Pinterest's early success, for example. They got ahead of their database scaling early and were able to focus on the product and UX.
e.g. if it looks key-value-ish, or key + timestamp (e.g. a user transaction table), DynamoDB is incredible. Scales forever, and you never have to think about operations. But it’s not generally queryable like pg.
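To make the key + timestamp shape concrete, here’s a rough sketch with boto3 - table, attribute, and id names are all made up, the point is just that reads stay single-partition range queries:

```python
# Minimal sketch of the key + timestamp pattern on DynamoDB (boto3).
# "user_transactions", "user_id", "created_at" etc. are hypothetical names.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user_transactions")  # partition key: user_id, sort key: created_at

# Write one transaction record keyed by user + timestamp.
table.put_item(Item={
    "user_id": "u_123",
    "created_at": "2024-05-01T12:00:00Z",
    "amount_cents": 4999,
})

# Fetch a user's recent transactions: a single-partition range query,
# which is exactly the access pattern DynamoDB scales without drama.
resp = table.query(
    KeyConditionExpression=Key("user_id").eq("u_123")
    & Key("created_at").begins_with("2024-05"),
    ScanIndexForward=False,  # newest first
)
for item in resp["Items"]:
    print(item["created_at"], item["amount_cents"])
```

The flip side is the “not generally queryable like pg” part: anything that isn’t expressed through the key (or a secondary index you planned for) turns into a scan.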
if it looks event-ish or log-ish, offload it to Redshift/Snowflake/Bigtable. But those are append-only & eventually consistent.
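A hypothetical sketch of that append-only offload, using Redshift as the example (bucket, table, role, and cluster names are placeholders): batch events to S3, then load with COPY on a schedule. The data only becomes queryable once the load lands, which is the eventual-consistency tradeoff.

```python
# Hypothetical event offload: append batches to S3, COPY into Redshift later.
import json
import boto3

events = [
    {"event": "page_view", "user_id": "u_123", "ts": "2024-05-01T12:00:00Z"},
    {"event": "click", "user_id": "u_456", "ts": "2024-05-01T12:00:03Z"},
]

# 1) Append the batch to S3 as newline-delimited JSON.
s3 = boto3.client("s3")
body = "\n".join(json.dumps(e) for e in events)
s3.put_object(
    Bucket="my-event-archive",
    Key="events/2024-05-01/batch-0001.json",
    Body=body,
)

# 2) On a schedule, COPY the files into the warehouse.
copy_sql = """
    COPY analytics.events
    FROM 's3://my-event-archive/events/2024-05-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
    FORMAT AS JSON 'auto';
"""

# Run it via the Redshift Data API (or any SQL client connected to the cluster).
rsd = boto3.client("redshift-data")
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="prod",
    DbUser="loader",
    Sql=copy_sql,
)
```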
if you really need distributed global mutations, and are willing to pay for it in latency, Spanner is great.
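Roughly what that looks like with the google-cloud-spanner client - instance, database, and table names are made up here:

```python
# Sketch of a globally consistent read-write transaction in Cloud Spanner.
# "prod-instance", "orders-db", and the accounts table are hypothetical.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("prod-instance")
database = instance.database("orders-db")

def transfer(transaction):
    # Both updates commit atomically with external consistency, even if the
    # replicas live on different continents - the price is commit latency
    # while the quorum coordinates.
    transaction.execute_update(
        "UPDATE accounts SET balance = balance - 100 WHERE id = @src",
        params={"src": "acct_1"},
        param_types={"src": spanner.param_types.STRING},
    )
    transaction.execute_update(
        "UPDATE accounts SET balance = balance + 100 WHERE id = @dst",
        params={"dst": "acct_2"},
        param_types={"dst": spanner.param_types.STRING},
    )

database.run_in_transaction(transfer)
```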
if you can cleanly tenant or shard your data and there’s little-to-no cross-shard querying, then Vitess or some other RDBMS shard automation layer can work.
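A rough sketch of why the “little-to-no cross-shard querying” caveat matters: Vitess fronts your MySQL shards with vtgate, which speaks the MySQL protocol, so the app code barely changes - but queries that carry the shard key route to one shard, while queries that don’t fan out to all of them. Host, keyspace, and column names below are placeholders.

```python
# Hypothetical app code talking to Vitess through vtgate's MySQL-compatible endpoint.
import pymysql

conn = pymysql.connect(
    host="vtgate.internal",
    port=3306,
    user="app",
    password="***",
    database="commerce",  # keyspace sharded by tenant_id (assumed)
)
with conn.cursor() as cur:
    # Single-shard: vtgate routes this to whichever shard owns tenant 42.
    cur.execute("SELECT id, total FROM orders WHERE tenant_id = %s", (42,))
    rows = cur.fetchall()

    # A query without the shard key would fan out to every shard -
    # exactly the pattern you want to keep rare:
    # cur.execute("SELECT COUNT(*) FROM orders WHERE total > 1000")
```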
There are a few "postgres but distributed" dbs maturing now, like CockroachDB - I haven’t personally used them at a scale where I could tell you whether it actually works or not, though. AFAIU these systems still have tradeoffs around table layout and access patterns that you have to think about.
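Part of the appeal is that CockroachDB speaks the Postgres wire protocol, so the client side is just an ordinary Postgres connection - a tiny sketch with psycopg2, host and credentials being placeholders:

```python
# CockroachDB over the Postgres wire protocol - a plain psycopg2 connection works.
import psycopg2

conn = psycopg2.connect(
    "postgresql://app@crdb.internal:26257/bank?sslmode=require"  # hypothetical DSN
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, balance FROM accounts WHERE id = %s", ("acct_1",))
    print(cur.fetchone())

# The caveat above still applies: primary key choice and access patterns
# determine how well the data spreads across ranges/nodes.
```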