* Every database a Postgres 1: Key/Value store
* Every database a Postgres 2: Document stores
* Every database a Postgres 3: Logs (Kafka-esque)
* Every database a Postgres 4: Timeseries
* Every database a Postgres 5: Full Text Search
* Every database a Postgres 6: Message Queues
Low-key, you could build almost every type of database a modern startup needs out of Postgres, and get the benefits (and drawbacks) of Postgres everywhere.
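For a taste of two of those, here's a rough sketch (table and column names are made up): a JSONB column with a GIN index gets you most of a document store, and FOR UPDATE SKIP LOCKED gets you a perfectly serviceable job queue.

```sql
-- Document store: a JSONB column plus a GIN index covers most
-- of what you'd reach for a document database for.
CREATE TABLE docs (
    id  bigserial PRIMARY KEY,
    doc jsonb NOT NULL
);
CREATE INDEX docs_doc_idx ON docs USING gin (doc);

INSERT INTO docs (doc) VALUES ('{"user": "alice", "tags": ["pg"]}');
SELECT doc FROM docs WHERE doc @> '{"user": "alice"}';  -- containment query

-- Message queue: SKIP LOCKED (9.5+) lets concurrent workers
-- each claim a different row without blocking each other.
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- Worker: atomically claim and delete the oldest available job.
DELETE FROM jobs
WHERE id = (
    SELECT id FROM jobs
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING payload;
```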
Should you do it? Probably not. Is it good enough for a theoretical ~70% of the startups out there who really don't shuffle around too much data or need to pretend to do any hyperscaling? Maybe.
If anyone from 2ndQuadrant/Citus/EDB sees this, please do a series like this, make the solutions open source, and I bet we'd get some pretty decent performance out of Postgres compared to the purpose-built solutions (remember, TimescaleDB did amazingly well against InfluxDB, a purpose-built tool, not too long ago).
New features like custom table access methods also shift the capabilities of Postgres a ton. I'm fairly certain I could write a table access method that "just" allocated some memory and handed it to a Redis subprocess (or even a compiled-in version) to use.
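The handler itself would have to be a C extension; this is only the SQL plumbing, and the extension/handler names here are hypothetical, but the hook is real as of Postgres 12.

```sql
-- Hypothetical C extension exposing a table access method
-- backed by an in-memory / Redis-style store.
CREATE EXTENSION mem_tableam;

-- Register its handler function as a table access method.
CREATE ACCESS METHOD memam
    TYPE TABLE HANDLER mem_tableam_handler;

-- Individual tables opt in to the alternate storage engine:
CREATE TABLE hot_cache (k text NOT NULL, v text) USING memam;
```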
[EDIT] - It's not obvious here, but the listing is in Emacs Org mode; those bullet points are expandable and I have tons of notes under each one (e.g. time series has lots of activity in Postgres -- TimescaleDB, native partitioning, Citus, etc.). Unfortunately the first bullet point is 43 (!) bullet points down. If someone wants to fund my yak shaving, reach out; otherwise, someone signal-boost this to 2Q/Citus/EDB so professionals can take a stab at it.
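As a taste of the time-series notes: plain declarative partitioning already gets you a long way, and TimescaleDB's hypertables build on the same idea (names below are made up).

```sql
-- Time series with stock Postgres: range-partition on the
-- timestamp so old data can be dropped by detaching partitions.
CREATE TABLE metrics (
    ts     timestamptz NOT NULL,
    device text NOT NULL,
    value  double precision NOT NULL
) PARTITION BY RANGE (ts);

CREATE TABLE metrics_2021_06 PARTITION OF metrics
    FOR VALUES FROM ('2021-06-01') TO ('2021-07-01');

CREATE INDEX ON metrics (device, ts DESC);

-- With the TimescaleDB extension installed, the equivalent is a
-- hypertable, which manages the chunking for you:
-- SELECT create_hypertable('metrics', 'ts');
```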
[EDIT2] - I forgot some; Postgres actually has:
- Graph support, w/ AgensGraph now known as AGE[0]
- OLAP workloads with Citus Columnar[1] (and zedstore[2]).
[0]: https://github.com/apache/age
[1]: https://www.citusdata.com/blog/2021/03/05/citus-10-release-o...
[2]: https://github.com/greenplum-db/zedstore
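Both of those are a CREATE EXTENSION away. Roughly (graph and table names are made up):

```sql
-- Graph queries with AGE: openCypher embedded in SQL.
CREATE EXTENSION age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

SELECT create_graph('social');
SELECT * FROM cypher('social', $$
    CREATE (:Person {name: 'alice'})-[:KNOWS]->(:Person {name: 'bob'})
$$) AS (v agtype);

SELECT * FROM cypher('social', $$
    MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name
$$) AS (a agtype, b agtype);

-- Columnar OLAP with Citus 10: just pick the access method.
CREATE EXTENSION citus;
CREATE TABLE events (ts timestamptz, payload jsonb) USING columnar;
```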
As things stand today, implementing everything in Postgres is probably going to be slower to market (e.g. for a start-up) than using off-the-shelf products. When you do need to scale, that's when you get to learn how valid the assumptions in your abstraction layer were - most likely in production. As a concrete example, Kafka isn't designed to work well with large numbers of topics. Similarly, InfluxDB isn't designed to work well with high-cardinality time series. I think it is generally wiser to "skate where the puck is going" in this situation.
Of course, everything is a trade-off. Postgres is incredibly reliable (like, insanely so) and simple to operate. I'd say for any kind of internal line-of-business application where scalability is less of a concern, you really would be doing your ops team a service by implementing everything in Postgres.