You are part of The Problem.
This is a solo dev's venture with a relatively pure and straightforward goal. If you can't afford it, don't use it and pick one of the others.
Do NOT compare this with a B2C offering that has nothing to do with analytics.
Do NOT compare this with a B2B offering that's free and feeds your users' data into the parent corporation's advertising revenue stream.
Do NOT compare this with a B2B offering that is open-source, with a dozen core contributors and a decade of development under its belt.
Plus, I have zero confidence that someone using a naive postgres implementation can scale an analytics backend on customers paying only $12/mo unless all those customers get barely any traffic. Perhaps if he were using Timescale on top of postgres, but even then, $12/mo seems awfully low.
But as it is, the price point signals that he doesn't think it's a particularly valuable service.
By 2014, when I left, we had a few petabytes of analytics data for a very small but high-traffic set of customers. Could we query all of that at once within a reasonable online SLA? No. We partitioned and sharded the data easily and only queried the partitions we needed.
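To make that concrete, here's a toy sketch of partition pruning: one table per month, and a query only touches the months it needs. Everything here (schema, names, sqlite via python) is made up to keep it self-contained; it illustrates the idea, it's not what we actually ran.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    def partition_name(year: int, month: int) -> str:
        # One physical table per month, e.g. hits_2014_01.
        return f"hits_{year}_{month:02d}"

    def ensure_partition(year, month):
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {partition_name(year, month)} "
            "(ts INTEGER, customer_id INTEGER, path TEXT)"
        )

    def insert_hit(year, month, ts, customer_id, path):
        ensure_partition(year, month)
        conn.execute(
            f"INSERT INTO {partition_name(year, month)} VALUES (?, ?, ?)",
            (ts, customer_id, path),
        )

    def count_hits(customer_id, months):
        # Prune: only scan the partitions the query actually needs,
        # instead of one giant table covering all of history.
        total = 0
        for (year, month) in months:
            ensure_partition(year, month)
            (n,) = conn.execute(
                f"SELECT COUNT(*) FROM {partition_name(year, month)} "
                "WHERE customer_id = ?", (customer_id,)
            ).fetchone()
            total += n
        return total

    insert_hit(2014, 1, 1389000000, 42, "/pricing")
    insert_hit(2014, 2, 1391700000, 42, "/docs")
    print(count_hits(42, [(2014, 1), (2014, 2)]))  # -> 2

Postgres gives you the same thing natively with declarative partitioning, and sharding across machines is the same trick one level up.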
If I were to do this now and didn't need near real-time (what is real-time, anyway?), I'd use sqlite. Otherwise I'd use trickle-n-flip on postgres or mysql. There are literally 10+ year-old books[1] on this wrt RDBMS.
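For anyone who hasn't seen it, trickle-n-flip roughly means: trickle new rows into a staging table at whatever pace you can afford while readers keep hitting the live table, then flip staging live with a near-instant rename so nobody ever sees a half-loaded table. A minimal sketch, again with made-up table names and sqlite standing in for postgres/mysql:

    import sqlite3

    # Autocommit mode; we issue BEGIN/COMMIT ourselves for the flip.
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE daily_rollup (day TEXT, hits INTEGER)")

    def trickle(rows):
        # Slow, cheap loading into staging; queries still hit daily_rollup.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS daily_rollup_staging "
            "(day TEXT, hits INTEGER)"
        )
        conn.executemany(
            "INSERT INTO daily_rollup_staging VALUES (?, ?)", rows
        )

    def flip():
        # The flip itself is two cheap renames in one transaction.
        conn.execute("BEGIN")
        conn.execute("ALTER TABLE daily_rollup RENAME TO daily_rollup_old")
        conn.execute("ALTER TABLE daily_rollup_staging RENAME TO daily_rollup")
        conn.execute("DROP TABLE daily_rollup_old")
        conn.execute("COMMIT")

    trickle([("2014-01-01", 1000), ("2014-01-02", 1200)])
    flip()
    print(conn.execute("SELECT * FROM daily_rollup").fetchall())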
And yes, even with 2000 clients generating billions of requests per day, only the top few stressed the system. The rest was long tail.
1. https://www.amazon.com/Data-Warehousing-Handbook-Rob-Mattiso...