But once you get up to even, say, 40 messages per second with 100 worker processes, you're already at 4,000 updates per second just to sort out which worker got to claim which job, and it only gets worse from there.
That’s what our workload is like for our SaaS code analysis platform. We create a few tasks (~10 max) for every customer submission (usually triggered by a code push). We replaced Kafka with a PostgreSQL table a couple of years ago.
We made the schema, functions, and Grafana dashboard open source [0]. I think it’s slightly out of date, but it mostly matches what we have in production now, and that has been running perfectly.
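For anyone wondering how a plain Postgres table avoids the claim-contention problem at this kind of scale: the usual trick is `FOR UPDATE SKIP LOCKED`. I can't speak to the exact function in the linked repo, but a generic claim query (hypothetical `jobs` table and column names) looks something like:

```sql
-- Generic sketch, not the linked repo's actual schema: each worker
-- atomically claims one pending job. SKIP LOCKED makes concurrent
-- workers skip rows another worker is already claiming instead of
-- blocking on them, so claims don't pile up into lock contention.
UPDATE jobs
SET    status = 'running',
       claimed_at = now()
WHERE  id = (
    SELECT id
    FROM   jobs
    WHERE  status = 'pending'
    ORDER  BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT  1
)
RETURNING id, payload;
```

Each worker just runs this in a loop (or after a `LISTEN`/`NOTIFY` wake-up), and no coordination beyond the database itself is needed.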