Offloading heavy work to a background job is about as old a best practice as exists in modern software engineering.
This is especially important for the kind of API and/or web development that a large number of people on this site are involved in. By offloading expensive work, you take it out of band of the request that generated it, making that request faster and the user experience far better.
Example: user sign-up where you want to send a verification email. Talking to a foreign API like Mailgun might be a 100 ms to multi-second (worst case) operation, so why make the user wait on that? Instead, send it to the background and give them a tight < 100 ms sign-up experience that's so fast that, for all intents and purposes, it feels instant.
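Roughly what that looks like, as a minimal runnable sketch: the in-process queue, worker thread, and fake Mailgun delay are all stand-ins for whatever queue and email provider you actually use.

    import queue
    import threading
    import time

    jobs = queue.Queue()

    def send_verification_email(email):
        # Stand-in for the Mailgun/third-party call: 100 ms to multiple seconds.
        time.sleep(0.5)
        print(f"verification email sent to {email}")

    def worker():
        # Background worker: pulls jobs off the queue and runs them out of band.
        while True:
            job_name, args = jobs.get()
            if job_name == "send_verification_email":
                send_verification_email(*args)
            jobs.task_done()

    def sign_up(email):
        # Cheap local work (the INSERT for the new user, etc.) would go here.
        jobs.put(("send_verification_email", (email,)))  # enqueue and return immediately
        return {"status": "ok"}

    threading.Thread(target=worker, daemon=True).start()
    print(sign_up("new.user@example.com"))  # returns right away
    jobs.join()  # demo only: wait for the background send before the script exits

The point is just that sign_up() returns as soon as the job is queued; the slow send happens on the worker's time, not the user's.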
Yes. I am intimately familiar with background jobs. In fact I've been using them long enough to know, without hesitation, that you don't use a relational database as your job queue.
I wonder if maybe you've limited yourself by assuming relational DBs only have features for relational data. That isn't the case now, and really hasn't been for quite some time.
I'm also very familiar with jobs and I have used the usual tools like Redis and RMQ, but I wouldn't make a blanket statement like that. There are people using RDBMSs as queues in prod, so we have some counter-examples. I wouldn't mind at all getting rid of another system (not just one server, but the cluster of RMQ/Redis you need for HA). If there's a big risk in using pg as the backend for a task queue, I'm all ears.
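For what it's worth, the usual answer to the lock-contention risk is FOR UPDATE SKIP LOCKED, which Postgres has had since 9.5. A minimal sketch, assuming a hypothetical jobs table with id/payload/state columns and psycopg2 as the driver:

    import psycopg2

    # Claim the oldest available job. SKIP LOCKED means concurrent workers never
    # block on a row another worker has already locked; they just take the next one.
    CLAIM_SQL = """
        UPDATE jobs
           SET state = 'running'
         WHERE id = (SELECT id
                       FROM jobs
                      WHERE state = 'available'
                      ORDER BY id
                      LIMIT 1
                        FOR UPDATE SKIP LOCKED)
     RETURNING id, payload;
    """

    def claim_next_job(conn):
        """Atomically claim one job, or return None if the queue is idle."""
        with conn.cursor() as cur:
            cur.execute(CLAIM_SQL)
            row = cur.fetchone()
        conn.commit()
        return row  # (id, payload) or None

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    print("claimed:", claim_next_job(conn))

Workers that lose the race skip the locked row rather than queueing up behind it, which is the main thing that made the naive "poll the table" designs painful before SKIP LOCKED existed.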
In theory, an append-only and/or HOT (heap-only tuple) strategy leaning on Postgres just ripping through moderate-sized in-memory lists could be incredibly fast. The design would be more complicated and perhaps use-case dependent, but I bet it could be done.
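One reading of that HOT idea, sketched under the same hypothetical schema as the example above: keep the hot-path state column out of every index and leave headroom in each page via fillfactor, so claiming and finishing a job can be a heap-only-tuple update that never touches an index.

    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS jobs (
        id       bigserial PRIMARY KEY,
        payload  jsonb NOT NULL,
        state    text  NOT NULL DEFAULT 'available'
    ) WITH (fillfactor = 50);  -- leave free space in every page for HOT updates
    -- Deliberately no index on state: HOT updates only apply when no indexed
    -- column changes, so flipping state available -> running -> done can stay
    -- heap-only and never touch the primary-key index.
    """

    with psycopg2.connect("dbname=app") as conn:  # hypothetical connection string
        with conn.cursor() as cur:
            cur.execute(DDL)

Whether that beats a dedicated broker really is use-case dependent, but it's the kind of knob that makes "Postgres as a queue" less naive than it sounds.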