For fire-and-forget jobs you can use lists instead of pub/sub: a producer pushes a job onto a list, and a consumer pops it off the other end and executes it. It also scales trivially: just start more producers and consumers.
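Roughly, a minimal sketch of that pattern with redis-py; the queue key name `jobs`, the JSON payload format, and the `execute` handler are all made up for illustration:

```python
# Sketch of a Redis list-based job queue using redis-py.
import json
import redis

r = redis.Redis()

def produce(task, payload):
    # LPUSH adds the job to the head of the list.
    r.lpush("jobs", json.dumps({"task": task, "payload": payload}))

def consume():
    while True:
        # BRPOP blocks until a job is available, then pops from the
        # tail, so jobs come out in FIFO order.
        _key, raw = r.brpop("jobs")
        execute(json.loads(raw))

def execute(job):
    # Stub handler; a real worker would dispatch on job["task"].
    print("running", job["task"])
```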
We're currently using this technique to process ~2M jobs per day, and we're just getting started. Redis needs very little memory for this, just a few MB.
Redis also supports ACID-style transactions via MULTI/EXEC.
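In redis-py that looks something like the sketch below: a transactional pipeline wraps the queued commands in MULTI/EXEC so they apply atomically. The key names are invented for illustration.

```python
import redis

r = redis.Redis()

# Commands queued on a transactional pipeline are sent as a single
# MULTI/EXEC block; other clients never observe a partial state.
with r.pipeline(transaction=True) as pipe:
    pipe.lpush("jobs", "job-123")
    pipe.incr("jobs:enqueued")
    pipe.execute()
```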
At least with Postgres you can scale up trivially: Postgres will efficiently take advantage of as many cores as you give it. For scale-out you will need to move to a purpose-built queuing solution.
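The comment doesn't spell out the mechanics, but the usual way to make a Postgres queue exploit those cores is `FOR UPDATE SKIP LOCKED`, which lets many workers claim jobs concurrently without blocking each other. A rough sketch with psycopg2; the table, columns, connection string, and `run` handler are all assumptions:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")

def claim_and_run_one():
    with conn:  # commits on success, rolls back on exception
        with conn.cursor() as cur:
            # SKIP LOCKED skips rows already claimed by other workers,
            # so concurrent workers never wait on each other's locks.
            cur.execute(
                """
                DELETE FROM jobs
                WHERE id = (
                    SELECT id FROM jobs
                    ORDER BY id
                    FOR UPDATE SKIP LOCKED
                    LIMIT 1
                )
                RETURNING id, payload
                """
            )
            row = cur.fetchone()
            if row is None:
                return False  # queue empty
            run(*row)
            return True

def run(job_id, payload):
    # Stub handler standing in for real job execution.
    print("running job", job_id)
```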