The article below covers some downsides of using postgres as a job queue: namely, that its data model isn't optimized for finding new jobs and dispatching them to large numbers of subscribers. Dedicated messaging systems like Kafka or NATS are better at this.
https://www.2ndquadrant.com/en/blog/what-is-select-skip-lock...
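For context, the pattern the article discusses is the SKIP LOCKED dequeue. A minimal sketch, assuming a hypothetical `jobs` table (the table and column names here are illustrative, not from the article):

```sql
-- Atomically claim one queued job, skipping rows other workers
-- currently hold locks on, so concurrent workers never block
-- each other or grab the same job.
UPDATE jobs
SET status = 'processing', locked_at = now()
WHERE id = (
  SELECT id FROM jobs
  WHERE status = 'queued'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
```

The inner SELECT takes a row lock; SKIP LOCKED makes contending workers pass over locked rows instead of queueing behind them, which is the part plain `SELECT ... FOR UPDATE` gets wrong for this use case.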
Also, there are things that the dedicated messaging systems give you that postgres won't, such as transparent error handling and retry logic. If your worker grabs a job from postgres and fails (in some transient way such that the task should be retried), you'll have to implement the retry logic yourself. A streaming platform like Kafka or NATS will notice that you haven't acknowledged the message and redeliver it to another consumer.
This is something you could pretty easily implement yourself with something like a process that just scans over the "processing" jobs and looks for timeouts, resetting them as it goes. But there are probably a few of these little papercuts that Kafka-like (…Kafkaesque?) systems would handle for you.
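That timeout scanner could be as small as a single statement run periodically (via cron or a loop in a worker). Again a sketch against the same hypothetical schema, with an assumed `attempts` counter so you can dead-letter jobs that keep failing:

```sql
-- Requeue any job stuck in 'processing' past its deadline.
-- The 5-minute lease and the attempts column are illustrative choices.
UPDATE jobs
SET status = 'queued',
    locked_at = NULL,
    attempts = attempts + 1
WHERE status = 'processing'
  AND locked_at < now() - interval '5 minutes';
```

A companion query can move rows with `attempts` above some threshold to a failed state instead of retrying them forever.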
So, I guess if you already have postgres set up in a reliable way, and there's no one around whose job it is to set up new production systems like Kafka, and you don't already have something like Kafka, and you only need basic job queue requirements or you're okay with implementing whatever you need on top of postgres yourself, and your incoming job rate is fewer than maybe a thousand per second, and you have fewer than maybe tens of consumers… postgres is probably a decent job queue.
The above paragraph seems snide but it probably does describe many people's environments.
Of course, the database client and server together form a distributed system. The client might continue processing a job under the mistaken impression that it still holds the lock. Jobs still need to be idempotent, as with the streaming platforms.
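One way to contain that failure mode is a fencing check at completion time: the worker stores a claim token when it grabs the job, and the final status update only succeeds if the claim is still its own. Sketch, with a hypothetical `claim_token` column:

```sql
-- $1 = job id, $2 = the token this worker wrote when claiming the job.
-- If the timeout scanner has since reset the row (or another worker
-- reclaimed it), this matches zero rows and the stale worker learns
-- it lost the lease.
UPDATE jobs
SET status = 'done'
WHERE id = $1
  AND claim_token = $2
  AND status = 'processing';
```

This doesn't prevent the stale worker's side effects from having happened, though, which is why the jobs themselves still need to be idempotent.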