If you're already using Postgres, you avoid the operational complexity of introducing another database. Less operational complexity means better availability.
You can modify jobs and the rest of your database atomically. For example, you can create a row and enqueue a job to process it in a single transaction, as in the sketch below.
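A minimal sketch of what that looks like, assuming a hypothetical documents table and jobs table (the names and schema are mine, not from the article):

    -- Hypothetical schema: application data plus a jobs queue table.
    CREATE TABLE documents (
        id   bigserial PRIMARY KEY,
        body text NOT NULL
    );

    CREATE TABLE jobs (
        id          bigserial   PRIMARY KEY,
        document_id bigint      NOT NULL REFERENCES documents (id),
        status      text        NOT NULL DEFAULT 'pending',
        created_at  timestamptz NOT NULL DEFAULT now()
    );

    -- Enqueue atomically: the row and its job commit or roll back together,
    -- so a worker can never see a job whose row is missing (or vice versa).
    BEGIN;
    WITH doc AS (
        INSERT INTO documents (body) VALUES ('raw upload') RETURNING id
    )
    INSERT INTO jobs (document_id) SELECT id FROM doc;
    COMMIT;

And the usual Postgres idiom for the worker side (again my sketch, not the article's code) is FOR UPDATE SKIP LOCKED, so concurrent workers don't block each other:

    -- Claim one pending job; rows locked by other workers are skipped.
    UPDATE jobs
    SET status = 'running'
    WHERE id = (
        SELECT id FROM jobs
        WHERE status = 'pending'
        ORDER BY id
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, document_id;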
Granted, I didn't even read the main article because the headline seemed so casual.
Edit post-read: yeah, he's using it as a CI jobs database. He lists the alternatives, but seriously, Kafka? Kafka is for linearly scaling pub/sub. This guy has a couple of CI jobs that run infrequently.
Sure, this works if the entire thing is a throwaway, non-critical pub/sub system.
"It's possible to scale Postgres to storing a billion 1KB rows entirely in memory - This means you could quickly run queries against the full name of everyone on the planet on commodity hardware and with little fine-tuning."
Yeah, just because it can doesn't mean it's suited to the purpose. A billion 1KB rows is roughly a terabyte of RAM, which already stretches the definition of "commodity hardware".
Don't do this for any integration at even medium scale.