So if you update your offset to 4 before the transaction for job 3 commits, you'll lose job 3 forever (unless you control for this, e.g. by periodically detecting and requeueing orphaned jobs, or by using a strictly serial, gap-free integer rather than the built-in auto-incrementing type).
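A sketch of the "detect & requeue orphaned jobs" approach, assuming hypothetical `status` and `claimed_at` columns on a `jobs` table (neither is given in the original, the names are illustrative):

```sql
-- Hypothetical schema: jobs(id, payload, status, claimed_at).
-- Periodically requeue anything that was claimed but never finished,
-- using a timeout you'd tune to your workload.
UPDATE jobs
   SET status = 'available',
       claimed_at = NULL
 WHERE status = 'in_progress'
   AND claimed_at < now() - interval '10 minutes';
```

Run from a cron job or a background worker; the timeout just needs to be comfortably longer than your slowest legitimate job.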
Advisory locks also exist, if you want the application to coordinate on row-level conditions itself, without the DB attaching any meaning to the locks.
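For example, a worker could claim a job by its id with a non-blocking advisory lock (these are real Postgres functions; the key here is just the job id):

```sql
-- Returns true if we got the lock, false if another worker holds it.
SELECT pg_try_advisory_lock(42);

-- ... process job 42 in the application ...

-- Release when done (session-level locks also vanish on disconnect).
SELECT pg_advisory_unlock(42);
```

Since the lock lives in the session rather than in a row, a crashed worker releases it automatically when its connection drops.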
But for the example given, you could do many things:
* Add boolean columns for ack and complete. Index performance on these will eventually suffer because of their low cardinality, but by that point (millions of rows) you'll have had time to think of another solution.
* Add ctime and atime columns. ctime can be handled natively with a `DEFAULT now()`; atime needs a small trigger, since Postgres has no `ON UPDATE` clause. This has the advantage of lending itself nicely to partitioning by date range, if that becomes necessary.
* Have three tables - available, in_progress, and completed. Use triggers or application logic to move entries in an atomic manner.
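The timestamp-column option could be sketched like this (table and function names are illustrative):

```sql
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb,
    ctime   timestamptz NOT NULL DEFAULT now(),
    atime   timestamptz NOT NULL DEFAULT now()
);

-- Postgres has no ON UPDATE clause, so atime is maintained by a trigger.
CREATE FUNCTION touch_atime() RETURNS trigger AS $$
BEGIN
    NEW.atime := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER jobs_touch_atime
    BEFORE UPDATE ON jobs
    FOR EACH ROW EXECUTE FUNCTION touch_atime();
```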
None of this is necessarily normalized, but if you’re receiving JSON payloads for the job, 1NF went out the window anyway.
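For the three-table option, one way to make the move atomic from the application side (assuming matching `available` and `in_progress` tables; `SKIP LOCKED` keeps two workers from grabbing the same row):

```sql
BEGIN;
WITH claimed AS (
    DELETE FROM available
     WHERE id = (
         SELECT id
           FROM available
          ORDER BY id
          LIMIT 1
            FOR UPDATE SKIP LOCKED
     )
 RETURNING id, payload
)
INSERT INTO in_progress (id, payload)
SELECT id, payload FROM claimed;
COMMIT;
```

The delete and insert commit together, so a crash mid-transaction leaves the job in `available` rather than losing it.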