Unfortunately, Kafka carries enough complexity (due to its distributed nature) that it ends up not being worth it for most use-cases.
Personally I'd love something similar that's easier to operate. A single node could probably handle hundreds (if not thousands) of events per second, and dropping the distributed complexity would make it really pleasant to run.
And yes, in theory you could still use Postgres for this (and just never delete rows). And maybe that's the answer.
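Something like this, as a rough sketch (table and column names are made up):

    -- append-only event log; the id doubles as the consumer offset
    CREATE TABLE events (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL,
        at      timestamptz NOT NULL DEFAULT now()
    );

    -- each consumer remembers the last id it processed and polls:
    SELECT id, payload
    FROM events
    WHERE id > :last_seen    -- :last_seen = the consumer's stored offset
    ORDER BY id
    LIMIT 100;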
The catch is that sequence values are handed out at insert time, but a row only becomes visible when its transaction commits, and commits can happen out of id order: a consumer can see row 4 while row 3's transaction is still open. So if you update your offset to 4 before the transaction for 3 commits, you'll lose 3 forever (unless you control for this, e.g. by periodically detecting and requeueing orphaned jobs, or by using a strictly serial/gap-free integer rather than the built-in auto-incrementing type).
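If you go the gap-free route, the usual trick is a single-row counter that every producing transaction locks before inserting. A sketch of that idea (not battle-tested, and here events.id would be a plain bigint rather than bigserial):

    -- single-row counter; writers serialize on the row lock, so ids
    -- are assigned and become visible strictly in order, with no gaps
    CREATE TABLE event_counter (last_id bigint NOT NULL);
    INSERT INTO event_counter VALUES (0);

    -- inside each producing transaction:
    WITH next AS (
        UPDATE event_counter SET last_id = last_id + 1
        RETURNING last_id
    )
    INSERT INTO events (id, payload)
    SELECT last_id, '{"type": "example"}'::jsonb
    FROM next;

The UPDATE holds its row lock until commit, so producers can't commit out of order and the consumer can safely treat its offset as a high-water mark. The obvious tradeoff: every writer now contends on one row, which caps your write throughput, though at hundreds of events per second on a single node that's probably fine.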