1. Commit the changes in the DB first: if enqueueing the task then fails, you are left with data rows in the DB but no task to process them.
2. Push the task first: the task may start too early, before the DB transaction commits, and it cannot find the rows that are still inside the uncommitted transaction. You then need to retry on failure.
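A minimal sketch of the two failure modes, using SQLite as a stand-in for the app database and a plain list as a stand-in for the task queue (the `enqueue` helper and `queue_is_down` flag are illustrative, not a real API):

```python
import sqlite3

# Stand-ins: SQLite for the app DB, a list for the broker/queue.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rows (id INTEGER PRIMARY KEY, payload TEXT)")
queue = []

def enqueue(task, *, queue_is_down=False):
    # Hypothetical broker call; `queue_is_down` simulates an outage.
    if queue_is_down:
        raise ConnectionError("queue unreachable")
    queue.append(task)

# Option 1: commit first, then enqueue. If the enqueue fails,
# the row is committed but no task will ever process it.
db.execute("INSERT INTO rows (payload) VALUES ('job-1')")
db.commit()
try:
    enqueue("process job-1", queue_is_down=True)
except ConnectionError:
    pass  # orphaned row: committed data, no task in the queue

# Option 2: enqueue first, then commit. A fast worker can pick up
# the task before the commit lands and see no matching row.
enqueue("process job-2")
visible = db.execute(
    "SELECT COUNT(*) FROM rows WHERE payload = 'job-2'"
).fetchone()[0]  # what a worker running right now would see: 0 rows
db.execute("INSERT INTO rows (payload) VALUES ('job-2')")
db.commit()
```

Either ordering leaves a window where the data rows and the task disagree, which is exactly what a transactional enqueue is meant to close.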
We also looked at Celery, hoping it offers something similar, but the issue has been open for years:
https://github.com/celery/celery/issues/5149
To meet this need, I built a simple Python library on top of SQLAlchemy:
https://github.com/LaunchPlatform/bq
It would be super cool if Hatchet also supported native SQL inserts via ORM frameworks. Without the ability to commit tasks together with all the other data rows, I think it misses out on part of the benefit of using a database as the work queue backend.
It seems like a very lightweight tasks table in your existing PG database, representing whether or not the task has been written to Hatchet, would solve both of these cases. Once Hatchet is sent the workflow/task to execute, it's guaranteed to be enqueued/requeued. That way, you could get the other benefits of Hatchet while still getting transactional enqueueing. We could definitely add this for certain ORM frameworks/SDKs with enough interest.
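That lightweight tasks table is essentially the transactional outbox pattern. A sketch under stated assumptions (SQLite stands in for Postgres, and `send_to_hatchet` is a hypothetical stand-in for whatever SDK call triggers the workflow, not a real Hatchet API):

```python
import sqlite3

# The data row and its task row commit atomically; a relay later pushes
# pending tasks to the queue and marks them sent.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        workflow TEXT,
        order_id INTEGER,
        sent INTEGER DEFAULT 0  -- 0 = pending, 1 = handed to Hatchet
    );
""")

sent_workflows = []

def send_to_hatchet(workflow, order_id):
    # Hypothetical stand-in for the real SDK call.
    sent_workflows.append((workflow, order_id))

# 1. Insert the data row and the task row in one transaction:
#    either both exist or neither does.
cur = db.execute("INSERT INTO orders (item) VALUES ('widget')")
db.execute(
    "INSERT INTO tasks (workflow, order_id) VALUES (?, ?)",
    ("process-order", cur.lastrowid),
)
db.commit()

# 2. Relay pass: push each pending task, then mark it sent. If the push
#    fails, the row stays pending and is retried on the next pass, which
#    is what makes the enqueue guaranteed (at-least-once) after commit.
for task_id, workflow, order_id in db.execute(
    "SELECT id, workflow, order_id FROM tasks WHERE sent = 0"
).fetchall():
    send_to_hatchet(workflow, order_id)
    db.execute("UPDATE tasks SET sent = 1 WHERE id = ?", (task_id,))
    db.commit()
```

Because the relay retries anything still pending, workers should treat task delivery as at-least-once and process idempotently.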