The trouble with hitting an HTTP API to queue a task is: what if it fails, or worse, what if you can't tell whether it failed? You can keep retrying in-band (at a definite latency cost), but if you eventually give up, you can't be sure no job was queued for which you never got a proper ack. In practice this leads to a lot of uncertainty around the edges, with operators having to reconcile things manually.
There are definite scaling benefits to throwing tasks into Google's limitless compute power, but there are a lot of cases where a smaller, more correct queue is plenty of power, especially where Postgres is already the database of choice.
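The appeal of the database-backed queue is that the ambiguous-failure case above disappears: the job row is inserted in the same transaction as the business write, so either both commit or neither does. Here's a minimal sketch of that idea; sqlite3 stands in for Postgres so it's self-contained, and the table and column names are illustrative, not from the post being discussed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, kind TEXT, order_id INTEGER)")

def place_order_with_job(conn, total):
    # One transaction covers both the business write and the enqueue, so a
    # crash can never leave a queued job without its order, or vice versa.
    # (sqlite3's connection context manager commits on success and rolls
    # back on an exception.)
    with conn:
        cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        order_id = cur.lastrowid
        conn.execute(
            "INSERT INTO jobs (kind, order_id) VALUES (?, ?)",
            ("send_receipt", order_id),
        )
    return order_id

order_id = place_order_with_job(conn, 19.99)
jobs = conn.execute("SELECT kind, order_id FROM jobs").fetchall()
```

With an external queue service you'd need an outbox table or manual reconciliation to get the same guarantee; here it falls out of ordinary transaction semantics.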
This is covered in the GCP Tasks documentation.
> There are definite scaling benefits to throwing tasks into Google's limitless compute power, but there are a lot of cases where a smaller, more correct queue is plenty of power, especially where Postgres is already the database of choice.
My post was talking about what I would implement if I were doing my own queue, as the authors were. Not about using GCP Tasks.
No, we don't operate like that. Call me out when I'm wrong technically, but don't tell me that because someone is some sort of celebrity I should cut them some slack.
Everything he pointed out is literally covered in the GCP Tasks documentation.
You're being "called out" (ugh) incredibly politely mostly because you were being a bit rude; "tell me X without telling me" is just a bit unpleasant, and totally counterproductive.
> because someone is some sort of celebrity that I should cut them some slack.
No one mentioned a celebrity. You're not railing against the power of celebrity here; it was just a call for politeness.
> Everything he pointed out is literally covered in the GCP Tasks documentation.
Yes, e.g. as pitfalls.
The request to get a message returns a token that identifies this receive.
You use that token to delete the message when you are done.
Jobs that don’t succeed after N retries get marked as dead and go into the dead letter list.
This is the way AWS SQS works; it's tried and true.
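The contract above can be sketched in a few lines: receiving a message yields a one-time token (SQS calls it a receipt handle), the consumer deletes by that token when done, and a message that is received N times without being deleted goes to the dead letter list. This is an in-memory illustration of the semantics, not SQS's actual API; the names and the retry policy are assumptions for the sketch.

```python
import uuid
from collections import deque

MAX_RETRIES = 3  # illustrative; SQS makes this a queue redrive policy

class Queue:
    def __init__(self):
        self.messages = deque()  # pending (body, receive_count) pairs
        self.in_flight = {}      # token -> (body, receive_count)
        self.dead = []           # bodies that exhausted their retries

    def send(self, body):
        self.messages.append((body, 0))

    def receive(self):
        # Hand out the next message plus a token identifying this receive.
        if not self.messages:
            return None, None
        body, count = self.messages.popleft()
        token = uuid.uuid4().hex
        self.in_flight[token] = (body, count + 1)
        return token, body

    def delete(self, token):
        # Ack: the consumer finished, so the message is gone for good.
        self.in_flight.pop(token)

    def timeout(self, token):
        # Visibility timeout expired without a delete: retry or dead-letter.
        body, count = self.in_flight.pop(token)
        if count >= MAX_RETRIES:
            self.dead.append(body)
        else:
            self.messages.append((body, count))

q = Queue()
q.send("resize-image-42")
for _ in range(MAX_RETRIES):  # consumer keeps failing to delete in time
    token, body = q.receive()
    q.timeout(token)
dead = q.dead  # ["resize-image-42"] after the retries are exhausted
```

The token-per-receive detail matters: deleting by token rather than by message id means a slow consumer whose message was redelivered to someone else can't accidentally ack the newer delivery.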