It's a much better model for the majority of queues. All you're doing is storing a message, hitting an HTTP endpoint, and deleting the message on success. That makes task execution so much easier to scale, reason about, and test.
Update: since multiple people seem confused, I'm talking about the implementation of a job queue system, not suggesting that they use the GCP Tasks product. That said, I would have just used GCP Tasks too, assuming the use case called for it; it's a fantastic and rock-solid product.
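To make the store → POST → delete loop concrete, here's a minimal sketch. The SQLite table, the local handler URL, and the 30-second client timeout are illustrative assumptions, not details from my setup or the author's:

```python
import sqlite3
import requests

db = sqlite3.connect("queue.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(payload: str) -> None:
    # 1. Store the message durably before anything else happens.
    db.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))
    db.commit()

def dispatch_once(handler_url: str = "http://localhost:8080/run-task") -> None:
    # 2. Push each stored message to the worker's HTTP endpoint, and
    # 3. delete it only after the handler responds with a 2xx.
    for task_id, payload in db.execute("SELECT id, payload FROM queue").fetchall():
        resp = requests.post(handler_url, data=payload, timeout=30)
        if resp.ok:
            db.execute("DELETE FROM queue WHERE id = ?", (task_id,))
            db.commit()
        # On a non-2xx response the row stays put and is retried on the next pass.
```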
Good luck with a long-running batch.
Again, I'm getting downvoted. The whole point of my comment isn't about using GCP Tasks; it's about what I would do if I were implementing my own queue system, as the author did.
By the way, that 30-minute limit can be worked around with checkpoints or by breaking the task into smaller chunks, which isn't a bad idea to do anyway. I've seen long-running tasks cause all sorts of downstream problems when they fail and then take forever to run again.
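A hedged sketch of the checkpoint/chunking idea: each HTTP invocation handles one fixed-size batch and re-enqueues a follow-up task carrying a cursor, so no single request runs anywhere near the timeout. The `process_item` helper, the batch size, and the task shape are made up for illustration:

```python
from typing import Optional

BATCH = 1000

def process_item(item_id: int) -> None:
    ...  # the real per-item work goes here

def handle_chunk(task: dict) -> Optional[dict]:
    start = task.get("cursor", 0)
    end = min(start + BATCH, task["total"])
    for item_id in range(start, end):
        process_item(item_id)
    if end >= task["total"]:
        return None                      # finished; nothing left to enqueue
    return {**task, "cursor": end}       # checkpoint: enqueue the remainder
```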
With an HTTP hop in the middle, a long-running job has to survive the timeouts and connection limits of every layer in the path:
- HTTP libraries
- web servers
- application servers
- load balancers
- reverse proxy servers
- the cloud platform you're running on
- WAFs
It might be alright for smaller "tasks", but not for "jobs".
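Illustrative only: with an HTTP hop, the shortest timeout anywhere in that chain is the hard ceiling on a job. The values below are common defaults for these kinds of layers, not measurements from any particular stack:

```python
hop_timeouts_s = {
    "HTTP client": 30,
    "reverse proxy": 60,
    "load balancer (idle)": 60,
    "application server worker": 30,
}
# Whichever hop times out first cuts the request off for everyone.
print(f"effective ceiling on one task: {min(hop_timeouts_s.values())}s")
```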