1. SELECT an item_id WHERE expire = 0. If the result is empty, no items are available.
2. UPDATE SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0, then check whether the UPDATE affected any rows. If it did, that item_id is yours; if not, loop back to step 1. If the database has a sane optimizer, it will note that at most one row needs locking, since the primary key is given.
All this needs is a very weak property: a document-level atomic UPDATE that can report whether it changed anything. (How weak? MongoDB could do that in 2009.)
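To make that concrete, here is a minimal sketch of the claim loop in Python using the standard-library sqlite3 module. The table and column names are assumptions for illustration, not the actual Drupal schema; the only property the loop relies on is that UPDATE reports how many rows it changed.

    import sqlite3
    import time

    def claim_item(conn, lease_seconds=300):
        """Try to claim one queue item; return its item_id, or None if the queue is empty."""
        while True:
            # Step 1: find a candidate item that nobody currently holds a lease on.
            row = conn.execute(
                "SELECT item_id FROM queue WHERE expire = 0 LIMIT 1"
            ).fetchone()
            if row is None:
                return None  # no items available
            item_id = row[0]

            # Step 2: try to claim it with a single conditional UPDATE.
            cur = conn.execute(
                "UPDATE queue SET expire = ? WHERE item_id = ? AND expire = 0",
                (int(time.time()) + lease_seconds, item_id),
            )
            conn.commit()
            if cur.rowcount == 1:
                return item_id  # the conditional update matched: the item is ours
            # rowcount == 0: another worker claimed it between the two statements, so loop.

The same pattern works on any store that offers a conditional single-row atomic update and reports whether it matched; the SQLite flavour here is only for illustration.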
Source code at https://git.drupalcode.org/project/drupal/-/blob/9.2.x/core/... (We cooked this up for Drupal in 2009 but I am reasonably sure we didn't invent anything new.)
Of course, this is not the fastest job queue there is but it is quite often good enough.
Whatever happened to using the proper data structure for the job? PostgreSQL and MySQL are, at the end of the day, B-trees with indexes. Throw in relational properties and/or ACID on top. Those aren't properties you need or want in a queue.
I know I don't have a solid argument against doing it; it's just experience (and, dare I say, common sense) telling me not to. Not quite the parent's situation, but I spent the first two years of my professional career on a team that had the brilliant idea of using DBs as queues. The big task I took part in during that stint was moving them off that v2 onto a v3 which used, wait for it, Redis. Everyone's quality of life improved with every migration, proportional to the size of the v2 cluster we retired.
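For contrast, a purpose-built queue is very little code. This is only a rough sketch of what such a v3 might look like, using the redis-py client and a plain Redis list (the key name "jobs" and the JSON payload format are assumptions for illustration): LPUSH enqueues, BRPOP blocks until work is available.

    import json
    import redis  # redis-py client

    r = redis.Redis()  # assumes a Redis instance on localhost

    def enqueue(job: dict) -> None:
        # Producers push onto the left end of the list.
        r.lpush("jobs", json.dumps(job))

    def work_forever() -> None:
        while True:
            # BRPOP blocks until an item arrives (timeout=0 means wait indefinitely).
            _key, payload = r.brpop("jobs", timeout=0)
            job = json.loads(payload)
            # ... process the job ...

Note that a bare list like this gives up the database's durability and retry guarantees for speed; whether that trade is acceptable depends on the workload, which is exactly the point of picking the structure to match the job.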
This, so much. People too often treat databases as magical black boxes that should handle anything. The database is most often the bottleneck, and choosing the proper storage engine with appropriate data structures can be 100x more efficient than just using the defaults. One server vs. a hundred can definitely make a noticeable difference in cost and system complexity.
While premature optimization is bad, choosing the right tool for the job is still somewhat important and will usually pay off in the long run.
I agree that there are many cases with low workload where that would be plenty.
The direct, visible effects are wrong decisions entangled in spaghetti-like complexity.
It's hard to hit a technical bottleneck in well-designed systems; computers are really fast nowadays. Bottlenecks vary greatly depending on what kind of system it is. Of the resources (CPU, memory, network, disk I/O), the weakest of them will likely be saturated first, which is often the network. But that's not a rule; it's easy to have a system that saturates, e.g., the CPU first.