1. SELECT item_id WHERE expire = 0. If this is empty, no items are available.
2. UPDATE SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0. Then check whether the UPDATE affected any rows. If it did, item_id is yours. If not, loop. If the database has a sane optimizer it'll note that at most one row needs locking, since the primary key is given.
All this needs is a very weak property: a document-level atomic UPDATE that can report whether it changed anything. (How weak? MongoDB could do that in 2009.)
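To make the loop concrete, here is a minimal Python sketch of the pattern under stated assumptions: the table name `queue`, its columns, and the 30-second lease are illustrative, not the actual Drupal schema, and sqlite3 stands in for any SQL store whose UPDATE reports how many rows it changed.

```python
import sqlite3
import time

def claim_item(conn, lease_seconds=30):
    """Try to claim one unclaimed item; return its id, or None if the queue is empty."""
    while True:
        # Step 1: find a candidate that nobody has claimed yet.
        row = conn.execute(
            "SELECT item_id FROM queue WHERE expire = 0 LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # no items are available
        item_id = row[0]

        # Step 2: claim it with a conditional UPDATE; expire = 0 in the WHERE
        # clause is what makes the claim atomic.
        cur = conn.execute(
            "UPDATE queue SET expire = ? WHERE item_id = ? AND expire = 0",
            (int(time.time()) + lease_seconds, item_id),
        )
        conn.commit()
        if cur.rowcount == 1:
            return item_id  # our UPDATE won the race; the item is ours
        # rowcount == 0 means another worker claimed it first; loop and retry


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE queue (item_id INTEGER PRIMARY KEY, expire INTEGER NOT NULL DEFAULT 0)"
    )
    conn.executemany("INSERT INTO queue (item_id) VALUES (?)", [(1,), (2,), (3,)])
    conn.commit()
    print(claim_item(conn))  # prints the id of whichever unclaimed item was grabbed
```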
Source code at https://git.drupalcode.org/project/drupal/-/blob/9.2.x/core/... (We cooked this up for Drupal in 2009 but I am reasonably sure we didn't invent anything new.)
Of course, this is not the fastest job queue there is, but it is quite often good enough.
You’d be trying to store a tiny bit of simple state, and all the books and articles would have you standing up read-only views and stored procedures for all your CRUD ops. The document stores came along with a fresh perspective and easy scaling.
Then there were the columnar stores and time-series stores that really did solve the newer scale problems in ways the existing SQL stores didn’t.
I’m a SQL guy through and through, but it’s important to recognize that the NoSQL movement was a reaction to real pain points. It also made the SQL databases better.
I have been using RDBMSes all along, since 1996.
Nokia NetAct was scaling gigabytes of data across multiple clusters with an OLAP reporting engine in 2005, with no hiccups.
The experience I had with DynamoDB kind of proved to me I haven't lost anything by staying in the SQL path.
Most NoSQL deployments I have seen could have been done easily in Oracle or SQL Server, provided they actually had a DBA on the team.
Mongo said, “Throw data at me and I’ll scale with very little work.”
Now, I’ve always largely believed that’s penny-wise and pound-foolish, but it’s certainly a good pitch.