I find the syntax very clunky, and I have been programming Clojure professionally for at least 10 years. It reminds me of core.async - a wonderful idea, but if you use the wrong sigil in the wrong place, you are dead in the water. Been there, done that - thanks but no thanks.
OTOH I know who Nathan is, so I'm sure there is a gem hidden somewhere. But the article did not convince me that I should go the Rama way for my next webapp. I doubt the average JS programmer will be convinced. Maybe someone else will find the gem, polish it, and everybody will be using a derivative in 5 years.
Rama does have a learning curve. If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it. Here are two examples of how elegant it is:
This one does atomic bank transfers with cross-partition transactions, as well as keeping track of everyone's activity:
https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...
This one does scalable time-series analytics, aggregating at multiple granularities and minimizing reads at query time by intelligently choosing buckets across those granularities:
https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...
There are equivalent Java examples in that repository as well.
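The bucket-choosing idea in that time-series example is worth spelling out: covering a query range with coarse buckets in the middle and fine buckets only at the ragged edges is what keeps the number of reads low. Here's a tiny standalone sketch of the idea in plain Clojure - my own hypothetical function, using just minute and hour granularities, not code from the gallery:

    (defn buckets-for-range
      "Cover [start, end) in epoch minutes using whole-hour buckets
       where possible and minute buckets at the ragged edges."
      [start end]
      (let [first-hour (* 60 (long (Math/ceil (/ start 60.0))))
            last-hour  (* 60 (long (Math/floor (/ end 60.0))))]
        (if (<= last-hour first-hour)
          ;; range too small to contain a whole hour
          {:minute-buckets (range start end) :hour-buckets nil}
          {:minute-buckets (concat (range start first-hour)
                                   (range last-hour end))
           :hour-buckets   (range first-hour last-hour 60)})))

    ;; (buckets-for-range 55 125)
    ;; => {:minute-buckets (55 56 57 58 59 120 121 122 123 124),
    ;;     :hour-buckets (60)}
    ;; 10 minute buckets + 1 hour bucket instead of 70 minute buckets

The real module generalizes this across more granularities, but the principle is the same.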
A microbatch topology is a coordinated computation across the entire cluster. It reads a fixed amount of data from each partition of each depot and processes it all as a single batch. Changes don't become visible until all computation has finished across all partitions.
Additionally, a microbatch topology always starts its computation with the PStates (the indexed views that are like databases) in the state left by the last microbatch. This gives microbatch topologies exactly-once semantics: a microbatch may need to be reprocessed after a failure (like a node dying), but since it always starts from the same state, the results are as if no failure had occurred.
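For a concrete feel, here's a minimal sketch of a microbatch word-count topology in the Clojure API. This is a sketch under my own assumptions - the module, depot, and PState names are placeholders, not code from the gallery:

    (use 'com.rpl.rama)
    (require '[com.rpl.rama.aggs :as aggs])

    (defmodule WordCountModule [setup topologies]
      ;; depot partitioned by the word itself
      (declare-depot setup *word-depot (hash-by identity))
      (let [mb (microbatch-topology topologies "word-count")]
        ;; PState: a durable indexed view, partitioned across the cluster
        (declare-pstate mb $$word-counts {String Long})
        (<<sources mb
          ;; %microbatch represents the fixed batch of depot data for this
          ;; microbatch; invoking it emits each record in the batch
          (source> *word-depot :> %microbatch)
          (%microbatch :> *word)
          (+compound $$word-counts {*word (aggs/+count)}))))

Because each microbatch starts from the PState as of the previous microbatch, a retry after a failure recomputes the same counts from the same starting point.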
Finally, all events on a partition execute in sequence. So when the code checks whether the user has the required funds for the transfer, there's no possibility of a concurrent deduction creating a race condition that invalidates the check.
So this code first checks whether the sending user has the required funds. If so, it deducts that amount, which is safe because the deduction is synchronous with the check. The computation then moves to the partition storing the target user's funds and adds the amount to their account. If that user is receiving multiple transfers, they're applied one at a time, because only one event runs at a time on a partition.
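Here's roughly what that flow looks like in the Clojure API. This is a hedged sketch, not the gallery module verbatim, and the names (*from, *to, *amt, $$funds) are placeholders of mine:

    (use 'com.rpl.rama)
    (use 'com.rpl.rama.path)

    (defmodule TransferModule [setup topologies]
      ;; partition transfers by sender, so each transfer begins on the
      ;; partition holding the sender's funds
      (declare-depot setup *transfer-depot (hash-by :from))
      (let [mb (microbatch-topology topologies "transfers")]
        (declare-pstate mb $$funds {String Long})
        (<<sources mb
          (source> *transfer-depot :> %microbatch)
          (%microbatch :> {:keys [*from *to *amt]})
          ;; read the sender's balance; no other event can run on this
          ;; partition until this event completes
          (local-select> [(keypath *from) (nil->val 0)] $$funds :> *balance)
          (<<if (<= *amt *balance)
            ;; deduct synchronously with the check...
            (local-transform> [(keypath *from) (term #(- % *amt))] $$funds)
            ;; ...then hop to the receiver's partition and credit them
            (|hash *to)
            (local-transform> [(keypath *to) (nil->val 0) (term #(+ % *amt))]
                              $$funds)))))

The (|hash *to) partitioner is the cross-partition step: everything after it runs on the partition owning the receiver's funds, again one event at a time.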
To summarize:
- Colocating computation and storage eliminates these race conditions
- Microbatch topologies have exactly-once semantics because computation always starts from the same state, regardless of failures or how far the last attempt progressed
The docs have more detail on how this works: https://redplanetlabs.com/docs/~/microbatch.html#_operation_...