zlacker

[return to "Rama on Clojure's terms, and the magic of continuation-passing style"]
1. kamma4+QJ[view] [source] 2024-10-14 11:24:05
>>nathan+(OP)
I don’t want to be a party pooper, but from my very cursory look at the page, I don’t think this will go much farther than a very small community. I feel like you’re adding a lot of complexity compared to the normal backend/frontend/webservices scenario that everybody already understands.
◧◩
2. diggan+CN[view] [source] 2024-10-14 12:02:59
>>kamma4+QJ
With that mindset, should we just stop trying to improve anything regarding backend/frontend/webservices since "everybody already understands it"?
◧◩◪
3. kimi+1R[view] [source] 2024-10-14 12:32:24
>>diggan+CN
I second the OP - I'm not sure where the big prize is. I have a feeling that whoever wrote the article thinks there is a 10x (or 100x) improvement to be made, but I was not able to see it.

I find the syntax very clunky, and I have been programming Clojure professionally for at least 10 years. It reminds me of core.async - a wonderful idea, but if you use the wrong sigil in the wrong place, you are dead in the water. Been there, done that - thanks but no thanks.

OTOH I know who Nathan is, so I'm sure there is a gem hidden somewhere. But the article did not convince me that I should go the Rama way for my next webapp. I doubt the average JS programmer will be convinced. Maybe someone else will find the gem, polish it, and everybody will be using a derivative in 5 years.

◧◩◪◨
4. nathan+J31[view] [source] 2024-10-14 14:10:17
>>kimi+1R
Well, this article is meant to help people understand just Rama's dataflow API; it's not an introduction to Rama for backend development.

Rama does have a learning curve. If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it. Here are two examples of how elegant it is:

This one does atomic bank transfers with cross-partition transactions, as well as keeping track of everyone's activity:

https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

This one does scalable time-series analytics, aggregating across multiple granularities and minimizing reads at query time by intelligently choosing buckets across multiple granularities:

https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

There are equivalent Java examples in that repository as well.

◧◩◪◨⬒
5. goosta+uk1[view] [source] 2024-10-14 15:58:00
>>nathan+J31
This question is probably obvious if I knew what a microbatch or topology or depot was, but as a Rama outsider, is there a good high-level mental model for what makes the cross-partition transactions work? From the comments that mention queuing and transaction order, is serializable isolation a good way to imagine what's going on behind the scenes, or is that way off base?
◧◩◪◨⬒⬓
6. nathan+Bp1[view] [source] 2024-10-14 16:27:44
>>goosta+uk1
A depot is a distributed log of events that you append to as a user. In this case, there's one depot for appending "deposits" (an increase to one user's account) and another depot for appending "transfers" (an attempt to move funds from one account to another).
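
To make "depot" concrete, here's a toy model in plain Clojure (my own sketch for illustration, not Rama's actual API): an append-only log split into partitions, where a partitioning key decides which partition an event lands on.

    ;; Toy depot: a vector of partitions, each an append-only log.
    ;; (Illustrative model only -- not Rama's API.)
    (def num-partitions 4)

    (defn empty-depot []
      (vec (repeat num-partitions [])))

    (defn depot-append [depot partition-key event]
      ;; the partitioning key routes the event to one partition
      (update depot (mod (hash partition-key) num-partitions) conj event))

    ;; one depot per event type, keyed by the account each event touches
    (def deposits  (atom (empty-depot)))
    (def transfers (atom (empty-depot)))

    (swap! deposits  depot-append "alice" {:to "alice" :amount 100})
    (swap! transfers depot-append "alice" {:from "alice" :to "bob" :amount 30})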

A microbatch topology is a coordinated computation across the entire cluster. It reads a fixed amount of data from each partition of each depot and processes it all in batch. Changes don't become visible until all computation is finished across all partitions.
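
A rough sketch of that batch-then-commit shape, again just my illustration in Clojure rather than Rama internals: take a fixed slice from every partition, fold it all into the next PState, and only the final result ever becomes visible.

    ;; Sketch of one microbatch (not Rama internals): fold a fixed
    ;; slice of events from every partition into the last committed
    ;; PState; readers never observe a partially-applied batch.
    (defn run-microbatch [committed-pstate partition-slices apply-event]
      (reduce apply-event committed-pstate (apply concat partition-slices)))

    ;; usage against a map-of-balances PState:
    (run-microbatch {"alice" 100}
                    [[{:to "alice" :amount 50}]   ; slice from partition 0
                     [{:to "bob"   :amount 20}]]  ; slice from partition 1
                    (fn [pstate {:keys [to amount]}]
                      (update pstate to (fnil + 0) amount)))
    ;;=> {"alice" 150, "bob" 20}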

Additionally, a microbatch topology always starts computation with the PStates (the indexed views that are like databases) at the state of the last microbatch. This means a microbatch topology has exactly-once semantics – it may need to reprocess if there's a failure (like a node dying), but since it always starts from the same state the results are as if there were no failures at all.
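
The retry behavior is easy to picture with a small sketch (mine, not Rama's code): a failed attempt is discarded wholesale, and the retry starts from the same committed PState, so reprocessing can't double-count anything.

    ;; Exactly-once from restarts (illustrative only): every attempt
    ;; begins from the SAME committed state, and we commit only when a
    ;; whole attempt succeeds, so failed partial attempts leave no trace.
    (defn microbatch-with-retries [committed-pstate batch apply-event]
      (loop []
        (let [result (try (reduce apply-event committed-pstate batch)
                          (catch Exception _ ::failed))]
          (if (= result ::failed)
            (recur)            ; retry from the same committed state
            result))))         ; this value is the new committed state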

Finally, all events on a partition execute in sequence. So when the code checks if the user has the required amount of funds for the transfer, there's no possibility of a concurrent deduction that would create a race condition that would invalidate the check.
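
One way to picture that (my analogy, using Clojure agents rather than anything Rama-specific) is a single serial queue per partition: because actions on an agent run one at a time, a check-and-deduct can't interleave with another event on the same partition.

    ;; One agent per partition: its actions run strictly one at a time,
    ;; so the funds check and the deduction form a single atomic event.
    (def partition-balances (agent {"alice" 100}))

    (defn try-deduct [balances user amount]
      (if (>= (get balances user 0) amount)
        (update balances user - amount)  ; check and deduct, one event
        balances))                       ; insufficient funds: no change

    (send partition-balances try-deduct "alice" 30)
    (send partition-balances try-deduct "alice" 90) ; runs strictly after
    (await partition-balances)
    @partition-balances  ;;=> {"alice" 70} -- the second deduct was refused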

So in this code, it first checks if the user has the required amount of funds. If so, it deducts that amount. This is safe because it's synchronous with the check. The code then changes to the partition storing the funds for the target user and adds that amount to their account. If they're receiving multiple transfers, those will be added one at a time because only one event runs at a time on a partition.
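
Putting the pieces together, here's how I'd sketch the whole transfer over two such partition agents (hypothetical helpers, not the actual Rama dataflow code from the example):

    ;; Source partition deducts synchronously with the funds check,
    ;; then hands the credit to the target partition, where credits
    ;; also apply one at a time.
    (def source-part (agent {"alice" 100}))
    (def target-part (agent {"bob" 0}))

    (defn transfer! [from to amount]
      (send source-part
            (fn [balances]
              (if (>= (get balances from 0) amount)
                (do (send target-part update to (fnil + 0) amount)
                    (update balances from - amount))
                balances))))  ; insufficient funds: transfer dropped

    (transfer! "alice" "bob" 30)
    (await source-part)
    (await target-part)
    [@source-part @target-part]  ;;=> [{"alice" 70} {"bob" 30}]

In real Rama the hand-off is the partitioner moving the computation to the partition that owns the target user's funds, but the serialization guarantee has the same shape.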

To summarize:

- Colocating computation and storage eliminates race conditions

- Microbatch topologies have exactly-once semantics because computation always starts from the exact same state, regardless of failures or how far the last attempt progressed

The docs have more detail on how this works: https://redplanetlabs.com/docs/~/microbatch.html#_operation_...
