zlacker

[parent] [thread] 19 comments
1. diggan+(OP)[view] [source] 2024-10-14 12:02:59
With that mindset, should we just stop trying to improve anything regarding backend/frontend/webservices, since "everybody already understands it"?
replies(2): >>jgalt2+e2 >>kimi+p3
2. jgalt2+e2[view] [source] 2024-10-14 12:21:45
>>diggan+(OP)
You're arguing change (a broadly defined term) is not necessarily bad, but the OP is arguing adding complexity (a type of change) is bad.
3. kimi+p3[view] [source] 2024-10-14 12:32:24
>>diggan+(OP)
I second the OP - I'm not sure where the big prize is. I have a feeling that whoever wrote the article thinks there is a 10x (or 100x) improvement to be made, but I was not able to see it.

I find the syntax very clunky, and I have been programming Clojure professionally for at least 10 years. It reminds me of clojure.async - wonderful idea, but if you use the wrong sigil at the wrong place, you are dead in the water. Been there, done that - thanks but no thanks.

OTOH I know who Nathan is, so I'm sure there is a gem hidden somewhere. But the article did not convince me that I should go the Rama way for my next webapp. I doubt the average JS programmer will be convinced. Maybe someone else will find the gem, polish it, and everybody will be using a derivative in 5 years.

replies(4): >>bbor+C4 >>stingr+K4 >>educti+89 >>nathan+7g
4. bbor+C4[view] [source] [discussion] 2024-10-14 12:43:10
>>kimi+p3
TBF "this Clojure library has clunky syntax that makes it brittle" is a far more sophisticated and valid critique than "it's not built on Node so no one will use it" ;)
5. stingr+K4[view] [source] [discussion] 2024-10-14 12:44:26
>>kimi+p3
I would have expected better from HN than to shoot down smart people tinkering with potentially elegant solutions to complex problems. It’s something we should embrace.

Having said that, as a long-term Clojure developer myself, I’m also not a big fan of this approach (I try to avoid libraries that use a lot of macros, and instead prefer a more “data driven” approach, which is also why I’m not a fan of spec) - but I’m not one to judge.

6. educti+89[view] [source] [discussion] 2024-10-14 13:18:54
>>kimi+p3
> It reminds me of clojure.async - wonderful idea, but if you use the wrong sigil at the wrong place, you are dead in the water.

Isn’t that how any programming works? If you call the wrong function, pass the wrong var, typo a hash key, etc., the whole thing can blow up. Not sure how it’s a knock on core.async that you have to use the right macro or function in the right place. Are there async libraries that let you typo the name of their core components? (And yes, some of the macros are named like “<!” - is that naming the issue?)

replies(2): >>synthc+6h >>Valent+IJ
7. nathan+7g[view] [source] [discussion] 2024-10-14 14:10:17
>>kimi+p3
Well, this article is to help people understand just Rama's dataflow API, as opposed to an introduction to Rama for backend development.

Rama does have a learning curve. If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it. Here are two examples of how elegant it is:

This one does atomic bank transfers with cross-partition transactions, as well as keeping track of everyone's activity:

https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

This one does scalable time-series analytics, aggregating across multiple granularities and minimizing reads at query time by intelligently choosing buckets across multiple granularities:

https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

There are equivalent Java examples in that repository as well.

replies(2): >>refulg+1o >>goosta+Sw
8. synthc+6h[view] [source] [discussion] 2024-10-14 14:17:06
>>educti+89
The difference is in how easy it is to detect the cause of problems. Mistakes like wrong function names are mostly easy to find and fix. Mistakes when using core.async can be very hard to track down.
replies(1): >>educti+UG
9. refulg+1o[view] [source] [discussion] 2024-10-14 15:02:37
>>nathan+7g
> If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it.

Sigh.

10. goosta+Sw[view] [source] [discussion] 2024-10-14 15:58:00
>>nathan+7g
This question is probably obvious if I knew what a microbatch or topology or depot was, but as a Rama outsider, is there a good high-level mental model for what makes the cross-partition transactions work? From the comments that mention queuing and transaction order, is serializable isolation a good way to imagine what's going on behind the scenes, or is that way off base?
replies(1): >>nathan+ZB
11. nathan+ZB[view] [source] [discussion] 2024-10-14 16:27:44
>>goosta+Sw
A depot is a distributed log of events that you append to as a user. In this case, there's one depot for appending "deposits" (an increase to one user's account) and another depot for appending "transfers" (an attempt to move funds from one account to another).

A microbatch topology is a coordinated computation across the entire cluster. It reads a fixed amount of data from each partition of each depot and processes it all in batch. Changes don't become visible until all computation is finished across all partitions.

Additionally, a microbatch topology always starts computation with the PStates (the indexed views that are like databases) at the state of the last microbatch. This means a microbatch topology has exactly-once semantics – it may need to reprocess if there's a failure (like a node dying), but since it always starts from the same state the results are as if there were no failures at all.

Finally, all events on a partition execute in sequence. So when the code checks if the user has the required amount of funds for the transfer, there's no possibility of a concurrent deduction that would create a race condition that would invalidate the check.

So in this code, it first checks if the user has the required amount of funds. If so, it deducts that amount. This is safe because it's synchronous with the check. The code then changes to the partition storing the funds for the target user and adds that amount to their account. If they're receiving multiple transfers, those will be added one at a time because only one event runs at a time on a partition.

To summarize:

- Colocated computation and storage eliminates race conditions

- Microbatch topologies have exactly-once semantics due to starting computation at the exact same state every time regardless of failures or how much it progressed on the last attempt

The docs have more detail on how this works: https://redplanetlabs.com/docs/~/microbatch.html#_operation_...
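The "only one event runs at a time on a partition" part can be sketched in plain Clojure as a toy, single-process analogy - an agent playing the role of a partition. To be clear, this is not Rama's actual API, and the account names and amounts are made up:

```clojure
;; Toy analogy, NOT Rama's API: one agent stands in for one partition.
;; An agent runs its actions one at a time, so the funds check and the
;; deduction below can never interleave with another transfer.
(def accounts (agent {"alice" 100, "bob" 0}))

(defn transfer [state from to amount]
  (if (>= (get state from 0) amount)       ; check the sender's funds...
    (-> state
        (update from - amount)             ; ...and deduct, in the same event
        (update to (fnil + 0) amount))     ; credit the target account
    state))                                ; insufficient funds: no-op

(send accounts transfer "alice" "bob" 30)
(await accounts)
@accounts
;; => {"alice" 70, "bob" 30}
```

The agent only illustrates the serialization point; in Rama the check and the credit additionally live on different partitions, and exactly-once comes from microbatch replay as described above.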

12. educti+UG[view] [source] [discussion] 2024-10-14 16:55:01
>>synthc+6h
Not at all my experience. Do you have any examples?

OP called it "clojure.async." I question how much they've really used it.

replies(1): >>kimi+7h2
13. Valent+IJ[view] [source] [discussion] 2024-10-14 17:10:26
>>educti+89
No, it is different, because libraries such as core.async or Rama rely on inversion of control [1]: the framework is in charge of the control flow, and code fed to the framework will be executed by some kind of black box. To achieve this, these frameworks build their own machinery on top of existing core facilities (normal functions, call stacks, etc.) to implement similar concepts (Rama ops, for instance) one level above. The real issues arise when something goes wrong.

If you're lucky you'll get an exception but it won't tell you anything about the process you described at the framework level using the abstractions it offers (like core.async channels). The exception will just tell you how the framework's "executor" failed at running some particular abstraction. You'll be able to follow the flow of the executor but not the flow of the process it executes. In other words the exception is describing what is happening one level of abstraction too low.

If you're not lucky, the code you wrote will get stuck somewhere, but issuing a ^C from your REPL will have no effect because the problematic code runs in another thread or on another machine. The forced halting happens at the wrong level of abstraction too.

These are serious obstacles because your only recourse is to bisect your code by commenting out portions of it just to identify where the problem arises. I personally have resorted to writing my own half-baked core.async debugger, implementing instrumentation of core.async primitives gradually, as I need them.

Having said that, I don't think this is a fatal flaw of inversion of control. In fact, looking at the problem closely, I don't think the root issue is that these frameworks come with their own black-box execution systems. They are not really black boxes - as shown by the stack traces they produce, which give a clear picture of their internals - they are grey boxes, leaking info about one execution level into another. And this happens because these frameworks (talking about core.async specifically; maybe this isn't the case with Rama) do not, but should, come with their own exception system to handle errors and forced interruption. Lacking these facilities, they fall back on spitting out a trace about the executor instead of the executed process.

What does implementing a new exception system entail?

Case 1: your IoC framework does not modify the shape of execution - it's still a call-tree and there is a unique call-path leading to the error point - but it changes how execution happens, for instance by dislocating the code across different machines/threads. Then the goal is to aggregate the sparse code points that constitute the call-path at the framework's abstraction level. You'll deal with "synthetic exceptions" that still have the shape of a classical exception with a stack of function calls, except that these calls are in succession only in the framework's semantics; at a lower level, they are not.

Case 2: the framework also changes the shape of execution. You're not dealing with a mere call-tree anymore; you're dealing with a dataflow, a DAG. There is no longer a single call-path up to the error point, but potentially many. You need to replace the stack in your exception type with a graph-shaped trace, in addition to handling the sparse code-point aggregation of case 1.

To summarize: aggregation, to put in succession stack-trace elements that are distant one abstraction level lower and to hide parts of the code that are not relevant at this level; and new exception types, to account for different execution shapes.

In addition to these two requirements, you need to find a way to stitch different exception types together, to bridge the gap between the executor process and the executed process, as well as between the executed process and the callbacks/continuations/predicates the user may provide using the native language's execution semantics.

[1] https://en.wikipedia.org/wiki/Inversion_of_control
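As plain data, a case-1 "synthetic exception" could look something like this. This is a hypothetical shape, not an existing library; the op names and line numbers are invented:

```clojure
;; Hypothetical: an error value whose trace is a vector of framework-level
;; steps gathered from different threads/machines, ordered by the
;; framework's semantics rather than by any single JVM call stack.
(def synthetic-ex
  {:message         "take on closed channel"
   :framework-trace [{:op 'pipeline :stage :parse :line 42}
                     {:op 'go-loop  :step  :take  :line 17}]
   :executor-cause  nil})   ; optional link to the low-level executor exception

(defn render-trace
  "Print the trace at the framework's level of abstraction."
  [{:keys [message framework-trace]}]
  (println message)
  (doseq [frame framework-trace]
    (println "  at" (:op frame) (dissoc frame :op))))
```

The point is that `render-trace` shows the executed process's steps, while the JVM exception (if any) is demoted to a linked cause instead of being the whole story.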

replies(2): >>educti+5T >>crypto+Kk3
14. educti+5T[view] [source] [discussion] 2024-10-14 18:04:26
>>Valent+IJ
Yes, core.async is CSP-style async (not to be confused with the CPS programming style, which this article is about) and there is a learning curve, particularly as the go macro cannot see across function boundaries. (Some of Rich Hickey's videos on it give an overview similar to what you wrote above.)

My confusion was on the OP's statement about "sigils": "if you use the wrong sigil at the wrong place, you are dead in the water."

So don't use the wrong sigil? There are all of two of them; I think OP means the parking take and blocking take macros. One is used inside go blocks and one outside. That was the easy part. The hard part was wrapping my head around how to efficiently program within the constraints imposed by core.async. But the machinery of how to do things (macros, functions) was very simple and easy to learn. You basically just need to learn "go", "<!" and "<!!". Eventually you may need ">!", "alts!", and "chan".
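For reference, the two "sigils" side by side in a minimal sketch (requires the org.clojure/core.async dependency):

```clojure
(require '[clojure.core.async :refer [go chan >! <!!]])

(let [c (chan)]
  ;; >! is a parking put: legal only inside a go block's lightweight thread.
  (go (>! c 42))
  ;; <!! is a blocking take: for ordinary threads, e.g. at the REPL.
  (<!! c))
;; => 42
```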

replies(2): >>Valent+TV >>kimi+tk2
15. Valent+TV[view] [source] [discussion] 2024-10-14 18:18:33
>>educti+5T

    (defn test-dbg7 [] ;; test buffers
        (record "test-dbg.svg"
                (let [c ^{:name "chan"} (async-dbg/chan 1)]
                  ^{:name "thread"}
                  (async-dbg/thread
                    (dotimes [n 3]
                      ^{:name "put it!"} (async-dbg/>!! c n))
                    ;; THE BUG IS HERE. FORGOT TO CLOSE GODAMNIT
                    #_(async-dbg/close! c))
                  (loop [x (async-dbg/<!! c)]
                    (when x
                      (println "-->" x)
                      (recur ^{:name "take it!"} (async-dbg/<!! c)))))))
The code above produces the following before hanging:

    --> 0
    --> 1
    --> 2
https://pasteboard.co/L4WjXavcFKaM.png

In this test case, everything sits nicely within the same let statement, but these puts and reads to the same channel could be in different source files, making the bug hard to track.

Once the bug is corrected the sequence diagram should look like this:

https://pasteboard.co/CCyGZKUUkVFL.png

replies(1): >>educti+a21
16. educti+a21[view] [source] [discussion] 2024-10-14 18:59:12
>>Valent+TV
Ya, I also needed some time to wrap my head around async programming, but OP was talking about "use[ing] the wrong sigil at the wrong place" - that's not your stumbling block here: you forgot to close the channel, and you have a loop statement that by design reads eternally from the channel, so as long as the channel is open you're going to "hang". That doesn't have anything to do with mixing up "sigils"; it's just that async programming has unique challenges.
17. kimi+7h2[view] [source] [discussion] 2024-10-15 06:15:26
>>educti+UG
Enough to keep wondering if in this case it's <! or <<!, and whether I'd be better off with a dead-stupid, surprise-free thread pool.
replies(1): >>educti+GK8
18. kimi+tk2[view] [source] [discussion] 2024-10-15 06:49:14
>>educti+5T
I am with you on this - it's not impossible, but is it maintainable? What if I break a leg? And when something blocks and you don't know why, who can debug it? That's why we went for a different approach.

The problem with core.async is that it is an excellent PoC, but it does not actually solve the underlying problem, which is: "Hey! I want a new thread here! And I want it cheap." Project Loom solves it. Of course, the problem is not something that could be solved within the land of bytecode.
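For comparison, the Loom version of "a cheap thread" from Clojure is just plain Java interop (requires JDK 21+; the computation here is a made-up placeholder):

```clojure
;; Project Loom virtual threads: no macros, no parking/blocking split;
;; blocking inside a virtual thread is cheap.
(def result (promise))

(let [t (Thread/startVirtualThread
          #(deliver result (+ 1 2)))]   ; any blocking code is fine here
  (.join t))

@result
;; => 3
```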

19. crypto+Kk3[view] [source] [discussion] 2024-10-15 15:19:53
>>Valent+IJ
In other words: you don't get a stack trace, so it's hard to debug. It's like writing one big function so that your stack traces tend to be very shallow, and then the only clue to what's going on is the _line number_ where something goes wrong. But that's not so bad, is it? You get to know where in the processing of something you were, and that's a pretty good clue. You'll want to have an option to emit traces, sure -- that will be a lot like a stack trace, but better. And you might want to have traces held on to speculatively and then throw them away at select points (like when you're done processing a request w/o exceptions).
20. educti+GK8[view] [source] [discussion] 2024-10-17 15:03:58
>>kimi+7h2
I think you mean <! and <!! - the first one is the parking take for use inside go blocks (lightweight threads) and the second is the blocking take for use outside them. I have sometimes wondered if they should have just called them, like, "take!" and "take!!", or even better "take-parking" and "take-blocking". Personally, though, learning them was not hard, compared to the whole model of async and the rules around go blocks.

I haven't heard complaints about the thread pool before; I thought it just matched your number of cores by default but could be configured. I do know that if you do blocking takes (<!!) where you're supposed to do parking takes (<!), the lightweight threads block the entire parent "real" thread and you can get thread exhaustion - maybe it was that?
