zlacker

The bane of my existence: Supporting both async and sync code in Rust

submitted by lukast+(OP) on 2024-01-19 22:00:01 | 183 points 141 comments
[view article] [source] [links] [go to bottom]
replies(26): >>fireyn+6a >>biomcg+7b >>andrew+hc >>gavinh+tc >>bvrmn+ce >>rufius+1f >>the__a+kf >>Laaas+Hf >>toolto+sh >>xedrac+zh >>bruce3+Ko >>doakes+nq >>shpx+Uq >>darren+js >>ndesau+rs >>winrid+Hs >>SethML+dv >>anon29+fC >>Arnavi+3F >>eximiu+2G >>eviks+vK >>nyanpa+UL >>zubair+IN >>Too+UU >>palmfa+HY >>nullde+gw4
1. fireyn+6a[view] [source] 2024-01-19 22:53:16
>>lukast+(OP)
I may be the outlier but sync and async are both tools in the tool belt and ignoring one or the other is mostly silly?
replies(3): >>Yujf+ob >>topspi+Ac >>omgint+Fd
2. biomcg+7b[view] [source] 2024-01-19 22:59:36
>>lukast+(OP)
From the article:

> The problem - Well, the thing is that features in Rust must be additive: “enabling a feature should not disable functionality, and it should usually be safe to enable any combination of features”.

Isn't the real problem that async Rust colors all the functions red?

◧◩
3. Yujf+ob[view] [source] [discussion] 2024-01-19 23:00:50
>>fireyn+6a
What does this have to do with the article? The problem is for libraries: they want to support both sync and async use cases, without having to think about it, so that the user of the library can decide what is right for them.
4. andrew+hc[view] [source] 2024-01-19 23:05:58
>>lukast+(OP)
I don’t know how different it is from Rust, but in javascript supporting async and sync is smooth as a Swiss watch.

In Python it takes more thinking and structure because async isn’t built in deeply like it is with javascript.

Even so, async versus sync doesn’t feel like a nightmare in either.

Having said that, it depends entirely on how much experience you have with async programming.

Async programming is a mind bender that will throw you completely until it becomes natural. Synchronous programming feels natural and logical and async requires a completely different mental model.

I can understand why anyone forced to do async would hate it. It’s something you have to want to do.

replies(2): >>sgbeal+Xi >>matt_k+cj
5. gavinh+tc[view] [source] 2024-01-19 23:06:42
>>lukast+(OP)
I hope that someday, we can have a Rust-like language without async.

Bonus points if it has the ability for users to define static analysis a la borrow checking.

replies(2): >>andrew+Hd >>nemoth+nk
◧◩
6. topspi+Ac[view] [source] [discussion] 2024-01-19 23:08:00
>>fireyn+6a
This story is about not ignoring either sync or async...

It's great that the work finally led to both being supported. The cynic in me wonders if it's also async runtime agnostic, and not just Tokio. If that's possible.

◧◩
7. omgint+Fd[view] [source] [discussion] 2024-01-19 23:13:52
>>fireyn+6a
My friend, did you jump to conclusions here? ;)
replies(1): >>fireyn+nF
◧◩
8. andrew+Hd[view] [source] [discussion] 2024-01-19 23:13:54
>>gavinh+tc
Async programming is beautiful. It’s the easiest and most natural way to do multiple things at the same time in single threaded code.

Async makes things possible that are hard or impossible to do with sync programming.

It’s a real game changer for python especially. Can't comment on Rust; hopefully its implementation is smooth.

replies(3): >>Twenty+6f >>nsm+mf >>bvrmn+Xf
9. bvrmn+ce[view] [source] 2024-01-19 23:16:13
>>lukast+(OP)
Sync/async mismatch is a curse of a modern development. Countless hours are spent to make things compatible in a diverse ecosystem.

I maintain a simple abstract middleware caching library for Python, and it was a tricky problem to support every combination of sync/async app code and sync/async caching backends after async was introduced in Python. In the end the answer was an automatic runtime AST transformation for all cases. Ugh. I use the same approach in a validation library to parse requests for different web frameworks. But it's a specific case and can't be generalized.

It's the opposite of a fun situation.

10. rufius+1f[view] [source] 2024-01-19 23:21:32
>>lukast+(OP)
The time I wrote the most Rust was the couple of years before Rust's async/await settled. I worked on what is now a large code base that processes a lot of data quickly.

It was multi-threaded but no async/await. When I tried to start converting things over, it was really painful. I have no idea where it landed, but I remember coming away from it thinking that I'd prefer to just not deal with the async code in its current state.

I mostly write Go now and haven't had to think about this. Sounds like it's still painful.

◧◩◪
11. Twenty+6f[view] [source] [discussion] 2024-01-19 23:21:56
>>andrew+Hd
Is it really though? All I personally care for when it comes to "doing multiple things at the same time" is frankly Rust's scoped threads API: Create N threads, have them perform some computations, then join them at the end of the scope.
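The pattern described here can be sketched with std's scoped threads (`std::thread::scope`, stable since Rust 1.63): spawn N threads over borrowed data, then join them when the scope ends. A minimal sketch:

```rust
use std::thread;

fn main() {
    let data = vec![1u64, 2, 3, 4, 5, 6, 7, 8];
    let (left, right) = data.split_at(data.len() / 2);

    // Spawn N threads, let each compute over a borrowed slice,
    // and join them before the scope ends.
    let (a, b) = thread::scope(|s| {
        let ha = s.spawn(|| left.iter().sum::<u64>());
        let hb = s.spawn(|| right.iter().sum::<u64>());
        (ha.join().unwrap(), hb.join().unwrap())
    });

    assert_eq!(a + b, 36);
    println!("sum = {}", a + b);
}
```

Because the scope guarantees the threads finish before `data` goes out of scope, the closures can borrow local data with no `Arc` or `'static` bounds.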

How is this not more natural than creating various state machines and passing around all sorts of weird highly abstract Future-objects?

replies(2): >>andrew+Zf >>sunsho+Gj
12. the__a+kf[view] [source] 2024-01-19 23:23:07
>>lukast+(OP)
I run into this sometimes for `std` rust libraries; generally involving networking. Not directly mentioned in the article, but I prefer to wrap a `block_on` in a thread, and use `std::sync::mpsc` or similar, as required, to manage state.

On embedded, I make my own libraries for everything; the open-source community has gone all-in on either an async or a typestate/generic API. I don't see this changing any time soon, but maybe later down the road. I feel like the only one who doesn't like either.
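The "block_on in a thread" pattern might be sketched as below: a worker thread owns the async side, with `std::sync::mpsc` channels in front of it. The request/response types are invented for illustration, and the worker body stands in for where a real `block_on` call on an async client would go:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type for illustration.
enum Request {
    Get(String),
    Shutdown,
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    let (resp_tx, resp_rx) = mpsc::channel::<String>();

    // Worker thread: in real code this is where you'd park an async
    // runtime and call `block_on` per request (or drive one big future).
    let worker = thread::spawn(move || {
        while let Ok(req) = req_rx.recv() {
            match req {
                Request::Get(url) => {
                    // Stand-in for e.g. `runtime.block_on(client.get(&url))`.
                    let body = format!("response for {url}");
                    resp_tx.send(body).unwrap();
                }
                Request::Shutdown => break,
            }
        }
    });

    // Synchronous caller: plain blocking send/recv, no async in sight.
    req_tx.send(Request::Get("spotify/me".into())).unwrap();
    let body = resp_rx.recv().unwrap();
    assert_eq!(body, "response for spotify/me");

    req_tx.send(Request::Shutdown).unwrap();
    worker.join().unwrap();
    println!("{body}");
}
```

The async runtime stays an implementation detail of the worker thread, so callers see a purely synchronous API.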

replies(2): >>bpye+Rf >>MrBudd+201
◧◩◪
13. nsm+mf[view] [source] [discussion] 2024-01-19 23:23:20
>>andrew+Hd
Async/await is a _concurrency_ mechanism, and concurrency is certainly naturally useful. However, async/await is certainly not the best implementation of concurrency, except in the narrow design space Rust has chosen. If you have a language with a runtime, green threads/fibers/coroutines along with a choice mechanism are really the way to go. It trades off some performance for much better ergonomics. Go, Java, and Racket all use this. It is unfortunate that Python picked async/await.
replies(1): >>WaxPro+8g
14. Laaas+Hf[view] [source] 2024-01-19 23:25:54
>>lukast+(OP)
Make two crates? rspotify-sync, rspotify-async?

Couldn't you perhaps make an async runtime that isn't async and just blocks? That would let you keep only the async interface.
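For what it's worth, a "runtime that isn't async and just blocks" can be sketched in std-only Rust (assuming Rust 1.51+ for `std::task::Wake`): poll the future on the current thread and park between polls. This is roughly what crates like `pollster` or `futures::executor::block_on` do:

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread driving the future.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// "An async runtime that just blocks": poll the future on the current
// thread, parking whenever it returns Pending.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    let sum = block_on(add(2, 3));
    assert_eq!(sum, 5);
    println!("{sum}");
}
```

This keeps only the async interface, but note it serializes everything behind it: one future at a time, no concurrency for the caller.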

replies(3): >>anonym+ig >>herman+Ph >>martin+si
◧◩
15. bpye+Rf[view] [source] [discussion] 2024-01-19 23:26:39
>>the__a+kf
Do you have any open source embedded projects? I’m curious of the direction you’ve gone in.
replies(1): >>the__a+1g
◧◩◪
16. bvrmn+Xf[view] [source] [discussion] 2024-01-19 23:27:11
>>andrew+Hd
> It’s a real game changer for python especially

Python is so slow you gain nothing with async. I have plenty of cool stories about fixing crumbling products because they were async and all the connection load went directly into the DB instead of stopping at nginx/the backend.

You rarely need long-lived connections, and if you choose async for Python because it's fast, it's the wrong choice, because it's not.

BTW asyncio is an awful, bloated implementation with huge overhead around task management.

replies(1): >>andrew+2i
◧◩◪◨
17. andrew+Zf[view] [source] [discussion] 2024-01-19 23:27:22
>>Twenty+6f
Threads are far more heavy than async.

Async code maximizes the potential of the thread it’s running in.

replies(2): >>Twenty+Eh >>Animat+xv
◧◩◪
18. the__a+1g[view] [source] [discussion] 2024-01-19 23:27:31
>>bpye+Rf
Here's one: https://github.com/David-OConnor/stm32-hal

Device IC code generally ends as files in my firmware directly.

◧◩◪◨
19. WaxPro+8g[view] [source] [discussion] 2024-01-19 23:28:56
>>nsm+mf
I mostly agree, but I think you can still just pip install gevent and have greenlets as you always did - just not as language 'primitives'.

Personally I've never liked the syntactic sugar on top of function calls very much. If something is a promise or whatever, return me the promise and I can choose what to do with it.

◧◩
20. anonym+ig[view] [source] [discussion] 2024-01-19 23:29:34
>>Laaas+Hf
that solves the whole problem except the part where the consumer of the library has to deal with async
21. toolto+sh[view] [source] 2024-01-19 23:36:57
>>lukast+(OP)
The comment threads here discuss async in many different languages: Rust, Go, JavaScript, Python. Can somebody knowledgeable describe how they are subtly different between languages? Why are they painful in some but not in others?

Is there already an article that describes this well?

replies(4): >>vlovic+jj >>boustr+4k >>stevek+4s >>Too+HU
22. xedrac+zh[view] [source] 2024-01-19 23:38:16
>>lukast+(OP)
When writing some IO bound application, async Rust is great. Less great for libraries that want to support both async and sync without having to make an async runtime a dependency if you just want the sync interface. Mutually exclusive features are taboo unfortunately. One thing I really love about Haskell is you can make any function run in a green thread by simply composing it with the 'async' function. There's nothing special about it. This works much better than say Go, because Haskell is immutable.
replies(4): >>XorNot+ri >>jeremy+Qx >>anon29+VB >>fulafe+OV1
◧◩◪◨⬒
23. Twenty+Eh[view] [source] [discussion] 2024-01-19 23:38:41
>>andrew+Zf
> Threads are far more heavy than async

This sounds like an issue with the implementation of threads and scheduling in common operating systems, and I don't see how replicating all that functionality inside of each sufficiently large programming language can be taken seriously.

But also, you didn't respond to what I even said. You claimed that async is 'beautiful and natural'. I disagreed. You...fell back to a performance claim that's uncontroversially true, but irrelevant to what I said.

replies(2): >>coffee+Mt >>andrew+HW
◧◩
24. herman+Ph[view] [source] [discussion] 2024-01-19 23:39:44
>>Laaas+Hf
The article has a whole section about the problems they ran into with two crates:

https://nullderef.com/blog/rust-async-sync/#_duplicating_the...

◧◩◪◨
25. andrew+2i[view] [source] [discussion] 2024-01-19 23:41:01
>>bvrmn+Xf
>> Python is so slow you gain nothing with async.

It’s not necessarily about speed, though this statement above is flat out wrong.

async in Python allows you to build different types of applications. For example you can attach a task to stdout of another process and read and process it.

replies(1): >>bvrmn+Yj
◧◩
26. XorNot+ri[view] [source] [discussion] 2024-01-19 23:42:57
>>xedrac+zh
I'm not clear what your last sentence has to do with anything else? What does immutability have to do with Async/sync conversions?
replies(2): >>polyga+Ol >>throwa+4m
◧◩
27. martin+si[view] [source] [discussion] 2024-01-19 23:43:05
>>Laaas+Hf
It's very obvious you did not read the article. This is literally covered.
◧◩
28. sgbeal+Xi[view] [source] [discussion] 2024-01-19 23:46:33
>>andrew+hc
> in javascript supporting async and sync is smooth as a Swiss watch.

Ha! Integrating async I/O sources, namely the OPFS API, has been the single biggest development-time hit and runtime performance hit in sqlite.org's JS/WASM build of sqlite.

As soon as a single method is async, it cannot be hidden behind a synchronous interface because the async attribute is "viral," requiring the whole API above it to be async (which sqlite is not). We instead have to move all of the async I/O into its own worker and "fake" synchronous access to it using a complex proxy built from SharedArrayBuffer and the Atomics API. It's an abomination but it's the only(?) approach for making async functions behave fully synchronously which doesn't require third-party voodoo like Asyncify.

PS: the opposite - hiding sync stuff behind an async interface - is trivial. Hiding async behind a synchronous interface, however, is a tremendous pain in the bottom in JS.

replies(1): >>sgbeal+Ju1
◧◩
29. matt_k+cj[view] [source] [discussion] 2024-01-19 23:49:10
>>andrew+hc
> in javascript supporting async and sync is smooth as a Swiss watch

How would you solve the problem described in the article in JavaScript? I know `XMLHttpRequest` technically can be used to make synchronous requests, but the behavior is deprecated and is actively being hobbled by browsers.

◧◩
30. vlovic+jj[view] [source] [discussion] 2024-01-19 23:49:54
>>toolto+sh
They’re painful in all contexts because of function coloring. They’re slightly less painful in Go and JS because there’s a single opinionated async runtime built in. In Rust they have yet to standardize a bunch of stuff that would remove the pain:

- async traits in std instead of each runtime having its own

- a pluggable interface so that async code doesn’t have to specify what runtime it’s being built against

- potentially an effect system to make different effects composable more easily (e.g. error effects + async effects) without needing to duplicate code to accomplish composition

- keyword generics, the thing currently being explored instead of an effect system, to support composition of effects

With these fixes async Rust will get less annoying, but it’s slow, difficult work.

replies(3): >>yawara+RA >>anonym+fB >>pcwalt+yE
◧◩◪◨
31. sunsho+Gj[view] [source] [discussion] 2024-01-19 23:53:51
>>Twenty+6f
For the limited case of data-parallel computations as you're describing, you don't need async.

However, many real-world programs are inherently complicated state machines, where each individual state waits on one or more nested state machines to progress and/or complete -- async is often the most reasonable way to express that.

I wrote a post detailing a non-trivial production example use of async that might be interesting, regarding the cargo-nextest test runner that I wrote and maintain: https://sunshowers.io/posts/nextest-and-tokio/.

Nextest is not "web-scale" (c10k), since the number of tests one is concurrently running is usually bounded by the number of processor hyperthreads. So the fact that async tasks are more lightweight than OS threads doesn't have much bearing in this case. But even beyond that, being able to express state machines via async makes life so much simpler.

Over a year after switching nextest to using async, I have no regrets. The state machine has gotten around twice as complicated since then--for example, nextest now handles SIGTSTP and SIGCONT carefully--and async has coped admirably with it.

(There are currently somewhat serious issues with Rust async, such as a really messy cancellation story. But that's generally not what gets talked about on places like HN.)

replies(2): >>mplanc+7H >>Twenty+qX2
◧◩◪◨⬒
32. bvrmn+Yj[view] [source] [discussion] 2024-01-19 23:57:09
>>andrew+2i
> For example you can attach a task to stdout of another process and read and process it.

How many processes, max, have you handled in this fashion? The catch is: if you need thousands, asyncio has too much overhead and you need manual epoll. If fewer, threads are much easier to use and acceptable performance-wise.

◧◩
33. boustr+4k[view] [source] [discussion] 2024-01-19 23:57:54
>>toolto+sh
I think the biggest underlying difference is that Rust does not have a language runtime, whereas the other three you've listed do. Since the language runtime can preempt your code at any time, it becomes a lot easier to make async work - at the expense that now data races are easier to create.

I'm not going to pretend I'm an expert but would be happy if someone could expand further.

replies(1): >>hedgeh+HA
◧◩
34. nemoth+nk[view] [source] [discussion] 2024-01-20 00:00:11
>>gavinh+tc
>I hope that someday, we can have a Rust-like language without async.

That exists today, it's called Rust. You don't have to use async.

replies(1): >>kelsey+6m
◧◩◪
35. polyga+Ol[view] [source] [discussion] 2024-01-20 00:11:51
>>XorNot+ri
Immutability is great for multithreaded/async programs because every thread can rest assured knowing no other thread can sneakily modify objects that they are operating on currently.
replies(3): >>candid+Xl >>IlliOn+2x >>wredue+pD
◧◩◪◨
36. candid+Xl[view] [source] [discussion] 2024-01-20 00:12:57
>>polyga+Ol
Go can prevent this with the race detector among other things
replies(3): >>stouse+cm >>binary+Er >>omgint+vD
◧◩◪
37. throwa+4m[view] [source] [discussion] 2024-01-20 00:13:36
>>XorNot+ri
you can fearlessly run code on another thread if you're not worried it's going to cause a data race or mutate anything
replies(1): >>XorNot+Tm
◧◩◪
38. kelsey+6m[view] [source] [discussion] 2024-01-20 00:13:52
>>nemoth+nk
Until you want to use a library that requires async. Now you do.
replies(2): >>nemoth+et >>sgbeal+Ru1
◧◩◪◨⬒
39. stouse+cm[view] [source] [discussion] 2024-01-20 00:14:54
>>candid+Xl
Sometimes.
◧◩◪◨
40. XorNot+Tm[view] [source] [discussion] 2024-01-20 00:19:44
>>throwa+4m
This is very much not free though. Predicting memory usage in Haskell programs is notoriously tricky (and all the memory copies aren't free either).
replies(3): >>c-cube+ho >>bippih+Pt >>whatev+Uk2
◧◩◪◨⬒
41. c-cube+ho[view] [source] [discussion] 2024-01-20 00:33:03
>>XorNot+Tm
It's also the case with OCaml, Elixir, Clojure, etc. Non-lazy languages can also have a rich collection of immutable data structures and have more predictable memory usage than Haskell. On the other hand, Go doesn't have a culture or features that encourage immutability.
42. bruce3+Ko[view] [source] 2024-01-20 00:37:06
>>lukast+(OP)
Rewrite it in Erlang
replies(2): >>Britto+Zr >>juped+DR
43. doakes+nq[view] [source] 2024-01-20 00:52:30
>>lukast+(OP)
What's the advantage of async if you immediately call await?
replies(3): >>maleld+3r >>sodali+sr >>empath+nw
44. shpx+Uq[view] [source] 2024-01-20 00:57:23
>>lukast+(OP)
> Another possible way to fix this is, as the features docs suggest, creating separate crates. We’d have rspotify-sync and rspotify-async, and users would just pick whichever crate they want as a dependency

A nitpick, but please, if you do this for your library, name the sync one just the name and put "-async" in the name of the other one, so "rspotify" and "rspotify-async" instead of "rspotify-sync" and "rspotify-async". This mirrors the way the function definition keywords are "fn" and "async fn" instead of "sync fn" and "async fn". Most simple use cases don't need async, and the extra typing and extra reading is annoying.

replies(3): >>bbkane+9s >>skybri+Dv >>fshbbd+Ox
◧◩
45. maleld+3r[view] [source] [discussion] 2024-01-20 00:58:47
>>doakes+nq
From the library perspective, they're yielding control to the runtime. This means the user can do whatever they want with the future.
◧◩
46. sodali+sr[view] [source] [discussion] 2024-01-20 01:03:44
>>doakes+nq
If you're I/O bound, it allows for easy and efficient utilization of resources by enabling other tasks to run while waiting for I/O.

I'll give you a real world example. I wrote some code that listened to a websockets URL from thousands of Reddit posts - specifically, the one that sends new messages on new comments - so I could see a stream of Reddit comments for any given sub.

Implemented it using Tungstenite (synchronous) and it created thousands of threads to listen, and used enormous chunks of memory (several GB) for the stack space + memory reading for every single WS stream.

Implemented it using Tokio_tungstenite, the async alternative, and it used a handful of MB of memory and barely any CPU to listen to thousands of WS servers.

replies(1): >>doakes+Zt
◧◩◪◨⬒
47. binary+Er[view] [source] [discussion] 2024-01-20 01:06:25
>>candid+Xl
The race detector needs to actually encounter a race in order to detect it, it's not a complete static analysis.
◧◩
48. Britto+Zr[view] [source] [discussion] 2024-01-20 01:10:01
>>bruce3+Ko
There is this:

https://lunatic.solutions/

◧◩
49. stevek+4s[view] [source] [discussion] 2024-01-20 01:10:36
>>toolto+sh
I gave two talks about this:

* An overview of terminology, and a description of how various languages fit into the various parts of the design space https://www.infoq.com/presentations/rust-2019/

* A deep dive into what Rust does https://www.infoq.com/presentations/rust-async-await/

◧◩
50. bbkane+9s[view] [source] [discussion] 2024-01-20 01:11:08
>>shpx+Uq
I think the explicit -sync/-async is nice.
51. darren+js[view] [source] 2024-01-20 01:12:14
>>lukast+(OP)
I like the clear separation of sync -> async in rust but the lack of first-party lang support for async stuff has been really annoying. Its getting much better though - cool to see async traits in recent release
52. ndesau+rs[view] [source] 2024-01-20 01:12:36
>>lukast+(OP)
Forget "what color is your function?"

"What color is your module?"

53. winrid+Hs[view] [source] 2024-01-20 01:14:26
>>lukast+(OP)
nimlang solves this with multisync
replies(1): >>nullde+Uv4
◧◩◪◨
54. nemoth+et[view] [source] [discussion] 2024-01-20 01:19:48
>>kelsey+6m
Which part of the standard library forces me to use async? Or is the complaint that you can't force other random developers to program in the way you prefer?
replies(1): >>kelsey+sv
◧◩◪◨⬒⬓
55. coffee+Mt[view] [source] [discussion] 2024-01-20 01:24:52
>>Twenty+Eh
When you're reading code, it's really really nice to be able to tell if a line of code blocks. Other than that `Thread.new(_ => doIO()).join()` is pretty much the same as `await doIO()`.
◧◩◪◨⬒
56. bippih+Pt[view] [source] [discussion] 2024-01-20 01:25:55
>>XorNot+Tm
it's not free, but it is explicit. it's nice for the code to explicitly define how the memory is being modified. there are copies in java too, you just have to know how the runtime works to know what each line does.
replies(1): >>ric2b+mw
◧◩◪
57. doakes+Zt[view] [source] [discussion] 2024-01-20 01:27:12
>>sodali+sr
But at what point are you calling await?

If I were using the author's library, I would call `.some_endpoint(...)` and that would return a `SpotifyResult<String>`, so I'm struggling to understand why `some_endpoint` is async. I could see if two different threads were calling `some_endpoint` then awaiting would allow them to both use resources, but if you're running two threads, doesn't that already accomplish the same thing? I'm pretty naive to concurrency.

replies(3): >>sodali+su >>roywig+0w >>fshbbd+vy
◧◩◪◨
58. sodali+su[view] [source] [discussion] 2024-01-20 01:31:12
>>doakes+Zt
I was calling await on ws_stream.next() - awaiting for there to be a new message.

Code: https://gist.github.com/sigaloid/d0e2a7eb42fed8c2397fbf84239...

In the example you give, yes, it's just sequential and everything relies on a previous thing. But say you are making a spotify frontend - you want to render a list of playlists, the profile info, etc - you call await on all of them and they can complete simultaneously.

59. SethML+dv[view] [source] 2024-01-20 01:37:26
>>lukast+(OP)
Nice! This is similar to the solution here: https://github.com/python-trio/unasync
◧◩◪◨⬒
60. kelsey+sv[view] [source] [discussion] 2024-01-20 01:40:16
>>nemoth+et
Can you try steel manning?
replies(1): >>filled+gT
◧◩◪◨⬒
61. Animat+xv[view] [source] [discussion] 2024-01-20 01:40:45
>>andrew+Zf
How many connections to Spotify did you want to keep open simultaneously?
replies(1): >>conrad+Ie1
◧◩
62. skybri+Dv[view] [source] [discussion] 2024-01-20 01:42:11
>>shpx+Uq
Yes, a one-letter difference seems easy to miss.
◧◩◪◨
63. roywig+0w[view] [source] [discussion] 2024-01-20 01:46:05
>>doakes+Zt
If you are only ever doing one thing at a time, then async doesn't do you any good, which is why people like to use sync libraries when they can't make use of it, and why libraries might want to provide a sync and an async version.

Async is useful when you want to have a bunch of things happening (approximately) "at the same time" on a single thread.

With async you can await on two different SpotifyResults at the same time without multithreading. When each one is ready, the runtime will execute the remainder of the function that was awaiting. This means the actual HTTP requests can be in flight at the same time.
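To make this concrete, here is a std-only sketch of awaiting two results "at the same time" on one thread: a hand-rolled two-future join (in real code you'd reach for `tokio::join!` or `futures::join!`) driven by a minimal `block_on`. The two async fns are hypothetical stand-ins for API calls:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Minimal single-threaded executor: poll, park while Pending.
struct ThreadWaker(Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) { self.0.unpark(); }
}
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

// A hand-rolled "join": polls both futures each time it is polled, so
// both can make progress without multithreading.
struct Join2<A: Future, B: Future> {
    a: Pin<Box<A>>,
    b: Pin<Box<B>>,
    ra: Option<A::Output>,
    rb: Option<B::Output>,
}
impl<A: Future, B: Future> Future for Join2<A, B> {
    type Output = (A::Output, B::Output);
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Safe: the fields are already boxed and never moved out while pinned.
        let this = unsafe { self.get_unchecked_mut() };
        if this.ra.is_none() {
            if let Poll::Ready(v) = this.a.as_mut().poll(cx) { this.ra = Some(v); }
        }
        if this.rb.is_none() {
            if let Poll::Ready(v) = this.b.as_mut().poll(cx) { this.rb = Some(v); }
        }
        match (this.ra.take(), this.rb.take()) {
            (Some(a), Some(b)) => Poll::Ready((a, b)),
            (ra, rb) => { this.ra = ra; this.rb = rb; Poll::Pending }
        }
    }
}

// Hypothetical stand-ins for two independent API calls.
async fn playlists() -> &'static str { "playlists" }
async fn profile() -> &'static str { "profile" }

fn main() {
    let join = Join2 {
        a: Box::pin(playlists()),
        b: Box::pin(profile()),
        ra: None,
        rb: None,
    };
    let (p, q) = block_on(join);
    assert_eq!((p, q), ("playlists", "profile"));
    println!("{p}, {q}");
}
```

With real HTTP futures, both requests would be in flight during the window where each is `Pending`, which is where the latency win comes from.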

replies(1): >>doakes+jC
◧◩◪◨⬒⬓
64. ric2b+mw[view] [source] [discussion] 2024-01-20 01:51:12
>>bippih+Pt
Copies in haskell are not explicit at all, though?
◧◩
65. empath+nw[view] [source] [discussion] 2024-01-20 01:51:15
>>doakes+nq
If you're not using multiple tasks, there's no advantage. If you are running multiple tasks, it passes control back to the executor.
◧◩◪◨
66. IlliOn+2x[view] [source] [discussion] 2024-01-20 01:56:59
>>polyga+Ol
How does Haskell deal with access to shared resources that are mutable by nature, like the file system or the outside world?

(An honest question; I'm starting to think I'd like to learn more about this language)

replies(2): >>andyfe+kz >>whatev+ak2
◧◩
67. fshbbd+Ox[view] [source] [discussion] 2024-01-20 02:04:12
>>shpx+Uq
Sorry to pick on this comment in particular, but it being the top voted on the post seems like a good illustration of bikeshedding.
◧◩
68. jeremy+Qx[view] [source] [discussion] 2024-01-20 02:04:45
>>xedrac+zh
All threads in Haskell are green. Async just gives you another way to get return values from threads without having to use MVars or Chans.
◧◩◪◨
69. fshbbd+vy[view] [source] [discussion] 2024-01-20 02:11:25
>>doakes+Zt
A common use case for async is if you are implementing a web service with an API. Suppose your API uses Spotify’s API. Your server can handle many requests at once. It would be nice if all of them can call Spotify’s API at the same time without each holding a thread. Tokio tasks have much lower overhead than OS threads. Your N requests can all await at once and it doesn’t take N threads to do it.
◧◩◪◨⬒
70. andyfe+kz[view] [source] [discussion] 2024-01-20 02:22:19
>>IlliOn+2x
AFAIK they tend to operate through the IO monad, which serves to order read/write events and mark parts of your code as interacting with the global mutable state that lives outside your program.

So the mutable (or is it “volatile”?) environment is there, but you explicitly know when and where you interact with it.

◧◩◪
71. hedgeh+HA[view] [source] [discussion] 2024-01-20 02:40:20
>>boustr+4k
In the early days of Rust there was a debate about whether to support "green threads" and in doing that require runtime support. It was actually implemented and included for a time but it creates problems when trying to do library or embedded code. At the time Go for example chose to go that route, and it was both nice (goroutines are nice to write and well supported) and expensive (effectively requires GC etc). I don't remember the details but there is a Rust RFC from when they removed green threads:

https://github.com/rust-lang/rfcs/blob/0806be4f282144cfcd55b...

◧◩◪
72. yawara+RA[view] [source] [discussion] 2024-01-20 02:41:51
>>vlovic+jj
There's no function colouring in Go. Async functions don't have any special colour.
replies(1): >>SkiFir+Ak1
◧◩◪
73. anonym+fB[view] [source] [discussion] 2024-01-20 02:45:24
>>vlovic+jj
Go doesn't have function coloring. Greenlet, Lua, and libco solve the problem without function coloring by adding a stack-switching primitive. Zig solves the problem without forcing function coloring on all consumers of functions by having the compiler monomorphize functions based on whether they end up being able to suspend.
replies(1): >>lifthr+ME
◧◩
74. anon29+VB[view] [source] [discussion] 2024-01-20 02:53:51
>>xedrac+zh
> This works much better than say Go, because Haskell is immutable.

The immutability has nothing to do with async. Async is for IO threads. If you want pure parallelism you use `par`. But Haskell IO threads (forkIO and friends) are also green when run with GHC.

replies(1): >>xedrac+dG
75. anon29+fC[view] [source] 2024-01-20 02:57:07
>>lukast+(OP)
The main issue with async/sync differentiation in all languages is that all languages have a blessed category / monad in which they execute, and the async support (or whatever other execution schemes) are 'bolted on'. Haskell makes this clear in its type system, and while the monadic functions are much more generalizable than async/await (which seems to be the one monad every other language wants to implement for some reason), they're also not 'complete'... there are functions in the prelude with obvious monadic equivalents that are not part of the prelude. Nevertheless, even if they were, you cannot interchange them with the blessed 'pure' monad (equivalent to the Identity newtype), which is the Kleisli monad of the Hask category.

There has been work to enable generic compilation of all Haskell code to arbitrary categories (see Conal Elliot's compiling with categories) but unfortunately the approach has not caught on.

It would actually be an interesting design space to support arbitrary categories as a basic design principle of a novel programming language. Obviously at some level something would need to be the 'special' one that can be compiled to machine code, but if the complexity could be hidden / abstracted that would indeed be interesting.

◧◩◪◨⬒
76. doakes+jC[view] [source] [discussion] 2024-01-20 02:57:30
>>roywig+0w
I guess I'm just confused that it's labeled async (and does async work) but is actually blocking when invoked (or at least doesn't return until await finishes).

If I'm awaiting on two different results, I have to invoke them in parallel somehow, right? What is that mechanism and why doesn't that already provide asynchrony? Like, if the method was sync, couldn't I still run it async somehow?

replies(3): >>mplanc+xF >>Too+MT >>SkiFir+r31
◧◩◪◨
77. wredue+pD[view] [source] [discussion] 2024-01-20 03:10:20
>>polyga+Ol
Immutability is, quite possibly, the dumbest “silver bullet” solution ever to be praised as a solution to anything.

Congratulations, nobody is going to sneakily update an object on you, but also, nobody knows about your updates either.

It’s not a worthwhile trade off given the massive extra work it causes.

replies(3): >>throwa+zN >>Fire-D+4Q >>whatev+wk2
◧◩◪◨⬒
78. omgint+vD[view] [source] [discussion] 2024-01-20 03:10:56
>>candid+Xl
Detectors detect, they don’t prevent. All detectors suffer misses.
◧◩◪
79. pcwalt+yE[view] [source] [discussion] 2024-01-20 03:24:28
>>vlovic+jj
For Go I'd say there's a single synchronous runtime built-in. People say that Go is async because the implementation of goroutines is async internally, but the implementation of threads on every OS is async internally too. The only real difference as far as sync/async is concerned† between goroutines and threads is that Go's implementation of goroutines is in userspace, while the implementation of OS threads is in kernel space. Both are equally asynchronous under the hood.

† Yes, there are other differences between goroutines and typical OS threads, such as stack sizes, but I'm only talking about I/O differences here.

◧◩◪◨
80. lifthr+ME[view] [source] [discussion] 2024-01-20 03:26:05
>>anonym+fB
A more accurate description would be that Go has a single function color, that is namely green. This distinction is important because, for example, C also has no function coloring problem only because it doesn't care about lightweight threading, i.e. its function color is always red. Only Zig's approach, and hopefully Rust's keyword generics if accepted, can be considered to have no function color.
replies(2): >>thiht+qS >>SkiFir+Rh1
81. Arnavi+3F[view] [source] 2024-01-20 03:28:17
>>lukast+(OP)
Another option is to implement your API in a sans-io form. Since k8s-openapi was mentioned (albeit for a different reason), I'll point out that its API gave you a request value that you could send using whatever sync or async HTTP client you wanted to use. It also gave you a corresponding function to parse the response, that you would call with the response bytes your client gave to you.

https://github.com/Arnavion/k8s-openapi/blob/v0.19.0/README....

(Past tense because I removed all the API features from k8s-openapi after that release, for unrelated reasons.)
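A sans-io API along these lines might look like the following sketch; all names are invented for illustration. The point is that the library only builds request values and parses response bytes, leaving the actual I/O to whatever sync or async client the caller prefers:

```rust
// Sans-io: describe the request, don't perform it.
// Hypothetical request description for illustration.
struct Request {
    method: &'static str,
    url: String,
}

// The library builds a request value...
fn list_playlists_request(user: &str) -> Request {
    Request {
        method: "GET",
        url: format!("https://api.example.com/users/{user}/playlists"),
    }
}

// ...and provides the matching parse step: the caller hands over
// whatever bytes its HTTP client (blocking or async) received.
fn parse_list_playlists_response(status: u16, body: &[u8]) -> Result<String, String> {
    if status == 200 {
        String::from_utf8(body.to_vec()).map_err(|e| e.to_string())
    } else {
        Err(format!("HTTP {status}"))
    }
}

fn main() {
    let req = list_playlists_request("alice");
    assert_eq!(req.method, "GET");
    assert!(req.url.contains("alice"));

    // The caller performs the request with any client it likes, then parses:
    let parsed = parse_list_playlists_response(200, b"[\"road trip\"]");
    assert_eq!(parsed.unwrap(), "[\"road trip\"]");
}
```

Since the library never touches a socket, the sync/async split (and the choice of runtime) disappears from its public API entirely.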

◧◩◪
82. fireyn+nF[view] [source] [discussion] 2024-01-20 03:34:02
>>omgint+Fd
I meant to distill it more: any time you are making a choice between async and sync, you are making the wrong choice!
◧◩◪◨⬒⬓
83. mplanc+xF[view] [source] [discussion] 2024-01-20 03:35:07
>>doakes+jC
This is sort of the core of the whole idea of concurrency. There are lots of good explanations online if you search for “concurrency vs parallelism.”

The gist is that while you await the result of an async function, you yield to the executor, which is then free to work on other tasks until whatever the await is waiting for has completed.

The group of tasks being managed by the executor is all different async functions, which all yield to the executor at various times when they are waiting for some external resource in order to make forward progress, to allow others to make progress in the meantime.

This is why people say it’s good for IO-bound workloads, which spend the majority of their time waiting for external systems (the disk, the network, etc)

84. eximiu+2G[view] [source] 2024-01-20 03:40:59
>>lukast+(OP)
`block_on` seems like the right balance of code duplication and overhead, imo.

It is a somewhat strange compile time jumble of libraries but a compile time overhead isn't so bad.

replies(1): >>junon+UG1
◧◩◪
85. xedrac+dG[view] [source] [discussion] 2024-01-20 03:45:01
>>anon29+VB
Async is definitely nicer when things are immutable. On modern CPUs, async green threads can easily be backed by different OS threads running in parallel on different CPU cores, making data races a real problem for many languages. Async does not guarantee that things will not be run in parallel, although you shouldn't rely on it for explicit parallelism.
replies(1): >>anon29+NJ1
◧◩◪◨⬒
86. mplanc+7H[view] [source] [discussion] 2024-01-20 03:56:19
>>sunsho+Gj
Not who you’re replying to, but this is great context, and I want to thank you for including it. As another heavy async user (for a network service that handles loads of requests and does loads of DB reads and writes), I am also a big fan of Rust’s async at scale. We’re currently in the process of seeing where we can get rid of async_trait with 1.75, which has not been particularly drop-in in many cases but which is still exciting.

Anyway, I have been meaning to try out nextest for our big honking monorepo workspace at work. The cargo test runner has always been essentially fine for our needs, but speeding up test execution in CI could be a huge win for us.

replies(1): >>sunsho+LJ
◧◩◪◨⬒⬓
87. sunsho+LJ[view] [source] [discussion] 2024-01-20 04:28:47
>>mplanc+7H
I'd love to hear how it goes!
88. eviks+vK[view] [source] 2024-01-20 04:36:57
>>lukast+(OP)
> Fixing maybe_async and adding _async and _sync suffixes to each endpoint in our library.

Can you not add a suffix to only one variant, e.g. "_sync"? As far as I understood, you just need different names.

replies(1): >>mst+l71
89. nyanpa+UL[view] [source] 2024-01-20 04:52:25
>>lukast+(OP)
Can you create separate modules for sync and async user-facing APIs?
◧◩◪◨⬒
90. throwa+zN[view] [source] [discussion] 2024-01-20 05:14:26
>>wredue+pD
Completely uninformed take. Some of the most impressive update notification systems are built off of pass-as-immutable runtimes (for example: phoenix live view + phoenix pubsub). Try implementing that in just about any other language. You will trip over yourself eight ways to hell

The whole idea of CQRS is to build separate (segregated) pathways for updates. Immutable passing plays extremely well with CQRS. The alternative is the complete clusterfuck that is two way data bindings (e.g. out of the box angularjs)

replies(1): >>tgv+wX
91. zubair+IN[view] [source] 2024-01-20 05:15:42
>>lukast+(OP)
I’ve had the same issue with async and await in JavaScript. These days I just err on the side of simplicity and make everything async
◧◩◪◨⬒
92. Fire-D+4Q[view] [source] [discussion] 2024-01-20 05:48:22
>>wredue+pD
Immutability frees the mind from so much baggage when developing that I'm always shocked it didn't become mainstream
◧◩
93. juped+DR[view] [source] [discussion] 2024-01-20 06:10:08
>>bruce3+Ko
but my rocketship emoji
◧◩◪◨⬒
94. thiht+qS[view] [source] [discussion] 2024-01-20 06:23:32
>>lifthr+ME
I don’t understand the difference between "single color" and "no color", can you explain? What makes Zig’s approach colorless?
replies(1): >>lifthr+wU
◧◩◪◨⬒⬓
95. filled+gT[view] [source] [discussion] 2024-01-20 06:36:16
>>kelsey+sv
I'm not them, but I don't think there's any general-purpose programming language in existence that prevents developers from implementing async runtimes and using them in their libraries.

So yes, if your whole reasoning is "other people might use async and then I won't be able to use their code", then you'll be waiting indefinitely for the magical programming language that's both fully featured for your work and does not have any portion of the ecosystem implemented in async code.

◧◩◪◨⬒⬓
96. Too+MT[view] [source] [discussion] 2024-01-20 06:44:43
>>doakes+jC
You might be awaiting only one result, but the one calling you may be awaiting both you and 100 other things that you are not aware of.
◧◩◪◨⬒⬓
97. lifthr+wU[view] [source] [discussion] 2024-01-20 06:58:11
>>thiht+qS
While the original use of "function colors" was purely syntactic [1], they can easily be remapped to cooperative vs. preemptive multitasking. This remapping is important because it changes the programmer's mental model.

For example, the common form of `await` calls implies cooperative multitasking, and people will have good reason to believe that no other task can affect your code between two `await` calls. This is not generally true (e.g. Rust), but it is indeed true for some languages like JS. Now consider two variants of JS, both with `await` removed, where one retains cooperative multitasking and the other allows preemptive tasks. They will necessarily demand different mental models, even though they are no longer syntactically distinguishable. I believe this distinction is important enough that they still have to be considered to have a function color, which is only uniform within a single language.

Zig's approach in comparison is often called "color-blind", because while it provides `async` and `await`, those keywords only change the return type to a promise (Zig term: async frame) and do not guarantee that it will do anything different. Instead, users are given the switch, so that most libraries are expected to work equally well regardless of that switch. You can alternatively think of this as follows: all Zig modules are parametrized via an implicit `io_mode` parameter, which affects the meaning of `async` and `await` and propagates to nested dependencies. There is definitely a color here, but it's no longer a function color because functions can no longer paint themselves. So I think it's reasonable to say this has no function color.

[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

replies(1): >>thiht+961
◧◩
98. Too+HU[view] [source] [discussion] 2024-01-20 07:00:06
>>toolto+sh
JS makes it easier because it was always single threaded and never had any sync IO to begin with.

This means, before async existed, any library doing IO had to be based on callbacks. Then came Promises, which are essentially glorified callbacks, and then came async, which can be seen as syntax sugar for Promises.

So you will never see synchronous code that depends on an asynchronous result. The concept of sync code waiting for something just never existed in JavaScript. Instead you wake up your sync functions with Promise.then()-callbacks and that same mechanism bridges async functions.

It’s also very rare to have compute heavy sync code in JS so there is rarely any need to run it multi threaded.

replies(1): >>penter+GX
99. Too+UU[view] [source] 2024-01-20 07:03:22
>>lukast+(OP)
The fear of increasing the binary size sounds like a self-imposed constraint. How bad is it to include both versions in same crate?
◧◩◪◨⬒⬓
100. andrew+HW[view] [source] [discussion] 2024-01-20 07:32:14
>>Twenty+Eh
There are no concurrency issues with async programming.

No race conditions, no inter thread coordination, no concerns about all the weird shit that happens when doing multi threading.

replies(2): >>dannym+Ma1 >>nvm0n2+UF1
◧◩◪◨⬒⬓
101. tgv+wX[view] [source] [discussion] 2024-01-20 07:47:10
>>throwa+zN
I think you both are referring to the same point: you can't update an immutable object, so you have to set up some mechanism to keep changes in sync.
replies(1): >>throwa+a01
◧◩◪
102. penter+GX[view] [source] [discussion] 2024-01-20 07:48:52
>>Too+HU
> The concept of sync code waiting for something just never existed in JavaScript.

Have you forgotten prompt() and friends?

103. palmfa+HY[view] [source] 2024-01-20 08:06:55
>>lukast+(OP)
That's a long road to avoid copy/paste and search/replace.
replies(1): >>nullde+Nv4
◧◩
104. MrBudd+201[view] [source] [discussion] 2024-01-20 08:25:42
>>the__a+kf
Nah same, I find the typestate pattern to be overprotective against bugs that usually get caught during development, and async Rust is just a killjoy overall.
◧◩◪◨⬒⬓⬔
105. throwa+a01[view] [source] [discussion] 2024-01-20 08:27:26
>>tgv+wX
Yeah, and update mechanisms are not created equal. two way data bindings suck because they elide the challenges of distributed consistency.

When you're immutable, you can still delete or replace data.

replies(1): >>wredue+sY1
◧◩◪◨⬒⬓
106. SkiFir+r31[view] [source] [discussion] 2024-01-20 09:17:40
>>doakes+jC
> I guess I'm just confused that it's labeled async (and does async work) but is actually blocking when invoked (or at least doesn't return until await finishes).

From the point of view of the `async` block/function, it is blocking, but from the point of view of the thread executing that `async` block/function it is not.

> If I'm awaiting on two different results, I have to invoke them in parallel somehow, right?

No, the whole point of `async` is having concurrency (i.e. multiple tasks running and interleaving) without necessarily using parallelism for it. Take a look at the `join` macro in the `futures` crate for example, it allows you to `.await` two futures while allowing both of them to make progress (two successive `.await`s would force the first one to end before the second one can start) and without spawning dedicated threads for them.

◧◩◪◨⬒⬓⬔
107. thiht+961[view] [source] [discussion] 2024-01-20 09:59:35
>>lifthr+wU
Interesting thinking, thanks for the details! I’ll have to look into Zig’s async mode, it seems pretty good but I’m wondering what are the drawbacks of this approach (and specifically, why would you ever set io_mode to "sync")
replies(1): >>anonym+4v3
◧◩
108. mst+l71[view] [source] [discussion] 2024-01-20 10:16:07
>>eviks+vK
I use quite a few promise/future based APIs that have a sync version and an async version and the async methods are called get_p or get_f or so, which feels like very little noise when I'm already typing 'await' in front of/after it.
replies(1): >>eviks+K91
◧◩◪
109. eviks+K91[view] [source] [discussion] 2024-01-20 10:46:09
>>mst+l71
That's why I thought the async names should be shorter, especially since it seemed async is the primary interface for this lib
◧◩◪◨⬒⬓⬔
110. dannym+Ma1[view] [source] [discussion] 2024-01-20 10:54:20
>>andrew+HW
One of the main reasons to use async in the first place is to distribute the work to all the cpu cores that you have. So you will still have all the concurrency issues with async in that case.

Only if you limited async to a single thread on one core (why would you do that?) could you avoid that.

replies(1): >>andrew+ih1
◧◩◪◨⬒⬓
111. conrad+Ie1[view] [source] [discussion] 2024-01-20 11:37:41
>>Animat+xv
Not related to Spotify, but managing http2 connections is much easier with async code than with sync, and http3 will be much the same. You can of course probably spawn a thread that handles these connections and use channels, but it's not going to be particularly pleasant to work with.

I have wanted to make concurrent API calls with the Spotify API before. One API call gives you a list of song IDs in a playlist; then you want to get all the song info for each ID. HTTP2 multiplexing would be much nicer than spawning 100 different HTTP1 connections

◧◩◪◨⬒⬓⬔⧯
112. andrew+ih1[view] [source] [discussion] 2024-01-20 11:57:02
>>dannym+Ma1
>> One of the main reasons to use async in the first place is to distribute the work to all the cpu cores that you have

Errrr..... no it's not. Your statement is flat out wrong. async is single threaded.

You're not really understanding async.

async is single threaded. If you want to maximise your cores then run multiple instances of a single threaded process using systemd or something, or use your application to launch multiple async threads.

You can, in python and probably in Rust, run an executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async, but that's not really ideal and it's certainly not the purpose of async.

replies(2): >>Twenty+BF1 >>dannym+nl5
◧◩◪◨⬒
113. SkiFir+Rh1[view] [source] [discussion] 2024-01-20 12:00:55
>>lifthr+ME
Last time I checked, Zig still had a subtle form of function coloring for function pointers. See for example https://github.com/ziglang/zig/issues/8907
◧◩◪◨
114. SkiFir+Ak1[view] [source] [discussion] 2024-01-20 12:22:42
>>yawara+RA
IMO saying there's no function coloring in a language ignores a lot of details.

In Go there's no function coloring because there are only async functions. That's why they don't get any special color, they are the only color. In Go you don't get to use sync functions, which creates problems e.g. when you need to use FFI, because the C ABI is the exact opposite and doesn't have function coloring because it only allows you to use sync functions.

Zig and Rust's async-generic initiative are a bit different in that they want to allow functions to be both sync and async at the same time. Ultimately there are still colors, but you don't have to choose one of them when you write a function. However IMO there are a lot of non-trivial problems to solve to get to that result.

Ultimately Go's approach works well enough, and usually better than other approaches, until you need to do FFI or you need to produce a binary without a runtime (e.g. if you need to program a microcontroller)

replies(3): >>gpdere+df2 >>65a+tu3 >>yawara+fG3
◧◩◪
115. sgbeal+Ju1[view] [source] [discussion] 2024-01-20 13:58:24
>>sgbeal+Xi
> It's an abomination but it's the only(?) approach for making async functions behave fully synchronously which doesn't require third-party voodoo like Asyncify.

PS: it's also limited to argument and result types which we can serialize into a byte array, i.e. its utility is more limited than direct use of async APIs is, as those can accept any argument type and resolve to any type of value. That's fine for the case of sqlite but it's not a general-purpose solution.

◧◩◪◨
116. sgbeal+Ru1[view] [source] [discussion] 2024-01-20 14:00:14
>>kelsey+6m
> Until you want to use a library that requires async. Now you do.

Now you do... have an incentive to write your own which is not async.

replies(1): >>kelsey+N42
◧◩◪◨⬒⬓⬔⧯▣
117. Twenty+BF1[view] [source] [discussion] 2024-01-20 15:07:29
>>andrew+ih1
`async` is neither multi-threaded nor single-threaded by default. It all depends on the underlying runtime.

Rust's Tokio runtime, for example, is multi-threaded by default and makes progress on several tasks at the same time by using multiple threads.

◧◩◪◨⬒⬓⬔
118. nvm0n2+UF1[view] [source] [discussion] 2024-01-20 15:09:40
>>andrew+HW
A common belief but not quite true. You can have race conditions and weird shit happening in async systems.

The usual way it happens is that you write some straight-line, non-suspending code inside an async function, and then later someone adds an await inside it. The await returns to the event loop, at which point an event arrives that you didn't expect. You now have control flow jumping somewhere else completely, possibly changing state in a way that you didn't anticipate. When the original function resumes, it sees something changed concurrently.

◧◩
119. junon+UG1[view] [source] [discussion] 2024-01-20 15:15:09
>>eximiu+2G
This is my take, too. Write async and then use block_on. It's worked for us just fine, albeit a little annoying. We made macros to help clean things up.
◧◩◪◨
120. anon29+NJ1[view] [source] [discussion] 2024-01-20 15:29:52
>>xedrac+dG
Haskell async runs in IO, though, in which mutability is allowed. Async itself is mutable.
◧◩
121. fulafe+OV1[view] [source] [discussion] 2024-01-20 16:32:17
>>xedrac+zh
I'm not sure it's worth it even for most IO bound applications. The first couple of IO bound examples that come to mind (an app doing bulk disk or network IO, eg sequential file access or bulk data transfer) would logically seem to work just as well without async since the bottleneck is the disk or network card or connection.

I'd guess it could be an advantage for high concurrency applications that are CPU bound, but could be made IO bound by optimizing the userspace code. But OS threads are pretty efficient and you can have zillions of them, so the async upside is quite bounded, so this niche would seem smallish.

◧◩◪◨⬒⬓⬔⧯
122. wredue+sY1[view] [source] [discussion] 2024-01-20 16:44:03
>>throwa+a01
Immutability “maybe” (and that’s a massive grain of salt, because this is not a specific thing I’ve ever worked on to say any different) having certain use cases where it works well is not the same thing as making literally every single object in your entire application immutable.

I agree that immutability is a tool. My issue with it is when you treat it as a rule.

◧◩◪◨⬒
123. kelsey+N42[view] [source] [discussion] 2024-01-20 17:12:07
>>sgbeal+Ru1
That's fair. The Rust community does love a rewrite.
◧◩◪◨⬒
124. gpdere+df2[view] [source] [discussion] 2024-01-20 18:11:47
>>SkiFir+Ak1
That's a bit of nonsense. As far as I know, all functions are sync in Go. The fact that they are implemented async in the runtime with a user-space scheduler is irrelevant (you could otherwise make the point that there are truly no sync functions).

If we call the go programming model async, the word has completely lost all meanings.

replies(1): >>SkiFir+mP2
◧◩◪◨⬒
125. whatev+ak2[view] [source] [discussion] 2024-01-20 18:35:10
>>IlliOn+2x
Haskell has full support for IO and mutability. It even has software transactional memory in its standard library.
◧◩◪◨⬒
126. whatev+wk2[view] [source] [discussion] 2024-01-20 18:37:14
>>wredue+pD
> Congratulations, nobody is going to sneakily update an object on you

I've seen Heisenbugs where some random code calls a setter on an object in a shared memory cache. The setter call was for local logic - so immutable update would've saved the day. It had real world impact too: We ordered a rack with a European plug to an American data center (I think a human in the loop caught it thankfully).

Also, how often do you even use mutability really? Like .. for what? Logic is easier to express with expressions than a Rube Goldberg loop mutating state imo.

replies(1): >>wredue+Sn4
◧◩◪◨⬒
127. whatev+Uk2[view] [source] [discussion] 2024-01-20 18:40:02
>>XorNot+Tm
Predicting memory usage in Haskell programs isn't actually tricky. At least, I stopped thinking so once I became an intermediate Haskeller. It's not that hard to have a mental model of the Haskell RTS, the same as you'd have for the JVM.

Having the ability to do so generally is tablestakes for being an intermediate professional programmer imo. In university, I had to draw diagrams explaining the state of the C stack and heap after each line of code. That's the same thing. And I was 19 lmao. It's not hard.

Maybe you're referring to space leaks? I've run into like 2 in my ten year Haskell career, and neither hit prod

I've actually seen more Go and Java space leaks/OoM bugs hit prod than Haskell - despite having fewer total years using those languages than Haskell! Nobody blamed the language for those though :/

◧◩◪◨⬒⬓
128. SkiFir+mP2[view] [source] [discussion] 2024-01-20 21:45:03
>>gpdere+df2
What is the difference between a sync and an async function for you then?
replies(1): >>gpdere+Wj3
◧◩◪◨⬒
129. Twenty+qX2[view] [source] [discussion] 2024-01-20 22:40:06
>>sunsho+Gj
Thank you, this was a very insightful blog post!
replies(1): >>sunsho+Zg3
◧◩◪◨⬒⬓
130. sunsho+Zg3[view] [source] [discussion] 2024-01-21 01:12:38
>>Twenty+qX2
No problem! I wasn't totally convinced of the value of async myself, before I went through the exercise of building something that is in many ways more complicated than most web services.
◧◩◪◨⬒⬓⬔
131. gpdere+Wj3[view] [source] [discussion] 2024-01-21 01:43:06
>>SkiFir+mP2
An async function is in CPS form and returns its result via a return continuation. Typically, when invoked from a non-CPS function, it also forks the thread of execution.

These days async functions are also typically lazily evaluated via partial evaluation, and the return continuation is not necessarily provided at the call site.

A sync function provides its result via the normal return path.

◧◩◪◨⬒
132. 65a+tu3[view] [source] [discussion] 2024-01-21 03:29:24
>>SkiFir+Ak1
Go works fine for C FFI; all of its problems there are caused by its innovation wrt dynamic stack sizes and having a garbage collector. I'd rather write multithreaded Go FFI than deal with JNI again, anyway. There isn't really a language keyword-level concept of async in Go that's comparable to `await` in JS, or Java futures, or Rust async.
◧◩◪◨⬒⬓⬔⧯
133. anonym+4v3[view] [source] [discussion] 2024-01-21 03:37:27
>>thiht+961
This post is a bit confused. io_mode is a stdlib feature that changes some blocking stuff in the stdlib to use the stdlib's event loop. There's no such thing for most libraries.

For typical libraries, functions and structs are provided, and to the extent that these functions call functions provided by the user, they are generic over the async-ness of those functions. That's how the language-level async feature works, for library code that doesn't specify that it is async and doesn't specify that it would like to use a specific non-async calling convention for calling the user's callbacks.

◧◩◪◨⬒
134. yawara+fG3[view] [source] [discussion] 2024-01-21 05:53:49
>>SkiFir+Ak1
> IMO saying there's no function coloring in a language ignores a lot of details.

Details which are meant to be ignored. When you use async/await constructs in various languages, you don't care about the fact that they are desugared into callback chains under the hood. You either do async/await in a language or you don't. That's what the concept of 'your function has a colour' means. If you want to change the meaning, OK but then you're talking about something else.

◧◩◪◨⬒⬓
135. wredue+Sn4[view] [source] [discussion] 2024-01-21 14:26:22
>>whatev+wk2
>how many Heisenbugs

I suspect, given the real, actual measurements, the number of difficult-to-deal-with bugs is pretty consistent between immutability and mutability. Actual measurements do not support claims of “easier to reason about” or “reduced bugs”.

>how often do you use mutability

Whenever something should change and I don’t specifically need functionality that immutability might provide (literally 99.99999999% of every state change).

replies(1): >>whatev+Ft4
◧◩◪◨⬒⬓⬔
136. whatev+Ft4[view] [source] [discussion] 2024-01-21 15:05:12
>>wredue+Sn4
I'm just confused as to what you need mutability for exactly? I get needing it for communicating between processes (STM has you covered there). But for "normal" code that is doing pure logic, what is the benefit of using mutability?

Immutability has some big advantages for pure logic, such as allowing containers to be treated as values the same as numbers. And efficient immutable data structures of all kinds are commonplace now.

◧◩
137. nullde+Nv4[view] [source] [discussion] 2024-01-21 15:17:50
>>palmfa+HY
But where would the fun be then?
◧◩
138. nullde+Uv4[view] [source] [discussion] 2024-01-21 15:18:19
>>winrid+Hs
Can you briefly explain how that works? Is it like Zig?
replies(1): >>winrid+P16
139. nullde+gw4[view] [source] 2024-01-21 15:21:05
>>lukast+(OP)
I'm quite late to the party, but I'm the creator of this article. Let me know if you have any suggestions :)
◧◩◪◨⬒⬓⬔⧯▣
140. dannym+nl5[view] [source] [discussion] 2024-01-21 19:59:45
>>andrew+ih1
I replied to you in order to help you. I've been doing async since 2007.

I use async regularly for my clients, and I'm 100% sure that the usual async executors in Rust are multithreaded. I just ran gdb on an async program again and, sure enough, the tokio async executor currently has 16 threads (that's just on a laptop with 16 cores).

    async fn say_world() {
        println!("world");
    }

    #[tokio::main]
    async fn main() {
        loop {
            say_world().await;
        }
    }

    (gdb) info threads
      Id   Target Id                 Frame 
      1    LWP 32716 "r1"            0x00007fdc401ab08d in ?? ()
      2    LWP 329 "tokio-runtime-w" 0x00007fdc401ab08d in ?? ()
      3    LWP 330 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      4    LWP 331 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      5    LWP 332 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      6    LWP 333 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      7    LWP 334 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      8    LWP 335 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      9    LWP 336 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     10   LWP 337 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     11   LWP 338 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     12   LWP 339 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     13   LWP 340 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     14   LWP 342 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     15   LWP 343 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     16   LWP 344 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     17   LWP 345 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
Just try it out.

Also, think about it. Using async in order to speed up I/O and then pin the async executor to just one core of your 200 cores on a server is not exactly a winning strategy.

>executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async

That's not what an executor is.

Also, the thing above is an example of parallelism, so even worse than concurrency. But even with an one-thread-async-executor you could still get concurrency problems with async.

>If you want to maximise your cores then run multiple instances of a single threaded process using systemd or something, or use your application to launch multiple async threads.

It is not 1995. Your idea would make scheduling even harder than it already was, and it would add massive memory overhead. If you are gonna do that, most of the time, just use synchronous processes to begin with--no need for async.

◧◩◪
141. winrid+P16[view] [source] [discussion] 2024-01-22 01:34:11
>>nullde+Uv4
It's a type-safe macro that allows you to use async or sync system calls transparently: https://nim-lang.org/blog/2016/09/30/version-0150-released.h...
[go to top]