zlacker

[parent] [thread] 30 comments
1. gavinh+(OP)[view] [source] 2024-01-19 23:06:42
I hope that someday, we can have a Rust-like language without async.

Bonus points if it lets users define static analyses a la borrow checking.

replies(2): >>andrew+e1 >>nemoth+U7
2. andrew+e1[view] [source] 2024-01-19 23:13:54
>>gavinh+(OP)
Async programming is beautiful. It’s the easiest and most natural way to do multiple things at the same time in single-threaded code.

Async makes things possible that are hard or impossible to do with synchronous programming.

It’s a real game changer for Python especially. Can’t comment on Rust; hopefully its implementation is smooth.

replies(3): >>Twenty+D2 >>nsm+T2 >>bvrmn+u3
3. Twenty+D2[view] [source] [discussion] 2024-01-19 23:21:56
>>andrew+e1
Is it really though? All I personally care about when it comes to "doing multiple things at the same time" is frankly Rust's scoped threads API: create N threads, have them perform some computations, then join them at the end of the scope.

How is this not more natural than creating various state machines and passing around all sorts of weird, highly abstract Future objects?
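
For anyone who hasn't seen it, that shape is roughly this (a minimal, untested sketch; `std::thread::scope` has been stable since Rust 1.63):

    use std::thread;

    fn main() {
        let data = [1, 2, 3, 4];
        let mut results = [0; 4];

        // Create N threads, let each do its computation, and join them all
        // automatically when the scope ends.
        thread::scope(|s| {
            for (x, out) in data.iter().zip(results.iter_mut()) {
                s.spawn(move || {
                    *out = *x * 2;
                });
            }
        }); // every spawned thread has been joined by this point

        assert_eq!(results, [2, 4, 6, 8]);
    }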

replies(2): >>andrew+w3 >>sunsho+d7
4. nsm+T2[view] [source] [discussion] 2024-01-19 23:23:20
>>andrew+e1
Async/await is a _concurrency_ mechanism, and concurrency is certainly naturally useful. However, async/await is certainly not the best implementation of concurrency, except in the narrow design space Rust has chosen. If you have a language with a runtime, green threads/fibers/coroutines along with a choice mechanism are really the way to go. It trades off some performance for much better ergonomics. Go, Java, and Racket all use this. It is unfortunate that Python picked async/await.
replies(1): >>WaxPro+F3
5. bvrmn+u3[view] [source] [discussion] 2024-01-19 23:27:11
>>andrew+e1
> It’s a real game changer for Python especially

Python is so slow you gain nothing with async. I have plenty of cool stories about how I've fixed crumbling products because they were async and all the connection load went straight into the DB instead of being absorbed by nginx/the backend.

You rarely need long-lived connections, and if you choose async for Python because it's fast, it's the wrong choice, because it isn't.

BTW, asyncio is an awfully bloated implementation with huge overhead around task management.

replies(1): >>andrew+z5
6. andrew+w3[view] [source] [discussion] 2024-01-19 23:27:22
>>Twenty+D2
Threads are far heavier than async.

Async code maximizes the potential of the thread it’s running in.

replies(2): >>Twenty+b5 >>Animat+4j
7. WaxPro+F3[view] [source] [discussion] 2024-01-19 23:28:56
>>nsm+T2
I mostly agree, but I think you can still just pip install gevent and have greenlets as you always did - just not as language 'primitives'.

Personally I've never liked the syntactic sugar on top of function calls very much. If something is a promise or whatever, return me the promise and I can choose what to do with it.

8. Twenty+b5[view] [source] [discussion] 2024-01-19 23:38:41
>>andrew+w3
> Threads are far heavier than async

This sounds like an issue with the implementation of threads and scheduling in common operating systems, and I don't see how replicating all that functionality inside every sufficiently large programming language can be taken remotely seriously.

But also, you didn't respond to what I even said. You claimed that async is 'beautiful and natural'. I disagreed. You...fell back to a performance claim that's uncontroversially true, but irrelevant to what I said.

replies(2): >>coffee+jh >>andrew+eK
9. andrew+z5[view] [source] [discussion] 2024-01-19 23:41:01
>>bvrmn+u3
> Python is so slow you gain nothing with async.

It’s not necessarily about speed, though the statement above is flat-out wrong.

async in Python allows you to build different types of applications. For example, you can attach a task to the stdout of another process and read and process its output.
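
Since the thread is mostly about Rust, the same pattern looks roughly like this with Tokio (an untested sketch; `ls -l` is just a stand-in child process):

    use std::process::Stdio;
    use tokio::io::{AsyncBufReadExt, BufReader};
    use tokio::process::Command;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // Spawn a child process with a piped stdout...
        let mut child = Command::new("ls")
            .arg("-l")
            .stdout(Stdio::piped())
            .spawn()?;

        let stdout = child.stdout.take().expect("stdout was piped");
        let mut lines = BufReader::new(stdout).lines();

        // ...and process its output line by line as it arrives.
        while let Some(line) = lines.next_line().await? {
            println!("child said: {line}");
        }

        child.wait().await?;
        Ok(())
    }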

replies(1): >>bvrmn+v7
10. sunsho+d7[view] [source] [discussion] 2024-01-19 23:53:51
>>Twenty+D2
For the limited case of data-parallel computations as you're describing, you don't need async.

However, many real-world programs are inherently complicated state machines, where each individual state waits on one or more nested state machines to progress and/or complete -- async is often the most reasonable way to express that.

I wrote a post detailing a non-trivial production example use of async that might be interesting, regarding the cargo-nextest test runner that I wrote and maintain: https://sunshowers.io/posts/nextest-and-tokio/.

Nextest is not "web-scale" (c10k), since the number of tests one is concurrently running is usually bounded by the number of processor hyperthreads. So the fact that async tasks are more lightweight than OS threads doesn't have much bearing in this case. But even beyond that, being able to express state machines via async makes life so much simpler.
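
To make the shape concrete, here is a toy sketch of that kind of select-driven loop (not nextest's actual code; the "tests" are just a stand-in task feeding a channel):

    use tokio::sync::mpsc;
    use tokio::time::{interval, sleep, Duration};

    #[tokio::main]
    async fn main() {
        let (tx, mut results) = mpsc::channel::<String>(16);

        // Stand-in for "running tests": a task that reports a few results, then stops.
        tokio::spawn(async move {
            for name in ["test_a", "test_b", "test_c"] {
                sleep(Duration::from_millis(700)).await;
                let _ = tx.send(format!("{name} passed")).await;
            }
        });

        let mut tick = interval(Duration::from_millis(500));
        loop {
            // One state machine: wait on test results, a progress tick, and Ctrl-C
            // all at once, and react to whichever happens first.
            tokio::select! {
                maybe_result = results.recv() => {
                    match maybe_result {
                        Some(result) => println!("{result}"),
                        None => { println!("all tests finished"); break; }
                    }
                }
                _ = tick.tick() => println!("...still running"),
                _ = tokio::signal::ctrl_c() => {
                    println!("interrupted, shutting down");
                    break;
                }
            }
        }
    }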

Over a year after switching nextest to using async, I have no regrets. The state machine has gotten around twice as complicated since then--for example, nextest now handles SIGTSTP and SIGCONT carefully--and async has coped admirably with it.

(There are currently somewhat serious issues with Rust async, such as a really messy cancellation story. But that's generally not what gets talked about on places like HN.)

replies(2): >>mplanc+Eu >>Twenty+XK2
11. bvrmn+v7[view] [source] [discussion] 2024-01-19 23:57:09
>>andrew+z5
> For example, you can attach a task to the stdout of another process and read and process its output.

How many processes, at most, have you handled in this fashion? The catch is: if you need thousands, then asyncio has too much overhead and you need manual epoll. If fewer, threads are much easier to use and acceptable performance-wise.

12. nemoth+U7[view] [source] 2024-01-20 00:00:11
>>gavinh+(OP)
>I hope that someday, we can have a Rust-like language without async.

That exists today; it's called Rust. You don't have to use async.

replies(1): >>kelsey+D9
13. kelsey+D9[view] [source] [discussion] 2024-01-20 00:13:52
>>nemoth+U7
Until you want to use a library that requires async. Now you do.
replies(2): >>nemoth+Lg >>sgbeal+oi1
14. nemoth+Lg[view] [source] [discussion] 2024-01-20 01:19:48
>>kelsey+D9
Which part of the standard library forces me to use async? Or is the complaint that you can't force other random developers to program in the way you prefer?
replies(1): >>kelsey+Zi
15. coffee+jh[view] [source] [discussion] 2024-01-20 01:24:52
>>Twenty+b5
When you're reading code, it's really, really nice to be able to tell whether a line of code blocks. Other than that, `Thread.new(_ => doIO()).join()` is pretty much the same as `await doIO()`.
16. kelsey+Zi[view] [source] [discussion] 2024-01-20 01:40:16
>>nemoth+Lg
Can you try steel manning?
replies(1): >>filled+NG
17. Animat+4j[view] [source] [discussion] 2024-01-20 01:40:45
>>andrew+w3
How many connections to Spotify did you want to keep open simultaneously?
replies(1): >>conrad+f21
18. mplanc+Eu[view] [source] [discussion] 2024-01-20 03:56:19
>>sunsho+d7
Not who you’re replying to, but this is great context, and I want to thank you for including it. As another heavy async user (for a network service that handles loads of requests and does loads of DB reads and writes), I am also a big fan of Rust’s async at scale. We’re currently in the process of seeing where we can get rid of async_trait with 1.75, which has not been particularly drop-in in many cases but which is still exciting.
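
For context, the 1.75 change is roughly the following (a minimal sketch; `Store`/`MemStore` are made up, and dyn traits plus Send bounds are where it stops being drop-in):

    // Pre-1.75 this needed the #[async_trait] macro; now it compiles natively.
    trait Store {
        async fn load(&self, key: &str) -> Option<String>;
    }

    struct MemStore;

    impl Store for MemStore {
        async fn load(&self, key: &str) -> Option<String> {
            Some(format!("value for {key}"))
        }
    }

    fn main() {
        // Any executor works; futures::executor::block_on keeps the sketch small.
        let value = futures::executor::block_on(MemStore.load("some-key"));
        println!("{value:?}");
    }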

Anyway, I have been meaning to try out nextest for our big honking monorepo workspace at work. The cargo test runner has always been essentially fine for our needs, but speeding up test execution in CI could be a huge win for us.

replies(1): >>sunsho+ix
19. sunsho+ix[view] [source] [discussion] 2024-01-20 04:28:47
>>mplanc+Eu
I'd love to hear how it goes!
20. filled+NG[view] [source] [discussion] 2024-01-20 06:36:16
>>kelsey+Zi
I'm not them, but I don't think there's any general-purpose programming language in existence that prevents developers from implementing async runtimes and using them in their libraries.

So yes, if your whole reasoning is "other people might use async and then I won't be able to use their code", then you'll be waiting indefinitely for the magical programming language that's both fully featured for your work and does not have any portion of the ecosystem implemented in async code.

21. andrew+eK[view] [source] [discussion] 2024-01-20 07:32:14
>>Twenty+b5
There are no concurrency issues with async programming.

No race conditions, no inter-thread coordination, no concerns about all the weird shit that happens when doing multi-threading.

replies(2): >>dannym+jY >>nvm0n2+rt1
22. dannym+jY[view] [source] [discussion] 2024-01-20 10:54:20
>>andrew+eK
One of the main reasons to use async in the first place is to distribute the work across all the CPU cores that you have. So you will still have all the concurrency issues with async in that case.

Only if you limited async to a single thread on one core (why would you do that?) could you avoid them.

replies(1): >>andrew+P41
23. conrad+f21[view] [source] [discussion] 2024-01-20 11:37:41
>>Animat+4j
Not related to Spotify, but managing HTTP/2 connections is much easier with async code than with sync, and HTTP/3 will be much the same. You can of course probably spawn a thread that handles these connections and use channels, but it's not going to be particularly pleasant to work with.

I have wanted to do concurrent API calls with the Spotify API before. One API call gives you the list of song IDs in a playlist, then you want to get the song info for each ID. HTTP/2 multiplexing would be much nicer than spawning 100 different HTTP/1 connections.
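
For the concurrent-calls part, the async version stays small. A sketch, assuming reqwest plus the futures crate; the endpoint and `fetch_track` here are made up:

    use futures::stream::{self, StreamExt};

    // Hypothetical endpoint, just to show the shape of the code.
    async fn fetch_track(client: &reqwest::Client, id: &str) -> reqwest::Result<String> {
        client
            .get(format!("https://api.example.com/tracks/{id}"))
            .send()
            .await?
            .text()
            .await
    }

    #[tokio::main]
    async fn main() -> reqwest::Result<()> {
        // One client; reqwest multiplexes requests over HTTP/2 when the server supports it.
        let client = reqwest::Client::new();
        let ids = ["id1", "id2", "id3"];

        // Up to 20 requests in flight at once, no thread per request.
        let tracks: Vec<_> = stream::iter(ids)
            .map(|id| fetch_track(&client, id))
            .buffer_unordered(20)
            .collect()
            .await;

        println!("fetched {} results", tracks.len());
        Ok(())
    }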

24. andrew+P41[view] [source] [discussion] 2024-01-20 11:57:02
>>dannym+jY
> One of the main reasons to use async in the first place is to distribute the work across all the CPU cores that you have

Errrr..... no it's not. Your statement is flat out wrong. async is single-threaded.

You're not really understanding async.

async is single-threaded. If you want to maximise your cores then run multiple instances of a single-threaded process using systemd or something, or use your application to launch multiple async threads.

You can, in Python and probably in Rust, run an executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async, but that's not really ideal and it's certainly not the purpose of async.

replies(2): >>Twenty+8t1 >>dannym+U85
25. sgbeal+oi1[view] [source] [discussion] 2024-01-20 14:00:14
>>kelsey+D9
> Until you want to use a library that requires async. Now you do.

Now you do... have an incentive to write your own which is not async.

replies(1): >>kelsey+kS1
26. Twenty+8t1[view] [source] [discussion] 2024-01-20 15:07:29
>>andrew+P41
`async` is neither multi-threaded nor single-threaded by default. It all depends on the underlying runtime.

Rust's Tokio runtime, for example, is multi-threaded by default and makes progress on several tasks at the same time by using multiple threads.
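
Concretely, picking the flavor looks roughly like this with Tokio's runtime builder (a minimal sketch):

    use tokio::runtime::Builder;

    fn main() {
        // Single-threaded flavor: all tasks run on the current thread.
        let single = Builder::new_current_thread().enable_all().build().unwrap();
        single.block_on(async { println!("on the current-thread runtime") });

        // Multi-threaded flavor (what #[tokio::main] gives you by default):
        // tasks are scheduled across a pool of worker threads.
        let multi = Builder::new_multi_thread()
            .worker_threads(4)
            .enable_all()
            .build()
            .unwrap();
        multi.block_on(async { println!("on the multi-thread runtime") });
    }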

27. nvm0n2+rt1[view] [source] [discussion] 2024-01-20 15:09:40
>>andrew+eK
A common belief but not quite true. You can have race conditions and weird shit happening in async systems.

The usual way it happens is that you write some straight-line, non-suspending code inside an async function, and then later someone adds an await inside it. The await returns to the event loop, at which point an event arrives that you didn't expect. Now control flow jumps somewhere else completely, possibly changing state in a way that you didn't anticipate. When the original function resumes, it sees state that changed concurrently.
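
A contrived sketch of that failure mode in Rust (single-threaded Tokio runtime, so no OS threads racing at all; the lost update comes purely from tasks interleaving at the `.await`):

    use std::{cell::Cell, rc::Rc};
    use tokio::task::{spawn_local, LocalSet};
    use tokio::time::{sleep, Duration};

    // Read-modify-write split across an .await: other tasks can run in between.
    async fn add_one(counter: Rc<Cell<u64>>) {
        let seen = counter.get();                // read
        sleep(Duration::from_millis(10)).await;  // the await someone added later
        counter.set(seen + 1);                   // write back a stale value
    }

    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        let counter = Rc::new(Cell::new(0));
        let local = LocalSet::new();
        local
            .run_until(async {
                let handles: Vec<_> = (0..10)
                    .map(|_| spawn_local(add_one(counter.clone())))
                    .collect();
                for handle in handles {
                    handle.await.unwrap();
                }
            })
            .await;
        // Every task read 0 before any of them wrote back, so this prints 1, not 10.
        println!("counter = {}", counter.get());
    }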

28. kelsey+kS1[view] [source] [discussion] 2024-01-20 17:12:07
>>sgbeal+oi1
That's fair. The Rust community does love a rewrite.
29. Twenty+XK2[view] [source] [discussion] 2024-01-20 22:40:06
>>sunsho+d7
Thank you, this was a very insightful blog post!
replies(1): >>sunsho+w43
30. sunsho+w43[view] [source] [discussion] 2024-01-21 01:12:38
>>Twenty+XK2
No problem! I wasn't totally convinced of the value of async myself, before I went through the exercise of building something that is in many ways more complicated than most web services.
31. dannym+U85[view] [source] [discussion] 2024-01-21 19:59:45
>>andrew+P41
I replied to you in order to help you. I've been doing async since 2007.

I use async regularly for my clients, and I'm 100% sure that the usual async executors in Rust are multithreaded. I just ran gdb on an async program again and, sure enough, the tokio async executor currently has 16 worker threads (that's just on a laptop with 16 cores).

    async fn say_world() {
        println!("world");
    }

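    // #[tokio::main] defaults to Tokio's multi-threaded runtime.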
    #[tokio::main]
    async fn main() {
        loop {
            say_world().await;
        }
    }

    (gdb) info threads
      Id   Target Id                 Frame 
      1    LWP 32716 "r1"            0x00007fdc401ab08d in ?? ()
      2    LWP 329 "tokio-runtime-w" 0x00007fdc401ab08d in ?? ()
      3    LWP 330 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      4    LWP 331 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      5    LWP 332 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      6    LWP 333 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      7    LWP 334 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      8    LWP 335 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
      9    LWP 336 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     10   LWP 337 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     11   LWP 338 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     12   LWP 339 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     13   LWP 340 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     14   LWP 342 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     15   LWP 343 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     16   LWP 344 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
     17   LWP 345 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
Just try it out.

Also, think about it. Using async to speed up I/O and then pinning the async executor to just one of the 200 cores on your server is not exactly a winning strategy.

>executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async

That's not what an executor is.

Also, the thing above is an example of parallelism, so even worse than concurrency. But even with a single-threaded async executor you could still get concurrency problems with async.

>If you want to maximise your cores then run multiple instances of a single-threaded process using systemd or something, or use your application to launch multiple async threads.

It is not 1995. Your idea would make scheduling even harder than it already is, and it would add massive memory overhead. If you are gonna do that, most of the time you should just use synchronous processes to begin with; no need for async.
