How is this not more natural than creating various state machines and passing around all sorts of weird highly abstract Future-objects?
Async code maximizes the potential of the thread it’s running in.
This sounds like an issue with how common operating systems implement threads and scheduling, and I don't see why replicating all of that functionality inside every sufficiently large programming language is taken seriously.
But also, you didn't respond to what I even said. You claimed that async is 'beautiful and natural'. I disagreed. You...fell back to a performance claim that's uncontroversially true, but irrelevant to what I said.
However, many real-world programs are inherently complicated state machines, where each individual state waits on one or more nested state machines to progress and/or complete -- async is often the most reasonable way to express that.
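For intuition about "async is a way to express state machines": here's a rough, hand-written sketch of what an `async fn` with one suspension point desugars to. The names (`TwoSteps`, `block_on`, `NoopWaker`) are invented for illustration; a real compiler-generated future also handles pinning and wakers properly, and a real executor parks instead of busy-polling.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Roughly what the compiler generates for:
//
//     async fn two_steps() -> u32 {
//         let a = 1;
//         yield_once().await; // a suspension point
//         a + 1
//     }
//
// Each await becomes a state; locals alive across it move into the enum.
enum TwoSteps {
    Start,
    Suspended { a: u32 },
    Done,
}

impl Future for TwoSteps {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        match *this {
            TwoSteps::Start => {
                let a = 1;
                *this = TwoSteps::Suspended { a };
                // A real await would poll the awaited future here and only
                // return Pending if it wasn't ready; we suspend once.
                Poll::Pending
            }
            TwoSteps::Suspended { a } => {
                *this = TwoSteps::Done;
                Poll::Ready(a + 1)
            }
            TwoSteps::Done => panic!("future polled after completion"),
        }
    }
}

// A deliberately naive block_on: busy-polls with a no-op waker.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", block_on(TwoSteps::Start)); // prints 2
}
```

Writing one of these by hand is tolerable; writing the dozens of nested ones a real program needs is exactly the tedium that `async`/`await` syntax removes.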
I wrote a post detailing a non-trivial production example use of async that might be interesting, regarding the cargo-nextest test runner that I wrote and maintain: https://sunshowers.io/posts/nextest-and-tokio/.
Nextest is not "web-scale" (c10k), since the number of tests one is concurrently running is usually bounded by the number of processor hyperthreads. So the fact that async tasks are more lightweight than OS threads doesn't have much bearing in this case. But even beyond that, being able to express state machines via async makes life so much simpler.
Over a year after switching nextest to using async, I have no regrets. The state machine has gotten around twice as complicated since then--for example, nextest now handles SIGTSTP and SIGCONT carefully--and async has coped admirably with it.
(There are currently somewhat serious issues with Rust async, such as a really messy cancellation story. But that's generally not what gets talked about on places like HN.)
Anyway, I have been meaning to try out nextest for our big honking monorepo workspace at work. The cargo test runner has always been essentially fine for our needs, but speeding up test execution in CI could be a huge win for us.
No race conditions, no inter-thread coordination, no concerns about all the weird shit that happens when doing multithreading.
Only if you limited async to a single thread on one core (and why would you do that?) could you avoid that.
I've wanted to do concurrent API calls with the Spotify API before. One API call gives you the list of song IDs in a playlist; then you want to fetch the song info for each ID. HTTP/2 multiplexing would be much nicer than spawning 100 different HTTP/1 connections.
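The fan-out shape described above can be sketched in pure std Rust. `fetch_playlist` and `fetch_song` are invented stand-ins that complete immediately (a real HTTP/2 client would keep all the requests in flight on one connection); the point is the structure: collect the IDs, then drive every lookup concurrently with a naive join instead of awaiting them one by one.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Invented stand-ins for the two Spotify endpoints; they complete
// immediately, where a real client would await HTTP responses.
async fn fetch_playlist(_playlist: &str) -> Vec<u32> {
    vec![10, 11, 12] // pretend these are song IDs
}

async fn fetch_song(id: u32) -> String {
    format!("song-{id}")
}

struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Naive busy-polling block_on; a real executor parks until woken.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// A naive join: keeps polling every unfinished lookup each pass, so all
// of them are "in flight" at once instead of being awaited sequentially.
fn join_all<T>(mut futs: Vec<Pin<Box<dyn Future<Output = T>>>>) -> Vec<T> {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut out: Vec<Option<T>> = futs.iter().map(|_| None).collect();
    while out.iter().any(|o| o.is_none()) {
        for (i, f) in futs.iter_mut().enumerate() {
            if out[i].is_none() {
                if let Poll::Ready(v) = f.as_mut().poll(&mut cx) {
                    out[i] = Some(v);
                }
            }
        }
    }
    out.into_iter().map(|o| o.unwrap()).collect()
}

fn main() {
    let ids = block_on(fetch_playlist("my-playlist"));
    let lookups: Vec<Pin<Box<dyn Future<Output = String>>>> = ids
        .into_iter()
        .map(|id| {
            let f: Pin<Box<dyn Future<Output = String>>> = Box::pin(fetch_song(id));
            f
        })
        .collect();
    println!("{:?}", join_all(lookups)); // ["song-10", "song-11", "song-12"]
}
```

In practice you'd use something like `futures::future::join_all` or a `JoinSet` rather than hand-rolling this, but the shape is the same.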
Errrr..... no it's not. Your statement is flat out wrong. async is single threaded.
You're not really understanding async.
async is single threaded. If you want to maximise your cores then run multiple instances of a single threaded process using systemd or something, or use your application to launch multiple async threads.
You can, in Python and probably in Rust, run an executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async, but that's not really ideal and it's certainly not the purpose of async.
Rust's Tokio runtime, for example, is multi-threaded by default and makes progress on several tasks at the same time by using multiple threads.
The usual way it happens is that you write some straight-line, non-suspending code inside an async function, and then later someone adds an await inside it. The await returns control to the event loop, at which point an event you didn't expect arrives. Control flow now jumps somewhere else entirely, possibly changing state in a way you didn't anticipate. When the original function resumes, it sees state that was changed concurrently.
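This hazard can be shown on a single thread with pure std. Everything here (`YieldOnce`, `run_all`, the bank-balance scenario) is an invented minimal sketch: a toy round-robin executor runs two tasks sharing a `Cell`, and a check made before an await is acted on after it, losing an update, despite there being no data race in the memory-model sense.

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// A future that suspends exactly once, like tokio's yield_now().
struct YieldOnce(bool);
impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            Poll::Pending
        }
    }
}

// A toy single-threaded executor: polls every live task round-robin.
// (A real executor sleeps until a waker fires instead of spinning.)
fn run_all(mut tasks: Vec<Pin<Box<dyn Future<Output = ()>>>>) {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    while !tasks.is_empty() {
        tasks.retain_mut(|t| t.as_mut().poll(&mut cx).is_pending());
    }
}

// Check-then-act across an await: the deposit gets lost even though
// everything runs on one thread.
fn demo() -> i64 {
    let balance = Rc::new(Cell::new(100));

    let b = balance.clone();
    let withdraw = async move {
        let seen = b.get(); // reads 100
        if seen >= 100 {
            YieldOnce(false).await; // control returns to the event loop here
            b.set(seen - 100); // acts on the stale read
        }
    };

    let b = balance.clone();
    let deposit = async move {
        b.set(b.get() + 50); // runs while `withdraw` is suspended
    };

    let tasks: Vec<Pin<Box<dyn Future<Output = ()>>>> =
        vec![Box::pin(withdraw), Box::pin(deposit)];
    run_all(tasks);
    balance.get()
}

fn main() {
    // 100 + 50 - 100 "should" be 50, but the interleaving yields 0.
    println!("{}", demo());
}
```

The compiler is perfectly happy with this: no `unsafe`, no threads, no `Mutex` needed. The bug lives entirely in the interleaving at the await point.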
I use async regularly for my clients, and I'm 100% sure that the usual async executors in Rust are multithreaded. I just ran gdb on an async program again, and, sure enough, the tokio async executor currently has 16 worker threads (that's just on a laptop with 16 cores).
async fn say_world() {
    println!("world");
}

#[tokio::main]
async fn main() {
    loop {
        say_world().await;
    }
}
(gdb) info threads
Id Target Id Frame
1 LWP 32716 "r1" 0x00007fdc401ab08d in ?? ()
2 LWP 329 "tokio-runtime-w" 0x00007fdc401ab08d in ?? ()
3 LWP 330 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
4 LWP 331 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
5 LWP 332 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
6 LWP 333 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
7 LWP 334 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
8 LWP 335 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
9 LWP 336 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
10 LWP 337 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
11 LWP 338 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
12 LWP 339 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
13 LWP 340 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
14 LWP 342 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
15 LWP 343 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
16 LWP 344 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
17 LWP 345 "tokio-runtime-w" 0x00007fdc401a8482 in ?? ()
Just try it out. Also, think about it: using async to speed up I/O and then pinning the async executor to just one of the 200 cores on your server is not exactly a winning strategy.
>executor, which essentially is running a synchronous process in a thread to make some bit of synchronous work compatible with async
That's not what an executor is.
Also, the gdb output above is an example of parallelism, which is even stronger than mere concurrency. But even with a single-threaded async executor you could still get concurrency problems with async.
>If you want to maximise your cores then run multiple instances of a single threaded process using systemd or something, or use your application to launch multiple async threads.
It is not 1995. Your idea would make scheduling even harder than it already is, and it would add massive memory overhead. If you're going to do that, most of the time you may as well just use synchronous processes to begin with--no need for async.