Deferred computation is a primitive, and threads do not solve it.
But as for the arguments:
> But even in 1999 Dan says this about cooperative M:N models:
>> At one point, M:N was thought to be higher performance, but it's so complex that it's hard to get right, and most people are moving away from it.
It is higher performance. If you have M jobs and you can get N workers to work on them at the same time, you win!
It is also complex. So if you want the feature, let the smart people working on runtime figure it out, so that each team of application developers in every company doesn't invent their own way of doing it. If not in the runtime, then let library developers invent it, so there's at least some sharing of work. (Honestly I probably prefer the library situation, because things can improve over time, rather than stagnate.)
> Many operating systems have tried M:N scheduling models and all of them use 1:1 model today.
Nope! At the application level, M is jobs and N is threads. But at the OS level, M is threads and N is cores. Would I be exaggerating to say that doing M:N scheduling is the OS's primary purpose?
> but how come M:N model is used in Golang and Erlang - 2 languages known for their superior concurrency features?
These examples are "the rule", as opposed to "the exceptions that prove the rule".
> The Coloring Problem
I'm sick of the What Color Is Your Function argument. The red/blue split exists, and not just for asynchrony. Your language can either acknowledge the split or ignore it:
* A blocking function can call a non-blocking function, but not vice-versa.
* An auth'd function can call a non-auth'd function, but not vice-versa.
* An impure function can call a pure function, but not vice-versa.
* An allocating function can call a non-allocating function, but not vice-versa.
* A subclass can call into a superclass, but not vice-versa.
* A non-deterministic function can call a deterministic function, but not vice-versa.
* An exception-throwing function can call a non-exception-throwing function, but not vice-versa.
Even the dependency inversion principle works this way: it's a plea for concretions to call abstractions, and not the other way around!
Trying to remove the red/blue split will not work, and you'll only be pretending it doesn't exist.
The "solution" (if you can call it that) is simply for library writers to expose more blue code and less red code, where possible. If your language acknowledges that red and blue are different, then application developers have an easier time selecting blue library imports and rejecting red ones. Which is somewhat aligned with the article's title. But application developers can do whatever - red/blue, go nuts.
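Rust already ships one of these splits in checkable form: `const fn` is a "blue" (effect-free) tier that ordinary runtime code can call freely, while the reverse direction is a compile error. A minimal sketch of the one-way calling rule:

```rust
// "Blue": const, side-effect free, callable from anywhere.
const fn pure_add(a: u32, b: u32) -> u32 {
    a + b
}

// "Red": performs I/O, so it may call blue code but not vice-versa.
fn impure_add(a: u32, b: u32) -> u32 {
    println!("adding {a} and {b}"); // side effect
    pure_add(a, b)                  // red calling blue: fine
}

// Uncommenting this is a compile error -- blue cannot call red:
// const fn blue_calls_red() -> u32 {
//     impure_add(1, 2) // ERROR: cannot call non-const fn in a const fn
// }

fn main() {
    println!("{}", impure_add(20, 22));
}
```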
Any fundamentally blocking operations could be forced by the compiler to have two implementations - sync (normal) and async, which defers to some abstract userspace scheduler that's part of the language itself.
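As a sketch of what that dual surface could look like (all names here are hypothetical, not quoted from any real crate): the blocking and async entry points share a signature shape, and the async one would defer to whatever scheduler the runtime supplies.

```rust
use std::io;

// Hypothetical library surface with paired sync/async entry points.
fn read_config_blocking(path: &str) -> io::Result<String> {
    std::fs::read_to_string(path)
}

#[allow(dead_code)]
async fn read_config(path: &str) -> io::Result<String> {
    // Placeholder body: a real implementation would await the runtime's
    // non-blocking I/O instead of calling the blocking version.
    read_config_blocking(path)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("demo_config.txt");
    std::fs::write(&path, "key=value")?;
    let text = read_config_blocking(path.to_str().unwrap())?;
    println!("{text}");
    Ok(())
}
```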
Go managed to do it. What exactly would "you're only pretending it doesn't exist" mean in the context of goroutines?
I don't think for a second that async Rust should be picked for performance reasons.
You get a feeling for what is a good use of async and bad use of async relatively easily these days as the ecosystem is maturing.
You still have concurrency or interleaved execution to contend with but that could be represented more explicitly since it's not unique to async.
I haven't done much Rust, but implementing blocking operations as async functions is commonly achieved in Python by using threads under the hood anyway.
The choice and ordering of yield points / poll order can drastically change the semantics of your program. But if you don't care..
There's not really a way to do it any better. I'm not sure what your exact gripe is here, other than a dogmatic one.
> Violation of the zero-cost abstractions principle.
It's not a principle, it's just a benefit of Rust's design that you get often but not always. `Clone` is not zero cost, should we throw that out too?
> Major degradation in developer's productivity.
Yawn, speak for yourself. I implemented incredibly extensive firmware with Embassy (async embedded framework) in months instead of years for a custom PCB I made. Async was literally the last thing on the list that caused problems - in fact it sped up my productivity and reduced power usage of the board overall.
> Most advertised benefits are imaginary, too expensive (unless you are FAANG) or can be achieved without async
No, they cannot. You are so confidently incorrect to an impressive extent.
Stopped reading after that section. This person has some bone to pick and left level-headedness at the door in doing so.
How do you know that what's best won't change as the project you're working on progresses and your manager tosses in new requirements?
I'd say it's better to pick a technique (or even language) that works all the time.
Async Rust is rather nice to use when you're writing a web server. Structuring your code in an async manner is honestly very useful. Writing a composite Future or a Future state machine by hand is super tedious. Async makes most of that pain go away.
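For a taste of that tedium, here's roughly what a trivial `async fn` desugars to when you hand-roll the state machine yourself (a minimal std-only sketch; real compiler-generated futures have one state per `.await` point):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Hand-written equivalent of `async fn add_one(x: u32) -> u32 { x + 1 }`.
enum AddOne {
    Start(u32),
    Done,
}

impl Future for AddOne {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut(); // fine: AddOne holds no self-references
        match *this {
            AddOne::Start(x) => {
                *this = AddOne::Done; // single state transition
                Poll::Ready(x + 1)
            }
            AddOne::Done => panic!("polled after completion"),
        }
    }
}

// A do-nothing waker so we can poll by hand.
struct NoopWake;
impl Wake for NoopWake {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWake));
    let mut cx = Context::from_waker(&waker);
    let mut fut = AddOne::Start(41);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
    println!("done");
}
```

All of that boilerplate is what `async`/`.await` generates for you.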
> Async Rust is objectively bad language feature that actively harms otherwise a good language.
This is an objectively false statement :) and is so inflammatory that I don't see much of a reason to read past it. Especially since I, and many other people, have been using async Rust in production quite happily for years.
It feels like Rust is trying to be "The Language" suitable for both low-level system programming and high level application development.
But you can't do both. Rust will never be as ergonomic and simple to work in as Java, Go, OCaml, Scala, Erlang/Elixir, and other high-level languages. Yet this async split brings a perilous language schism somewhat akin to D's GC/non-GC dialects, where people have to write and maintain two versions of libraries. And I doubt that parametric async will fully solve the problem.
However, Tokio tries to be the best of both threads and async and sometimes ends up being the worst of both when Sync/Send/etc creep into function signatures.
https://blog.djha.skin/blog/the-down-sides-of-gos-goroutines...
From this recent discussion:
It's not to say that Go is bad in this regard! It is just (always) doing the heavy lifting for you of abstracting over different colors of functions. This may have some performance or compatibility (especially wrt FFI) concerns.
Rust chose not to do this; which approach is "right" is subjective and will likely be argued elsewhere in this thread.
Async (in any language) is not a panacea. Async is for allowing multiple things to make progress simultaneously that would otherwise be blocked on I/O. If you thread them your threads will be independently blocked on I/O and you will have additional locking overhead. If you have an embarrassingly parallel task and you aren’t blocked on I/O of course async will be slower than pure parallelism because that’s not what it’s for. It’s almost literally so you can have one async thread consuming exactly 1 CPU doing all the I/O and it will all make good progress.
    func main() {
        ch := make(chan string, 2) // buffered, so the no-goroutine version can't deadlock
        sendMsg(ch, "one")
        sendMsg(ch, "two")
        fmt.Println(<-ch)
        fmt.Println(<-ch)
    }

    func sendMsg(ch chan string, msg string) {
        go func() {
            // ...
            ch <- msg
        }()
    }

I argue that this code is all-red when sendMsg is allowed to spawn an extra (green) thread to do its work (at the `go` statement). The order of the prints in main is unknown. If you remove the `go`, the code becomes all-blue and the order of the prints becomes known.

I doubt it would have been added to the language if it was just for those use cases though.
I think part of what is feeding this sort of backlash against it is the way that it creates two different Rust ecosystems. One of them, the non-async version, being decidedly a second-class citizen.
Similarly, if someone said "trying to marry async to a language with lifetime analysis and no GC will not work", it would be reasonable to point to Rust as a counterexample, even though Rust async has various problems.
It is possible! They're called blue (sync) functions.
Choosing performance as your #1 priority is often a bad idea, as it gets you into a straight-jacket from the start, making everything else much more difficult and slowing down development to a crawl. Unless you're developing an OS kernel, perhaps. Computers are fast enough these days, let them do part of the work for you! And you can always write a faster version of your software when there is demand for it.
I think that's awesome. They've been afraid to "bless" an executor for good reasons, but pollster has 0 chance of "winning" even if blessed since it lacks so many features. However it's a solution to the problem you expressed: I/O crates can be async and used with pollster in sync contexts.
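For anyone unfamiliar with pollster: its whole job is essentially a `block_on` like the following std-only sketch (hand-rolled here, so details differ from the real crate) — park the calling thread until the future's waker fires, then poll again:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Thread-parking waker: wake() flips the flag and notifies the blocked thread.
struct Signal {
    woken: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

// Run any future to completion on the current thread, pollster-style.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let signal = Arc::new(Signal {
        woken: Mutex::new(false),
        cond: Condvar::new(),
    });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => {
                // Sleep until some I/O source calls wake() on our Signal.
                let mut woken = signal.woken.lock().unwrap();
                while !*woken {
                    woken = signal.cond.wait(woken).unwrap();
                }
                *woken = false;
            }
        }
    }
}

fn main() {
    // An async block driven from a fully synchronous context.
    let answer = block_on(async { 40 + 2 });
    println!("{answer}");
}
```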
https://github.com/embassy-rs/embassy?tab=readme-ov-file#rus...
"Rust's async/await allows for unprecedently easy and efficient multitasking in embedded systems. Tasks get transformed at compile time into state machines that get run cooperatively. It requires no dynamic memory allocation, and runs on a single stack, so no per-task stack size tuning is required. It obsoletes the need for a traditional RTOS with kernel context switching, and is faster and smaller than one!"
I'm just toying with Raspberry Pi Pico and it's pretty nice.
Go and Rust have different use cases, the async-await is nice at a low level.
You can also keep async relatively local to a function that does these things and is itself blocking otherwise.
> ...you'll only be pretending it doesn't exist
Which is what I was providing evidence that Go does.
It removes coloring for the user by handling it under the hood. The linked article calls this "colorblind instead of colorless".
I do make separate threads when necessary (e.g. to encapsulate blocking I/O).
It can approximate an Erlang experience.
But with a lot more boilerplate and lack of good actor library patterns.
Let's look at a less loaded example:
"Trying to remove the distinction between stack and heap allocation will not work, and you'll only be pretending that it doesn't exist."
It's true that on some level there's going to be a distinction between stack and heap allocation. But it totally does work to abstract away from this distinction ('pretend that it doesn't exist'). Go, for example, will usually allocate non-escaping values on the stack, but unless you are tweaking your code for performance, you'll never have to worry about this.
You can write inefficient code and optimize it later.
> it gets you into a straight-jacket from the start, making everything else much more difficult and slows down development time to a crawl. Unless you're developing an OS kernel perhaps
The argument seems to break down: Surely you don't want to be in a strait-jacket if you're developing an OS kernel. Somehow Rust is equated with always being in a strait-jacket.
The cost of writing highly concurrent programs is pretty much the same in every language except ones that have concurrency at the core (Erlang). I don't see much difference between starting with Java or Rust in terms of avoiding complexity caused by having to build things that a concurrent runtime could give to you for free.
Am I mistaken when I say that `ToOwned` is sometimes zero-cost?
And that `.to_owned()` vs. `.clone()` is free when the trait instances allow it?
For example, to_owned on an owned type is a no-op typically (it's a blanket implementation).
Clone on a unit struct or a unit enum variant is also a no-op in most cases (unless explicitly implemented not to be, which is very much frowned upon).
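Worth checking against the actual blanket impl: `impl<T: Clone> ToOwned for T` just forwards to `clone()`, so the cost is entirely per-type — free for a zero-sized unit struct, a fresh heap allocation for a `String`:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Marker; // zero-sized unit struct: derived Clone copies zero bytes

fn main() {
    // Clone on a zero-sized type compiles down to nothing.
    let m = Marker;
    let m2 = m.clone();
    assert_eq!(m, m2);

    // to_owned() on a &str allocates a String: definitely not zero cost.
    let borrowed: &str = "hello";
    let owned: String = borrowed.to_owned();

    // Via the blanket impl, to_owned() on an already-owned String is just
    // clone() -- i.e. another heap allocation, not a no-op.
    let copy: String = owned.to_owned();
    assert_eq!(owned, copy);
    println!("{copy}");
}
```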
If you're developing an OS, there is no escaping from the straight-jacket. Your design freedom is severely limited by the fact that your constraints include all applications that will run on your OS.
(Actually, they can, but you're going to stop the whole scheduler, or at least one of its worker threads, which is something you really don't want to do...)
But I was wondering if the same thing could be brought to Rust, while still keeping the runtime away from the language. I probably forgot to mention Rust in the grandparent comment.
> There are two key drawbacks to this otherwise interesting and useful decision. First, Go can't have exceptions. Second, Go does not have the ability to synchronize tasks in real (wall clock) time. Both of these drawbacks stem from Go's emphasis on coroutines.
1) Go can't have exceptions? What exactly are panics, if not a peculiar implementation of exceptions? They print the stack trace of the panicking goroutine, just like exceptions print the stack trace of the thread they're thrown in. What exactly is the difference?
2) For real-time workloads, you can pin a goroutine to an OS thread and use a spinlock. How does this make it different from any other language?
> Since goroutine stacks are thus made disparate -- goroutines do not "share" common "ancestor" stack frames like Scheme's continuations do -- they can unwind their own stacks. However, this also means that when a goroutine is spawned, it has no memory of its parent, nor the parent for the child. This has already been noticed by other thinkers as a bad thing.
Goroutines are made to resemble lightweight threads. Maybe the author considers threads bad, but that's just a subjective opinion. But-- at the end of the blog, there's a sentence:
> OS threads provide some very nice constructs for programmers, and are hardened, battle-tested tools.
Goroutines provide almost exactly the same semantics as OS threads, so I don't really get what they're trying to say.
> Consider something of a converse scenario: Goroutine a spawns a goroutine b, without using an anonymous function this time. No closure, just a simple function spawn. Coroutine a opens a database connection. Goroutine b panics, crashing the program. The database connection is then left open as a zombie TCP connection.
On any sane OS, when the program crashes, the kernel closes the TCP connection - there is no such thing as a "zombie" TCP connection.
With all due respect to whoever the author is, I think this blogpost is full of crap.
This story seems to very much belong on HN. Just because the statement is opinionated and some users don't like it, it doesn't mean that we can't debate about its merits.
- Leaky abstraction - check
- Violation of the zero-cost abstractions principle - check
- Major degradation in developer's productivity - check
- Most advertised benefits are imaginary, too expensive (unless you are AAA) or can be achieved without it - check
I think you've misunderstood. "Zero cost abstraction" is not the same as "zero cost".