zlacker

[parent] [thread] 29 comments
1. xedrac+(OP)[view] [source] 2024-01-19 23:38:16
When writing an IO-bound application, async Rust is great. It's less great for libraries that want to support both async and sync without making an async runtime a dependency when you only want the sync interface. Mutually exclusive features are unfortunately taboo. One thing I really love about Haskell is that you can make any function run in a green thread simply by composing it with the 'async' function; there's nothing special about it. This works much better than in, say, Go, because Haskell is immutable.
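Roughly what I mean (a quick untested sketch; 'expensive' is just a made-up pure function, the rest is the standard async package):

    import Control.Concurrent.Async (async, wait)
    import Control.Exception (evaluate)

    -- An ordinary pure function; nothing about it knows it will run concurrently.
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    main :: IO ()
    main = do
      -- 'async' starts the work on a green thread and returns a handle...
      a <- async (evaluate (expensive 50000000))
      b <- async (evaluate (expensive 10000000))
      -- ...and 'wait' blocks until each result is ready.
      x <- wait a
      y <- wait b
      print (x + y)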
replies(4): >>XorNot+S >>jeremy+hg >>anon29+mk >>fulafe+fE1
2. XorNot+S[view] [source] 2024-01-19 23:42:57
>>xedrac+(OP)
I'm not clear on what your last sentence has to do with the rest. What does immutability have to do with async/sync conversions?
replies(2): >>polyga+f4 >>throwa+v4
◧◩
3. polyga+f4[view] [source] [discussion] 2024-01-20 00:11:51
>>XorNot+S
Immutability is great for multithreaded/async programs because every thread can rest assured that no other thread can sneakily modify the objects it is currently operating on.
replies(3): >>candid+o4 >>IlliOn+tf >>wredue+Ql
◧◩◪
4. candid+o4[view] [source] [discussion] 2024-01-20 00:12:57
>>polyga+f4
Go can prevent this with the race detector, among other things.
replies(3): >>stouse+D4 >>binary+5a >>omgint+Wl
◧◩
5. throwa+v4[view] [source] [discussion] 2024-01-20 00:13:36
>>XorNot+S
you can fearlessly run code on another thread if you're not worried it's going to cause a data race or mutate anything
replies(1): >>XorNot+k5
◧◩◪◨
6. stouse+D4[view] [source] [discussion] 2024-01-20 00:14:54
>>candid+o4
Sometimes.
◧◩◪
7. XorNot+k5[view] [source] [discussion] 2024-01-20 00:19:44
>>throwa+v4
This is very much not free though. Predicting memory usage in Haskell programs is notoriously tricky (and all the memory copies aren't free either).
replies(3): >>c-cube+I6 >>bippih+gc >>whatev+l32
◧◩◪◨
8. c-cube+I6[view] [source] [discussion] 2024-01-20 00:33:03
>>XorNot+k5
It's also the case with OCaml, Elixir, Clojure, etc. Non-lazy languages can also have a rich collection of immutable data structures while having more predictable memory usage than Haskell. On the other hand, Go doesn't have a culture or features that encourage immutability.
◧◩◪◨
9. binary+5a[view] [source] [discussion] 2024-01-20 01:06:25
>>candid+o4
The race detector needs to actually encounter a race in order to detect it, it's not a complete static analysis.
◧◩◪◨
10. bippih+gc[view] [source] [discussion] 2024-01-20 01:25:55
>>XorNot+k5
It's not free, but it is explicit. It's nice for the code to explicitly define how the memory is being modified. There are copies in Java too; you just have to know how the runtime works to know what each line does.
replies(1): >>ric2b+Ne
◧◩◪◨⬒
11. ric2b+Ne[view] [source] [discussion] 2024-01-20 01:51:12
>>bippih+gc
Copies in Haskell are not explicit at all, though?
◧◩◪
12. IlliOn+tf[view] [source] [discussion] 2024-01-20 01:56:59
>>polyga+f4
How does Haskell deal with access to shared resources that are mutable by nature, like the file system or the outside world?

(An honest question; I'm starting to think I'd like to learn more about this language.)

replies(2): >>andyfe+Lh >>whatev+B22
13. jeremy+hg[view] [source] 2024-01-20 02:04:45
>>xedrac+(OP)
All threads in Haskell are green. Async just gives you another way to get return values from threads without having to use MVars or Chans.
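A rough sketch of the difference (untested, function names are mine):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Concurrent.Async (async, wait)

    -- With bare forkIO you thread the result back yourself through an MVar:
    viaMVar :: IO Int
    viaMVar = do
      box <- newEmptyMVar
      _ <- forkIO (putMVar box (2 + 2))  -- child green thread writes its result
      takeMVar box                       -- parent blocks until the value arrives

    -- The async package wraps that plumbing (and rethrows child exceptions on wait):
    viaAsync :: IO Int
    viaAsync = do
      a <- async (pure (2 + 2))
      wait a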
◧◩◪◨
14. andyfe+Lh[view] [source] [discussion] 2024-01-20 02:22:19
>>IlliOn+tf
AFAIK they tend to operate through the IO monad, which serves to order read/write events and mark parts of your code as interacting with the global mutable state that lives outside your program.

So the mutable (or is it “volatile”?) environment is there, but you explicitly know when and where you interact with it.
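A tiny illustration of that split (my own sketch, untested; the file name is made up):

    import Data.Char (toUpper)

    -- A pure function: no access to files, clocks, or sockets,
    -- so nothing outside the program can change what it returns.
    shout :: String -> String
    shout s = map toUpper s ++ "!"

    -- Anything touching the outside world lives in IO, so the type
    -- signature tells you exactly where the mutable environment leaks in.
    main :: IO ()
    main = do
      name <- readFile "name.txt"  -- effect: reads the file system
      putStrLn (shout name)        -- effect: writes to stdout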

15. anon29+mk[view] [source] 2024-01-20 02:53:51
>>xedrac+(OP)
> This works much better than say Go, because Haskell is immutable.

The immutability has nothing to do with async. Async is for IO threads. If you want pure parallelism you use `par`. But Haskell IO threads (forkIO and friends) are also green when run with GHC.
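For reference, `par` looks something like this (sketch using Control.Parallel from the parallel package, untested):

    import Control.Parallel (par, pseq)

    -- Pure parallelism: spark the evaluation of 'a' in parallel, make sure 'b'
    -- is evaluated in this thread, then combine. No IO, no thread handles.
    parSum :: Int -> Int -> Int
    parSum n m = a `par` (b `pseq` (a + b))
      where
        a = sum [1 .. n]
        b = sum [1 .. m]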

replies(1): >>xedrac+Eo
◧◩◪
16. wredue+Ql[view] [source] [discussion] 2024-01-20 03:10:20
>>polyga+f4
Immutability is, quite possibly, the dumbest “silver bullet” solution ever to be praised as a solution to anything.

Congratulations, nobody is going to sneakily update an object on you, but also, nobody knows about your updates either.

It’s not a worthwhile trade off given the massive extra work it causes.

replies(3): >>throwa+0w >>Fire-D+vy >>whatev+X22
◧◩◪◨
17. omgint+Wl[view] [source] [discussion] 2024-01-20 03:10:56
>>candid+o4
Detectors detect, they don’t prevent. All detectors suffer misses.
◧◩
18. xedrac+Eo[view] [source] [discussion] 2024-01-20 03:45:01
>>anon29+mk
Async is definitely nicer when things are immutable. On modern CPUs, async green threads can easily be backed by different OS threads running in parallel on different CPU cores, making data races a real problem for many languages. Async does not guarantee that things will not be run in parallel, although you shouldn't rely on it for explicit parallelism.
replies(1): >>anon29+es1
◧◩◪◨
19. throwa+0w[view] [source] [discussion] 2024-01-20 05:14:26
>>wredue+Ql
Completely uninformed take. Some of the most impressive update-notification systems are built on pass-as-immutable runtimes (for example, Phoenix LiveView + Phoenix PubSub). Try implementing that in just about any other language; you will trip over yourself eight ways to hell.

The whole idea of CQRS is to build separate (segregated) pathways for updates. Immutable passing plays extremely well with CQRS. The alternative is the complete clusterfuck that is two-way data bindings (e.g. out-of-the-box AngularJS).

replies(1): >>tgv+XF
◧◩◪◨
20. Fire-D+vy[view] [source] [discussion] 2024-01-20 05:48:22
>>wredue+Ql
Immutability frees the mind from so much baggage when developing that I'm always shocked it didn't become mainstream.
◧◩◪◨⬒
21. tgv+XF[view] [source] [discussion] 2024-01-20 07:47:10
>>throwa+0w
I think you both are referring to the same point: you can't update an immutable object, so you have to set up some mechanism to keep changes in sync.
replies(1): >>throwa+BI
◧◩◪◨⬒⬓
22. throwa+BI[view] [source] [discussion] 2024-01-20 08:27:26
>>tgv+XF
Yeah, and update mechanisms are not created equal. Two-way data bindings suck because they elide the challenges of distributed consistency.

When you're immutable, you can still delete or replace data.

replies(1): >>wredue+TG1
◧◩◪
23. anon29+es1[view] [source] [discussion] 2024-01-20 15:29:52
>>xedrac+Eo
Haskell's async runs in IO, though, in which mutability is allowed. Async itself is mutable.
24. fulafe+fE1[view] [source] 2024-01-20 16:32:17
>>xedrac+(OP)
I'm not sure it's worth it even for most IO-bound applications. The first couple of IO-bound examples that come to mind (an app doing bulk disk or network IO, e.g. sequential file access or bulk data transfer) would logically seem to work just as well without async, since the bottleneck is the disk, the network card, or the connection.

I'd guess it could be an advantage for high-concurrency applications that are CPU-bound but could be made IO-bound by optimizing the userspace code. But OS threads are pretty efficient and you can have zillions of them, so the async upside is quite bounded and this niche would seem smallish.

◧◩◪◨⬒⬓⬔
25. wredue+TG1[view] [source] [discussion] 2024-01-20 16:44:03
>>throwa+BI
Immutability “maybe” having certain use cases where it works well (and that's a massive grain of salt, because this is not a specific thing I've ever worked on, so I can't say otherwise) is not the same thing as making literally every single object in your entire application immutable.

I agree that immutability is a tool. My issue with it is when you treat it as a rule.

◧◩◪◨
26. whatev+B22[view] [source] [discussion] 2024-01-20 18:35:10
>>IlliOn+tf
Haskell has full support for IO and mutability. It even has software transactional memory in its standard library.
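A rough sketch of the STM bit (stm + async packages, untested): a shared counter bumped from many green threads without locks:

    import Control.Concurrent.STM (atomically, modifyTVar', newTVarIO, readTVarIO)
    import Control.Concurrent.Async (mapConcurrently_)

    main :: IO ()
    main = do
      counter <- newTVarIO (0 :: Int)
      -- 1000 concurrent increments; each runs as a single atomic transaction.
      mapConcurrently_ (\_ -> atomically (modifyTVar' counter (+ 1))) [1 .. 1000 :: Int]
      readTVarIO counter >>= print  -- prints 1000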
◧◩◪◨
27. whatev+X22[view] [source] [discussion] 2024-01-20 18:37:14
>>wredue+Ql
> Congratulations, nobody is going to sneakily update an object on you

I've seen Heisenbugs where some random code calls a setter on an object in a shared memory cache. The setter call was for local logic, so an immutable update would've saved the day. It had real-world impact too: we ordered a rack with a European plug for an American data center (I think a human in the loop caught it, thankfully).

Also, how often do you even use mutability, really? Like... for what? Logic is easier to express with expressions than with a Rube Goldberg loop mutating state, imo.

replies(1): >>wredue+j64
◧◩◪◨
28. whatev+l32[view] [source] [discussion] 2024-01-20 18:40:02
>>XorNot+k5
Predicting memory usage in Haskell programs isn't actually tricky. At least, I stopped thinking so once I became an intermediate Haskeller. It's not that hard to have a mental model of the Haskell RTS, the same as you'd have of the JVM.

Having the ability to do so generally is table stakes for being an intermediate professional programmer, imo. In university, I had to draw diagrams explaining the state of the C stack and heap after each line of code. That's the same thing. And I was 19, lmao. It's not hard.

Maybe you're referring to space leaks? I've run into like 2 in my ten-year Haskell career, and neither hit prod.

I've actually seen more Go and Java space leaks/OOM bugs hit prod than Haskell, despite having fewer total years using those languages than Haskell! Nobody blamed the language for those, though :/

◧◩◪◨⬒
29. wredue+j64[view] [source] [discussion] 2024-01-21 14:26:22
>>whatev+X22
>how many Heisenbugs

I suspect, given real, actual measurements, that the number of difficult-to-deal-with bugs is pretty consistent between immutability and mutability. Actual measurements do not support claims of “easier to reason about” or “reduced bugs”.

>how often do you use mutability

Whenever something should change and I don’t specifically need functionality that immutability might provide (literally 99.99999999% of every state change).

replies(1): >>whatev+6c4
◧◩◪◨⬒⬓
30. whatev+6c4[view] [source] [discussion] 2024-01-21 15:05:12
>>wredue+j64
I'm just confused as to what you need mutability for exactly? I get needing it for communicating between processes (STM has you covered there). But for "normal" code that is doing pure logic, what is the benefit of using mutability?

Immutability has some big advantages for pure logic, such as allowing containers to be treated as values the same as numbers. And efficient immutable data structures of all kinds are commonplace now.
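What I mean by containers as values (sketch using Data.Map from the containers package, untested):

    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = do
      let prices  = Map.fromList [("apple", 3), ("pear", 4 :: Int)]
          -- "Updating" produces a new map; the old one is untouched and shares
          -- most of its structure with the new one, so this is cheap.
          prices' = Map.insert "plum" 5 prices
      print (Map.lookup "plum" prices)   -- Nothing
      print (Map.lookup "plum" prices')  -- Just 5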

[go to top]