zlacker

[parent] [thread] 80 comments
1. bnchrc+(OP)[view] [source] 2025-12-13 17:46:50
Gleam is a beautiful language, and what I wish Elixir would become (re:typing).

For those that don't know, it's also built upon OTP, the Erlang VM that makes concurrency and queues a trivial problem, in my opinion.

Absolutely wonderful ecosystem.

I've been wanting to make Gleam my primary language, but I fear LLMs have frozen programming language advancement and adoption for anything past 2021.

But I am hopeful that Gleam has slid just under the closing door and LLMs will get up to speed on it fast.

replies(7): >>sbuttg+z7 >>agos+N8 >>Uehrek+Qb >>innoce+1V >>market+k91 >>troupo+ac1 >>devale+zL1
2. sbuttg+z7[view] [source] 2025-12-13 18:39:58
>>bnchrc+(OP)
> For those that don't know its also built upon OTP, the erlang vm

This isn't correct. It can compile to run on the BEAM: that is the Erlang VM. OTP isn't the Erlang VM; rather, "OTP is a set of Erlang libraries and design principles providing middle-ware to develop [concurrent/distributed/fault tolerant] systems."

Gleam itself provides what I believe is a substantial subset of OTP support via a library: https://github.com/gleam-lang/otp

Importantly: "Gleam has its own version of OTP which is type safe, but has a smaller feature set. [vs. Elixir, another BEAM language with OTP support]"

replies(1): >>lpil+5c
3. agos+N8[view] [source] 2025-12-13 18:52:08
>>bnchrc+(OP)
the Erlang VM is called BEAM, not OTP. Sadly, Gleam's implementation of OTP is not at the same level as Elixir's or Erlang's.
replies(1): >>lpil+ec
4. Uehrek+Qb[view] [source] 2025-12-13 19:20:34
>>bnchrc+(OP)
> I fear LLMs have frozen programming language advancement and adoption for anything past 2021.

Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore I’ve found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different.

replies(3): >>schrod+hj >>miki12+Jr >>zeroto+4z2
◧◩
5. lpil+5c[view] [source] [discussion] 2025-12-13 19:21:25
>>sbuttg+z7
Hi, I’m the creator of Gleam!

The comment you are replying to is correct, and you are incorrect.

All OTP APIs are usable as normal within Gleam, the language is designed with it in mind, and there’s an additional set of Gleam-specific additions to OTP (which you have linked there).

Gleam does not have access to only a subset of OTP, and it does not have its own distinct OTP-inspired framework. It uses the OTP framework.
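For readers who haven’t seen it, this is roughly what a typed actor looks like with the gleam/otp library. A minimal sketch based on my recollection of the pre-1.0 gleam_otp actor API; function names and argument order may differ in current releases, so treat this as approximate:

```gleam
import gleam/otp/actor

// The messages this actor accepts, checked at compile time:
// sending anything else is a type error.
pub type Message {
  Increment
  Reset
}

// The actor's loop function: receive a message, return the next state.
fn handle(message: Message, state: Int) -> actor.Next(Message, Int) {
  case message {
    Increment -> actor.continue(state + 1)
    Reset -> actor.continue(0)
  }
}

pub fn start() {
  // Start the actor with an initial state of 0.
  actor.start(0, handle)
}
```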

replies(3): >>tazjin+Vf >>miki12+Np >>sbuttg+2C
◧◩
6. lpil+ec[view] [source] [discussion] 2025-12-13 19:22:31
>>agos+N8
Gleam uses regular OTP, it doesn’t have a distinct OTP inspired framework. Source: I’m the author of Gleam.
replies(4): >>girvo+aq >>andy_p+rq >>constr+fP1 >>agos+Kr2
◧◩◪
7. tazjin+Vf[view] [source] [discussion] 2025-12-13 19:47:55
>>lpil+5c
(I know Erlang well, but haven't used Gleam)

The library the parent links to says this:

> Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included.

Does this mean in practice that you can use all parts of OTP, but you might lose type checking for the parts the library doesn't cover?

replies(1): >>lpil+Er
◧◩
8. schrod+hj[view] [source] [discussion] 2025-12-13 20:13:57
>>Uehrek+Qb
The motivation isn’t there to create new languages for humans when you’re programming at a higher level of abstraction now (AI prompting).

It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly.

I hope it’s not true, but I believe that’s what OP meant and I think the concern is valid!

replies(4): >>pxc+Sl >>rapind+rm >>merlin+jn >>abound+yn
◧◩◪
9. pxc+Sl[view] [source] [discussion] 2025-12-13 20:31:39
>>schrod+hj
> It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly.

Isn't that what WASM is? Or more or less what is going on when people devise a new intermediate representation for a new virtual machine? Creating new assembly languages is a useful thing that people continue to do!

◧◩◪
10. rapind+rm[view] [source] [discussion] 2025-12-13 20:35:04
>>schrod+hj
We may end up using AI to create simplified bespoke subset languages that fit our preferences. Like a DSL of sorts but with better performance characteristics than a traditional DSL and a small enough surface area.
◧◩◪
11. merlin+jn[view] [source] [discussion] 2025-12-13 20:40:59
>>schrod+hj
I believe prompting an AI is more like delegation than abstraction, especially considering the non-deterministic nature of the results.
replies(1): >>sarche+su
◧◩◪
12. abound+yn[view] [source] [discussion] 2025-12-13 20:42:31
>>schrod+hj
I would argue it's more important than ever to make new languages with new ideas as we move towards new programming paradigms. I think the existence of modern LLMs encourages designing a language with all of the following attributes:

- Simple semantics (e.g. easy to understand for developers + LLMs, code is "obviously" correct)

- Very strongly typed, so you can model even very complex domains in a way the compiler can verify

- Really good error messages, to make agent loops more productive

- [Maybe] Easily integrates with existing languages, or at least makes it easy to port from existing languages

We may get to a point where humans don't need to look at the code at all, but we aren't there yet, so making the code easy to vet is important. Plus, there are a few bajillion lines of legacy code that we need to deal with; wouldn't it be cool if you could port it (or at least extend it) into some standardized, performant, LLM-friendly language for future development?

replies(2): >>kevind+Ms >>aaronb+2z
◧◩◪
13. miki12+Np[view] [source] [discussion] 2025-12-13 20:57:08
>>lpil+5c
> Hi, I’m the creator of Gleam!

What's the state of Gleam's JSON parsing / serialization capabilities right now?

I find it to be a lovely little language, but having to essentially write every type three times (once for the type definition, once for the serializer, once for the deserializer) isn't something I'm looking forward to.

A functional language that can run both on the backend (BEAM) and frontend (JS) lets one do a lot of cool stuff, like optimistic updates, server reconciliation, easy rollback on failure, etc., but that requires making actions (and likely also states) easily serializable and deserializable.
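To make the "three times" point concrete, here is a minimal sketch using the gleam/json package and the stdlib decoder module, with API names as I recall them from recent versions (treat them as approximate):

```gleam
import gleam/dynamic/decode
import gleam/json

// 1. The type definition.
pub type User {
  User(name: String, age: Int)
}

// 2. The serialiser, written by hand.
pub fn user_to_json(user: User) -> json.Json {
  json.object([
    #("name", json.string(user.name)),
    #("age", json.int(user.age)),
  ])
}

// 3. The deserialiser, also written by hand.
pub fn user_decoder() -> decode.Decoder(User) {
  use name <- decode.field("name", decode.string)
  use age <- decode.field("age", decode.int)
  decode.success(User(name:, age:))
}
```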

replies(3): >>lawn+9r >>lpil+sr >>worthl+jh1
◧◩◪
14. girvo+aq[view] [source] [discussion] 2025-12-13 20:59:37
>>lpil+ec
I wonder why so many have got this wrong across this thread? Was it true once upon a time or something, or have people just misunderstood your docs or similar?
replies(1): >>lpil+Cs
◧◩◪
15. andy_p+rq[view] [source] [discussion] 2025-12-13 21:01:16
>>lpil+ec
"Big Elixir" must be paying people to misunderstand Gleam today eh ;-)
◧◩◪◨
16. lawn+9r[view] [source] [discussion] 2025-12-13 21:05:53
>>miki12+Np
This is also what really annoyed me when I tried out Gleam.

I'm waiting for something similar to serde in Rust, where you simply tag your type and it'll generate type-safe serialization and deserialization for you.

Gleam has some feature to generate the code for you via the LSP, but it's just not good enough IMHO.

replies(1): >>lpil+Nr
◧◩◪◨
17. lpil+sr[view] [source] [discussion] 2025-12-13 21:08:10
>>miki12+Np
You can generate those conversions, most people do.

But also, you shouldn’t think of it as writing the same type twice! If you couple your external API and your internal data model you are greatly restricting your domain modelling capability. Even in languages where JSON serialisation works with reflection I would recommend having a distinct definition for the internal and external structure so you can have the optimal structure for each context, dodging the “lowest common denominator” problem.

replies(2): >>miki12+Ds >>premek+DI
◧◩◪◨
18. lpil+Er[view] [source] [discussion] 2025-12-13 21:09:41
>>tazjin+Vf
No, it means that one specific package only offers bindings to certain parts. It’s the documentation for one library, not the language.
◧◩
19. miki12+Jr[view] [source] [discussion] 2025-12-13 21:10:12
>>Uehrek+Qb
There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in.

Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.

replies(5): >>crysta+sE >>croes+BU >>treyd+G41 >>CraigJ+hp1 >>ferfum+hM1
◧◩◪◨⬒
20. lpil+Nr[view] [source] [discussion] 2025-12-13 21:10:38
>>lawn+9r
Multiple such tools exist and have done for years. Serde isn’t a Rust-core project, and similarly the Gleam alternatives are not Gleam-core.
replies(1): >>lawn+lt
◧◩◪◨
21. lpil+Cs[view] [source] [discussion] 2025-12-13 21:14:48
>>girvo+aq
OTP is a very complex subject and quite unusual in its scope, and it’s not even entirely clear what it is. Even in Erlang and Elixir it’s commonly confused, so I think it’s understandable that Gleam has the same problem, compounded further by its more distinct programming style.
◧◩◪◨⬒
22. miki12+Ds[view] [source] [discussion] 2025-12-13 21:14:57
>>lpil+sr
I understand your point, and I agree with it in most contexts! However, for the specific use case where one assumes that the client and server are running the exact same code (and the client auto-refreshes if this isn't the case), and where serialization is only used for synchronizing between the two, decoupling the state from its representation on the wire doesn't really make sense.
replies(1): >>lpil+uE
◧◩◪◨
23. kevind+Ms[view] [source] [discussion] 2025-12-13 21:16:13
>>abound+yn
I think that LLMs will be complemented best with a declarative language, as inserting new conditions/effects in them can be done without modifying much (if any!) of the existing code. Especially if the declarative language is a logic and/or constraint-based language.

We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet.

◧◩◪◨⬒⬓
24. lawn+lt[view] [source] [discussion] 2025-12-13 21:21:02
>>lpil+Nr
Rust has macros that make serde very convenient, which Gleam doesn't have.

Could you point to a solution that provides serde level of convenience?

Edit: The difference between generating code (like with Gleam) and having macros generate the code from a few tags is quite big. Small tweaks are immediately obvious in serde in Rust, but they drown in the noise of the complete serialization code with the Gleam tools.

replies(2): >>lpil+nE >>sshine+eR
◧◩◪◨
25. sarche+su[view] [source] [discussion] 2025-12-13 21:29:01
>>merlin+jn
It goes further than non-determinism. LLM output is chaotic: two nearly identical prompts with a single minor difference can result in two radically different outputs.
◧◩◪◨
26. aaronb+2z[view] [source] [discussion] 2025-12-13 21:59:17
>>abound+yn
This is why I use rust for everything practicable now. Llms make the tedious bits go away and I can just enjoy the fun bits.
◧◩◪
27. sbuttg+2C[view] [source] [discussion] 2025-12-13 22:19:15
>>lpil+5c
Fair enough, but to be fair to my statements, the quotes I chose were largely from gleam-lang.org or the Gleam OTP library.

Take for example this section of the Gleam website FAQ section:

https://gleam.run/frequently-asked-questions/#how-does-gleam...

"Elixir has better support for the OTP actor framework. Gleam has its own version of OTP which is type safe, but has a smaller feature set."

At least on the surface, "but has a smaller feature set" suggests that there are features left off the table, which I think it would be fair to read as a subset of support.

If I look at this statement from the Gleam OTP Library `readme.md`:

"Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included. Other features are still in development, such as further process supervision strategies."

That quote leaves the impression that OTP is not fully supported and therefore only a subset is. It doesn't expound further to say unsupported OTP functionality is alternatively available by accessing the Erlang modules/functions directly or through other mechanisms.

In all of this I'll take your word for it over the website and readme files; these things are often not written directly by the principals and are often not kept as up-to-date as you'd probably like. Still even taking that at face value, I think it leaves some questions open. What is meant by supporting all of OTP? Where the documentation and library readme equivocates to full OTP support, are there trade-offs? Is "usable as normal" usable as normal for Erlang or as normal for Gleam? For example, are the parts left out of the library available via directly accessing the Erlang modules/functions, but only at the cost of abandoning the Gleam type safety guarantees for those of Erlang? How does this hold for Gleam's JavaScript compilation target?

As you know, Elixir also provides for much OTP functionality via direct access to the Erlang libraries. However, there I expect the distinction between Elixir support and the Erlang functionality to be substantially more seamless than with Gleam: Elixir integrates the Erlang concepts of typing (etc.) much more directly than does Gleam. If, however, we're really talking about full OTP support in Gleam while not losing the reasons you might choose Gleam over Elixir or Erlang, which I think is mostly going to be about the static typing... then yes, I'm very wrong. If not... I could see how strictly speaking I'm wrong, but perhaps not completely wrong in spirit.

replies(1): >>lpil+eE
◧◩◪◨
28. lpil+eE[view] [source] [discussion] 2025-12-13 22:34:17
>>sbuttg+2C
Ah, that’s good feedback. I agree, that documentation is misleading. I’ll fix them ASAP.

> Elixir also provides for much OTP functionality via direct access to the Erlang libraries.

This is the norm in Gleam too! Gleam’s primary design constraint is interop with Erlang code, so using these libraries is straightforward and commonplace.
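For instance, binding a Gleam function directly to an Erlang function is a one-liner with the `@external` attribute. A minimal sketch; `erlang:unique_integer/0` is just an arbitrary example of an Erlang stdlib function:

```gleam
// Bind a Gleam function directly to an Erlang stdlib function.
// Calling unique_integer() in Gleam calls erlang:unique_integer/0.
@external(erlang, "erlang", "unique_integer")
pub fn unique_integer() -> Int
```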

replies(1): >>gr4vit+EB1
◧◩◪◨⬒⬓⬔
29. lpil+nE[view] [source] [discussion] 2025-12-13 22:35:02
>>lawn+lt
In Gleam code generators are most commonly used, similar to in C#, Go, or Elm.
replies(1): >>lawn+nh1
◧◩◪
30. crysta+sE[view] [source] [discussion] 2025-12-13 22:35:25
>>miki12+Jr
> Because LLMs make it that much faster to develop software

I feel as though "facts" such as this are presented to me all the time on HN, but in my every day job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push against.

I know, I know "they just don't know how to use LLMs the right way!!!", but all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down

If we continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs who continued to devote some of their free time to coding in languages like Gleam, and continued to maintain and sharpen their ability to understand and reason about code.

replies(4): >>Verdex+oW >>Syzygi+gz1 >>blitz_+lU1 >>citize+vq2
◧◩◪◨⬒⬓
31. lpil+uE[view] [source] [discussion] 2025-12-13 22:35:38
>>miki12+Ds
Totally. This is where I would generate them.
◧◩◪◨⬒
32. premek+DI[view] [source] [discussion] 2025-12-13 23:10:10
>>lpil+sr
>You can generate those conversions, most people do.

Hi, what do people use to generate them, I found gserde (edit: and glerd-json)

replies(2): >>okkdev+2W >>lpil+lp1
◧◩◪◨⬒⬓⬔
33. sshine+eR[view] [source] [discussion] 2025-12-14 00:30:21
>>lawn+lt
> Rust has macros that make serde very convenient, which Gleam doesn't have.

To be fair, Rust's proc macros are only locally optimal:

While they're great to use, they're only okay to program.

Your proc-macro needs to live in another crate, and writing proc macros is difficult.

Compare this to dependently typed languages or Zig's comptime: it should be easier to make derive(Serialize, Deserialize) a compile-time feature inside the host language.

The fact that Gleam doesn't have Rust's derivation leaves room for a future where this is solved even better.

◧◩◪
34. croes+BU[view] [source] [discussion] 2025-12-14 01:02:35
>>miki12+Jr
I bet LLMs create their own version of Jevons paradox.

More trial and error because trial is cheap; in the end, less typing but hardly faster end results.

35. innoce+1V[view] [source] 2025-12-14 01:07:53
>>bnchrc+(OP)
I don’t mean to minimize the huge effort by the Gleam team; however, Elixir cannot become Gleam without breaking OTP/BEAM in the same ways Gleam does. As it stands now, Elixir is the superior language between the two, if using the full Erlang VM is your goal.
replies(1): >>worthl+vh1
◧◩◪◨⬒⬓
36. okkdev+2W[view] [source] [discussion] 2025-12-14 01:18:42
>>premek+DI
The language server code action :)
◧◩◪◨
37. Verdex+oW[view] [source] [discussion] 2025-12-14 01:23:24
>>crysta+sE
This last week:

* One developer tried to refactor a bunch of graph ql with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were api tests.

* One developer has an LLM making his PRs. He slurped up my unfinished branch, PRed it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.

* And I forgot to turn off LLM auto complete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog standard intellisense did in seconds what I needed after turning off LLM auto complete.

LLMs sometimes save me some time. But overall I'm sitting at a pretty big amount of time wasted by them that the savings have not yet offset.

replies(2): >>brabel+es1 >>st3fan+4E1
◧◩◪
38. treyd+G41[view] [source] [discussion] 2025-12-14 03:21:07
>>miki12+Jr
I don't think this is actually true. LLMs have an impressive amount of ability to do knowledge-transfer between domains, it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

If this does appear to become a problem, it should not be hard to apply the same RLHF infrastructure that's used to get LLMs effective at writing syntactically correct code that accomplishes sets of goals in existing programming languages to new ones.

replies(1): >>troupo+Dc1
39. market+k91[view] [source] 2025-12-14 04:37:42
>>bnchrc+(OP)
i just implemented a project in elixir with LLM support and would never have considered that before (i had never used elixir before). So who knows, maybe it will help adoption?
40. troupo+ac1[view] [source] 2025-12-14 05:26:46
>>bnchrc+(OP)
> what I wish Elixir would become (re:typing).

Elixir is slowly rolling out set-theoretic typing: https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...

replies(1): >>tasuki+xi1
◧◩◪◨
41. troupo+Dc1[view] [source] [discussion] 2025-12-14 05:33:27
>>treyd+G41
> LLMs have an impressive amount of ability to do knowledge-transfer between domains, it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer".

Personal anecdote: Claude stopped writing Java-like Elixir only some time around summer this year (Elixir is 13 years old), and is still incapable of writing "modern HEEx", which changed some of the templating syntax in Phoenix almost two years ago.

◧◩◪◨
42. worthl+jh1[view] [source] [discussion] 2025-12-14 06:56:10
>>miki12+Np
Also: the LSP can now generate serialisers and deserialisers for some types, IIRC.
◧◩◪◨⬒⬓⬔⧯
43. lawn+nh1[view] [source] [discussion] 2025-12-14 06:58:12
>>lpil+nE
Yes, my point is that it's not a good experience.
replies(1): >>lpil+ap1
◧◩
44. worthl+vh1[view] [source] [discussion] 2025-12-14 07:01:16
>>innoce+1V
I use many of the OTP functions in Gleam on the regular; what functionality can't I call?

Gleam can call any Erlang function, and can somewhat handle the idc types [I'm sure it has another name].

Did I miss something that Gleam fails on? Because this is one of my concerns.

replies(1): >>innoce+h83
◧◩
45. tasuki+xi1[view] [source] [discussion] 2025-12-14 07:20:36
>>troupo+ac1
I dunno, my unfounded guess is that gradual type systems are super complex and very hard to get right.

Why use something complex and half working, when you can have the real thing?

replies(2): >>troupo+pn1 >>foxyge+LU2
◧◩◪
46. troupo+pn1[view] [source] [discussion] 2025-12-14 08:34:48
>>tasuki+xi1
They are working towards "the real thing", whatever your definition of real is.

BTW in the 90s people tried to come up with a type system for Erlang, and failed:

--- start quote ---

Phil Wadler [1] and Simon Marlow [2] worked on a type system for over a year and the results were published in [3]. The results of the project were somewhat disappointing. To start with, only a subset of the language was type-checkable, the major omission being the lack of process types and of type checking inter-process messages. Although their type system was never put into production, it did result in a notation for types which is still in use today for informally annotating types.

Several other projects to type check Erlang also failed to produce results that could be put into production. It was not until the advent of the Dialyzer [4] that realistic type analysis of Erlang programs became possible.

https://lfe.io/papers/%5B2007%5D%20Armstrong%20-%20HOPL%20II...

--- end quote ---

[1] Yes, that Philip Wadler, https://en.wikipedia.org/wiki/Philip_Wadler

[2] Yes, that Simon Marlow, https://en.wikipedia.org/wiki/Simon_Marlow

[3] A practical subtyping system for Erlang https://dl.acm.org/doi/10.1145/258948.258962

[4] https://www.erlang.org/doc/apps/dialyzer/dialyzer.html

◧◩◪◨⬒⬓⬔⧯▣
47. lpil+ap1[view] [source] [discussion] 2025-12-14 09:01:38
>>lawn+nh1
I would be very interested in hearing about your experience writing Gleam and the problems you have!

We regularly collect feedback and haven’t had these problems reported, so your feedback saying otherwise would be a useful data point.

replies(1): >>lawn+4r1
◧◩◪
48. CraigJ+hp1[view] [source] [discussion] 2025-12-14 09:02:58
>>miki12+Jr
> but LLMs can only learn languages that programmers write a sufficient amount of code in

i wrote my own language, LLMs have been able to work with it at a good level for over a year. I don't do anything special to enable that - just front load some key examples of the syntax before giving the task. I don't need to explain concepts like iteration.

Also LLMs can work with languages with unconventional paradigms - kdb comes up fairly often in my world (an array language that is also written right to left).

replies(1): >>Xmd5a+Sv1
◧◩◪◨⬒⬓
49. lpil+lp1[view] [source] [discussion] 2025-12-14 09:03:55
>>premek+DI
There’s several options, depending on what you want. The most commonly used option is the language server.
replies(1): >>premek+K02
◧◩◪◨⬒⬓⬔⧯▣▦
50. lawn+4r1[view] [source] [discussion] 2025-12-14 09:30:57
>>lpil+ap1
I might try to collect my thoughts somewhere when I get some time over.

Thank you for Gleam btw, I do really like the rest of the language.

◧◩◪◨⬒
51. brabel+es1[view] [source] [discussion] 2025-12-14 09:52:01
>>Verdex+oW
> One developer tried to refactor a bunch of graph ql with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were api tests.

So the LLM was not told how to run the tests? Without that it cannot know whether what it did works. LLMs are a bit like humans: they try something and then need to check that it does the right thing. Without a test cycle you definitely don’t get a lot out of LLMs.

replies(1): >>Capric+8S1
◧◩◪◨
52. Xmd5a+Sv1[view] [source] [discussion] 2025-12-14 10:53:10
>>CraigJ+hp1
LLMs still struggle with lisp parens though
replies(1): >>igrego+HG2
◧◩◪◨
53. Syzygi+gz1[view] [source] [discussion] 2025-12-14 11:45:46
>>crysta+sE
Using AI to write good code faster is hard work.

I once toured a dairy farm that had been a pioneer test site for Lasix. Like all good hippies, everyone I knew shunned additives. This farmer claimed that Lasix wasn't a cheat because it only worked on really healthy cows. Best practices, and then add Lasix.

I nearly dropped out of Harvard's mathematics PhD program. Sticking around and finishing a thesis was the hardest thing I've ever done. It didn't take smarts. It took being the kind of person who doesn't die on a mountain.

There's a legendary Philadelphia cook who does pop-up meals, and keeps talking about the restaurant he plans to open. Professional chefs roll their eyes; being a good cook is a small part of the enterprise of engineering a successful restaurant.

(These are three stool legs. Neurodivergents have an advantage using AI. A stool is more stable when its legs are further apart. AI is an association engine. Humans find my sense of analogy tedious, but spreading out analogies defines more accurate planes in AI's association space. One doesn't simply "tell AI what to do".)

Learning how to use AI effectively was the hardest thing I've done recently, many brutal months of experiment, test projects with a dozen languages. One maintains several levels of planning, as if a corporate CTO. One tears apart all code in many iterations of code review. Just as a genius manager makes best use of flawed human talent, one learns to make best use of flawed AI talent.

My guess is that programmers who write bad code with AI were already writing bad code before AI.

Best practices, and then add AI.

◧◩◪◨⬒
54. gr4vit+EB1[view] [source] [discussion] 2025-12-14 12:19:09
>>lpil+eE
Thanks for the clarification. I've read about Gleam here and there, and played with it a bit, and thought there was no way to directly access OTP through the Erlang libraries.

This can be just my lack of familiarity with the ecosystem though.

Gleam looks lovely and IMO is the most readable language that runs on the BEAM VM. Good job!

replies(1): >>lpil+zP1
◧◩◪◨⬒
55. st3fan+4E1[view] [source] [discussion] 2025-12-14 12:56:24
>>Verdex+oW
The first two cases indicate that you have some gaps in your change management process: strict requirements for pull requests and CI/CD checks.
56. devale+zL1[view] [source] 2025-12-14 14:18:47
>>bnchrc+(OP)
I've also been wanting to make Gleam my primary language (am generally a Typescript dev), and I have not had any issue with using it with LLMs (caveat, I'm obviously still new with it, so might just be ignorant).

In fact, I'd say most of the Gleam code that has been generated has been surprisingly reliable and easy to reason about. I suspect this has to do with the static typing, incredible language tooling, and small surface area of the language.

I literally just copy the docs from https://tour.gleam.run/everything/ into a local MD file and let it run. Packages are also well documented, and Claude has had no issue looping with tests/type checking.

In the past month I've built the following, all primarily with Claude writing the Gleam parts:

- A websocket-first analytics/feature flag platform (Gleam as the backend): https://github.com/devdumpling/beacon

- A realtime holiday celebration app for my team where Gleam manages presence, cursor state, emojis, and guestbook writes (still rough): https://github.com/devdumpling/snowglobe

- A private autobattler game backend built for the web

While it's obviously not as well-trodden as building in typescript or Go or Rust, I've been really happy with the results as someone not super familiar with the BEAM/Erlang.

EDIT: Sorry I don't have demos up for these yet. Wasn't really ready to share them but felt relevant to this thread.

◧◩◪
57. ferfum+hM1[view] [source] [discussion] 2025-12-14 14:23:22
>>miki12+Jr
You raise such an interesting point!

But consider: as LLMs get better and approach AGI you won't need a corpus: only a specification.

In this way, AI may enable more languages, not less.

◧◩◪
58. constr+fP1[view] [source] [discussion] 2025-12-14 14:46:48
>>lpil+ec
who cares, just don't shove political opinions into a software project. we are devs, not jobless sjw's running around the road with some useless sign board
replies(2): >>lpil+uP1 >>Capric+MS1
◧◩◪◨
59. lpil+uP1[view] [source] [discussion] 2025-12-14 14:49:04
>>constr+fP1
Free software was started as a political movement by Stallman et al. Why would we stop now?
◧◩◪◨⬒⬓
60. lpil+zP1[view] [source] [discussion] 2025-12-14 14:49:22
>>gr4vit+EB1
Thank you!
◧◩◪◨⬒⬓
61. Capric+8S1[view] [source] [discussion] 2025-12-14 15:09:24
>>brabel+es1
You guys always find a way to say "you can be an LLM maximalist too, you just skipped a step."

The bigger story here is not that they forgot to tell the LLM to run tests, it's that agentic use has been so normalized and overhyped that an entire PR was attempted without any QA. Even if you're personally against this, this is how most people talk about agents online.

You don't always have the privilege of working on a project with tests, and rarely are they so thorough that they catch everything. Blindly trusting LLM output without QA or Review shouldn't be normalized.

replies(2): >>blitz_+FU1 >>brabel+u94
◧◩◪◨
62. Capric+MS1[view] [source] [discussion] 2025-12-14 15:14:10
>>constr+fP1
> who cares, just don't shove political opinions into a software project. we are devs, not jobless sjw's running around the road with some useless sign board

Here we are, having a technical discussion and here you are, shoving politics into it.

◧◩◪◨
63. blitz_+lU1[view] [source] [discussion] 2025-12-14 15:27:16
>>crysta+sE
I wish I could just ship 99% AI generated code and never have to check anything.

Where is everyone working where they can just ship broken code all the time?

I use LLMs for hours, every single day, yes sometimes they output trash. That’s why the bottleneck is checking the solutions and iterating on them.

All the best engineers I know, the ones managing 3-4 client projects at once, are using LLMs nonstop and outputting 3-4x their normal output. That doesn’t mean LLMs are one-shotting their problems.

◧◩◪◨⬒⬓⬔
64. blitz_+FU1[view] [source] [discussion] 2025-12-14 15:29:40
>>Capric+8S1
Who is normalizing merging ANYTHING, LLM-generated or human-generated, without QA or review?

You should be reviewing everything that touches your codebase regardless of source.

replies(1): >>Capric+Ft2
◧◩◪◨⬒⬓⬔
65. premek+K02[view] [source] [discussion] 2025-12-14 16:14:08
>>lpil+lp1
Oh nice, didn't know about it. (I have migrated from vim to neovim and half of it doesn't work for me yet)

I wonder why this is preferred over codegen (during build), possibly using some kind of annotations?

replies(1): >>lpil+cg2
◧◩◪◨⬒⬓⬔⧯
66. lpil+cg2[view] [source] [discussion] 2025-12-14 17:48:41
>>premek+K02
We've not had any proposals for a design like that. We are open to proposals though! I wrote a blog post detailing the process here: https://lpil.uk/blog/how-to-add-metaprogramming-to-gleam/
◧◩◪◨
67. citize+vq2[view] [source] [discussion] 2025-12-14 18:56:16
>>crysta+sE
I agree with everything you wrote.

You are overlooking a blind spot that is increasingly becoming a weakness for devs. You assume that businesses care that their software actually works. It sounds crazy from the dev side, but they really don't. As long as cash keeps hitting the accounts, the MBAs in charge do not care how it gets there, and the program to find that out only requires one simple, unmistakable algo: money in - money out.

Evidence:

Spreadsheets. These DSL lite tools are almost universally known to be generally wrong and full of bugs. Yet, the world literally runs on them.

Lowest-bidder outsourcing. It's well known that low-cost outsourcing produces non-functional or failed projects, or projects that limp along for years with nonstop bug stomping. Yet business is booming.

This only works in a very rich empire that is in the collapse/looting phase. Which we are in and will not change. See: History.

◧◩◪
68. agos+Kr2[view] [source] [discussion] 2025-12-14 19:04:48
>>lpil+ec
hey, thanks for the clarification. I was under the impression that Gleam had a few shortcomings re: OTP, like missing APIs or the need to fall back to Erlang. Many people I know who work regularly with Elixir hold similar opinions - do you have any idea what happened there? Is there a lack of publicity for this support? Is it a documentation problem?
replies(1): >>lpil+rL2
◧◩◪◨⬒⬓⬔⧯
69. Capric+Ft2[view] [source] [discussion] 2025-12-14 19:19:22
>>blitz_+FU1
A LOT of people, if you're paying attention. Why do you think that happened at their company?

It's not hard to find comments from people vibe coding apps without understanding the code, even apps handling sensitive data. And it's not hard to find comments saying agents can run by themselves.

I mean people are arguing AGI is already here. What do you mean who is normalizing this?

replies(1): >>blitz_+393
◧◩
70. zeroto+4z2[view] [source] [discussion] 2025-12-14 19:56:37
>>Uehrek+Qb
Pure anecdote. Over the last year I've taken the opportunity to compare app development in Swift (+ SwiftUI and SwiftData) for iOS with React Native via Expo. I used Cursor with both OpenAI and Anthropic models. The difference was stark. With Swift the pace of development was painfully slow, with confused outputs and frequent hallucinations. With React and Expo, the AI generated from the first few short prompts what had taken me a month to produce with Swift. AI in development is all about force multipliers, speed of delivery, and driving down cost per product iteration. IMO there is absolutely no reason to choose languages, frameworks, or ecosystems with weaker open corpora.
◧◩◪◨⬒
71. igrego+HG2[view] [source] [discussion] 2025-12-14 20:45:46
>>Xmd5a+Sv1
I think most people struggle to one-shot Lisp parens. Visual guides or structured editing are sorta necessary. LLMs don't have that kind of UI (yet?)
◧◩◪◨
72. lpil+rL2[view] [source] [discussion] 2025-12-14 21:15:31
>>agos+Kr2
I presume they checked out Gleam years ago, or their investigation was more shallow.

That aside, it is normal in Elixir to use Erlang's OTP directly. Neither Elixir nor Gleam provides an entirely alternative API for OTP. It is a strength of the BEAM that its languages can call each other, not a weakness.

◧◩◪
73. foxyge+LU2[view] [source] [discussion] 2025-12-14 22:10:28
>>tasuki+xi1
I hate how people talk about type systems as if there were no trade-offs to be considered. A Hindley–Milner style type system would effectively kill half the features that make Elixir amazing, and worse, would break pretty much all existing code.
◧◩◪
74. innoce+h83[view] [source] [discussion] 2025-12-14 23:44:23
>>worthl+vh1
- No state machine behaviours. Gleam cannot do gen_statem.

- Limited OTP system messages. Gleam doesn't yet support all OTP system messages, so some OTP debugging messages are discarded by Gleam.

- Gleam doesn't have an equivalent of gen_event to handle event handlers.

- Gleam doesn't support an equivalent of DynamicSupervisor or the :simple_one_for_one supervisor strategy for dynamically starting children at runtime.

replies(1): >>worthl+Gab
◧◩◪◨⬒⬓⬔⧯▣
75. blitz_+393[view] [source] [discussion] 2025-12-14 23:49:55
>>Capric+Ft2
I fully believe there are misguided leaders advocating for "increasing velocity" or "productivity" or whatever, but the technical leaders should be pushing back. You can't make a ship go faster by removing the hull.

And if you want to try... well you get what you get!

But again, no one who is serious about their business and serious about building useful products is doing this.

replies(1): >>Verdex+ek3
◧◩◪◨⬒⬓⬔⧯▣▦
76. Verdex+ek3[view] [source] [discussion] 2025-12-15 01:15:57
>>blitz_+393
> But again, no one who is serious about their business and serious about building useful products is doing this.

While this is potentially true for software companies, there are many companies for which software or even technology in general is not a core competency. They are very serious about their very useful products. They also have some, er, interesting ideas about what LLMs allow them to accomplish.

◧◩◪◨⬒⬓⬔
77. brabel+u94[view] [source] [discussion] 2025-12-15 09:48:22
>>Capric+8S1
I am not saying you should be an LLM maximalist at all. I am just saying LLMs need to have a change-test cycle, like humans, in order to be effective. But it looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet.
replies(1): >>Capric+rka
◧◩◪◨⬒⬓⬔⧯
78. Capric+rka[view] [source] [discussion] 2025-12-16 23:17:30
>>brabel+u94
> But looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet

Listen, you can engage with the comment or ignore everything but the first sentence and throw out personal insults. If you don't want to sound like a shill, don't write like one.

When you're telling people the problem is the LLM did not have tests, you're saying "Yeah I know you caught it spitting out random unrelated crap, but if you just let it verify if it was crap or not, maybe it would get it right after a dozen tries." Does that not seem like a horribly ineffectual way to output code? Maybe that's how some people write code, but I evaluate myself with tests to see if I accidentally broke something elsewhere. Not because I have no idea what I'm even writing to begin with.

You wrote

> Without that they cannot know if what they did works, and they are a bit like humans

They are exactly not like humans this way. LLMs break code by not writing valid code to begin with. Humans break code by forgetting an obscure business rule they heard about 6 months ago. People work on very successful projects without tests all the time. It's not my preference, but tests are non-exhaustive and no replacement for a human that knows what they're doing. And the tests are meaningless without that human writing them.

So your response to that comment, pushing them further down the path of agentic code doing everything for them, smacks of maximalism, yes.

replies(1): >>brabel+drh
◧◩◪◨
79. worthl+Gab[view] [source] [discussion] 2025-12-17 07:52:13
>>innoce+h83
I didn't know about the statem limitation. I have however worked around it with a gen_server-like wrapper; that way all state transitions were handled by Gleam's type system.

I have been meaning to ask about that on the Discord, but it's one of the ten thousand things on my backlog.

Maybe I could write a gen_event equivalent... I have some code which does very similar things.

Thank you for taking the time to respond.
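For anyone curious about the wrapper approach: the underlying pattern is encoding each state as its own type, so only the transitions you define will compile. A minimal sketch of the idea in TypeScript (purely illustrative; this is not Gleam's actual OTP API, and all names here are made up):

```typescript
// Door state machine: each state is a distinct type, and transitions
// are functions between those types. An illegal transition such as
// open(locked) fails at compile time instead of crashing at runtime.
type Locked = { tag: "locked" };
type Closed = { tag: "closed" };
type Open = { tag: "open" };

const unlock = (_: Locked): Closed => ({ tag: "closed" });
const lock = (_: Closed): Locked => ({ tag: "locked" });
const open = (_: Closed): Open => ({ tag: "open" });
const close = (_: Open): Closed => ({ tag: "closed" });

const start: Locked = { tag: "locked" };
// locked -> closed -> open: every step is type-checked.
const result = open(unlock(start));
// open(start);  // compile error: Locked is not assignable to Closed
```

Wrapping something like this in a gen_server-style actor keeps the runtime behaviour on OTP while the compiler rules out invalid transitions.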

replies(1): >>innoce+Mcb
◧◩◪◨⬒
80. innoce+Mcb[view] [source] [discussion] 2025-12-17 08:13:14
>>worthl+Gab
You're welcome.

I'm sure at some point, Gleam will figure it all out.

◧◩◪◨⬒⬓⬔⧯▣
81. brabel+drh[view] [source] [discussion] 2025-12-19 08:04:07
>>Capric+rka
You need to seek medical help. LLM is not your enemy. I am not your enemy. The world is not against you.