zlacker

[return to "I tried Gleam for Advent of Code"]
1. bnchrc+56[view] [source] 2025-12-13 17:46:50
>>tymsca+(OP)
Gleam is a beautiful language, and what I wish Elixir would become (re: typing).

For those who don't know, it's also built on the Erlang VM and OTP, which make concurrency and queues a trivial problem in my opinion.
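
For a taste of that, here's a rough fan-out/fan-in sketch using the gleam_otp package's task module (written from memory, so treat the exact function names and signatures as approximate):

```gleam
// Rough sketch: fan work out to lightweight BEAM processes and collect
// the results. Assumes gleam_otp's gleam/otp/task module; verify the
// exact signatures against the current package docs.
import gleam/io
import gleam/list
import gleam/otp/task

pub fn main() {
  // Spawn one process per item of work.
  let handles =
    list.range(1, 10)
    |> list.map(fn(n) { task.async(fn() { n * n }) })

  // Wait for each result, allowing up to 1000 ms per task.
  let results = list.map(handles, fn(handle) { task.await(handle, 1000) })

  io.debug(results)
}
```

Each task.async call is just an ordinary BEAM process underneath, which is why fanning out like this costs next to nothing.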

Absolutely wonderful ecosystem.

I've been wanting to make Gleam my primary language, but I fear LLMs have frozen programming language advancement and adoption for anything past 2021.

But I am hopeful that Gleam has slid just under the closing door and LLMs will get up to speed on it fast.

◧◩
2. Uehrek+Vh[view] [source] 2025-12-13 19:20:34
>>bnchrc+56
> I fear LLMs have frozen programming language advancement and adoption for anything past 2021.

Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore I’ve found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different.

◧◩◪
3. miki12+Ox[view] [source] 2025-12-13 21:10:12
>>Uehrek+Vh
There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in.

Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.

◧◩◪◨
4. crysta+xK[view] [source] 2025-12-13 22:35:25
>>miki12+Ox
> Because LLMs make it that much faster to develop software

I feel as though "facts" such as this are presented to me all the time on HN, but in my every day job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push against.

I know, I know "they just don't know how to use LLMs the right way!!!", but all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down

If we continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs who continued to devote some of their free time to coding in languages like Gleam, and continued to maintain and sharpen their ability to understand and reason about code.

◧◩◪◨⬒
5. blitz_+q02[view] [source] 2025-12-14 15:27:16
>>crysta+xK
I wish I could just ship 99% AI-generated code and never have to check anything.

Where is everyone working where they can just ship broken code all the time?

I use LLMs for hours every single day, and yes, sometimes they output trash. That’s why the bottleneck is checking the solutions and iterating on them.

All the best engineers I know, the ones managing 3-4 client projects at once, are using LLMs nonstop and outputting 3-4x their normal output. That doesn’t mean LLMs are one-shotting their problems.

[go to top]