zlacker

[return to "Perl's decline was cultural"]
1. superk+P2[view] [source] 2025-12-06 18:09:16
>>todsac+(OP)
Perl's "decline" saved it from a fate worse than death: popularity, and splitting into dozens of incompatible versions from added/removed features (like python). Instead Perl is just available everywhere in the same stable form. Scripts can always just use the system perl interpreter. And most of the time a script written in $currentyear can run just as well on a system perl interpreter from 2 decades ago (and vice versa). It is the perfect language for system administration and personal use. Even if it isn't for machine learning and those kinds of bleeding edge things that need constant major changes. There are trade-offs.

This kind of ubiquitous availability (from early popularity), combined with the huge drop-off in popularity due to raku/etc, led to a unique and very valuable situation unmatched by any other comparable language. Perl just works everywhere. No containers, no dep hell, no specific versions of the language needed. Perl is Perl and it does what it always has, reliably.

I love it. The decline was a savior.

2. keepam+E3[view] [source] 2025-12-06 18:16:28
>>superk+P2
My language learning trajectory (from 10 years old) was 8086 assembly, QBASIC, C, Perl, Java, MAGMA, JavaScript/HTML/CSS, Python, Haskell, C++, vibe coding
3. pomati+E7[view] [source] 2025-12-06 18:44:58
>>keepam+E3
How old are you now? Mid fifties here. And 'vibe coding' in what, exactly? Not that it is of interest from a programming perspective, but from a 'what does the AI know best' perspective. I've followed a similar, but not identical, trajectory and now vibe in python/htmx/flask without needing to review the code in depth (NB internal apps, not public facing ones), with claude code max. Vibe coding in the last 6-8 weeks now also seems to make a decent fist of embedded coding - esp32/arduino - also with claude code.
4. keepam+551[view] [source] 2025-12-07 04:57:17
>>pomati+E7
35–44. Same thing, sometimes it makes planning errors, or misses context that should be obvious based on the files, but overall a huge booster. No need to review in depth, just set it against tests and let it iterate. So much potential, so exciting.

My feeling is that the current suite of LLMs is not smarter than *us*; they simply have far greater knowledge, unlimited focus, and unconstrained energy (modulo plans/credits/quotas, of course!). I can't wait for the AIs that are actually smarter than us. Exciting to see what they'll do.
