zlacker

[return to "I miss thinking hard"]
1. gyomu+v4[view] [source] 2026-02-04 04:42:51
>>jernes+(OP)
This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

◧◩
2. hellop+r7[view] [source] 2026-02-04 05:08:36
>>gyomu+v4
And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The effort you need to expend defending an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type the thing out by hand. Just two completely different types of effort.

There's an upside to this sort of effort, though. The continuous pushback from the agentic programming tool forces you to make it crystal clear what your idea is and what it is not. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

◧◩◪
3. GCUMst+Jb[view] [source] 2026-02-04 05:58:36
>>hellop+r7
I can't help but imagine training horses vs. training cats. One is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.
◧◩◪◨
4. lambda+ZI[view] [source] 2026-02-04 10:30:30
>>GCUMst+Jb
Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.
◧◩◪◨⬒
5. kimixa+oT[view] [source] 2026-02-04 11:48:25
>>lambda+ZI
Yup - I've compared it to working with juniors: often smart, with a good understanding and "book knowledge" of many of the languages and tools involved, but you regularly have to step in and correct things - normally around local details and project specifics. Except here the "junior" you work with every day changes, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case - either massively increased context sizes, so a model can hold something approaching a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that fold these "learnings" directly into the weights themselves - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even if you ignore cost.

◧◩◪◨⬒⬓
6. throwt+a81[view] [source] 2026-02-04 13:32:02
>>kimixa+oT
Try writing more documentation. If your project is bigger than a one-man team then you need it anyway, and with LLM coding you effectively have an infinite-man team.
◧◩◪◨⬒⬓⬔
7. kimixa+GH4[view] [source] 2026-02-05 13:21:16
>>throwt+a81
But that doesn't actually work for my use cases. Plenty of other people have already told me "I'm Holding It Wrong" without offering suggestions that actually work, so I've started ignoring them. At this stage I just assume people work in very different sectors: some see the "great benefits" often proselytized on the internet, and other areas don't. Systems programming, where I work, seems to be a poor fit - possibly due to a relative lack of content in the training corpus, perhaps because company-internal styles and APIs mean simply describing them takes up a huge amount of the context, leaving little for further corrections or details, or some other failure mode.

We have lots of documentation. Arguably too much - relevant documentation alone quickly fills much of the Claude Opus context window, and even then the model repeatedly outputs things directly counter to the documentation it just ingested.

[go to top]