zlacker

[return to "My AI skeptic friends are all nuts"]
1. matthe+y41[view] [source] 2025-06-03 06:58:13
>>tablet+(OP)
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.

I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.

Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.

It’s still absolute rubbish if you let it run wild, which is why I think “vibe coding” is basically just “vibe debt”: it simply doesn’t do what most (possibly uninformed) people think it does.

But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.

I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.

What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.

Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.

What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.

[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...

2. raxxor+yi1[view] [source] 2025-06-03 09:29:26
>>matthe+y41
The better I am at solving a problem, the less I use AI assistants. I use them if I try a new language or framework.

Busywork code is also difficult to generate with AI, because you first have to formalize the necessary context for the assistant, which is exhausting and has an uncertain result. Sometimes it is simply quicker to write it yourself.

I understand the comments being negative, because there is so much AI hype without too many good practical applications yet. Some of that hype is justified, some of it is not. I enjoyed the image/video/audio synthesis hype more, tbh.

Test cases are quite helpful and comments are decent too. But often prompting is more complex than programming something, and you can never be sure whether an answer is usable.

3. Cthulh+tA1[view] [source] 2025-06-03 12:19:46
>>raxxor+yi1
> But often prompting is more complex than programming something.

I'd challenge this one; is it more complex, or is all the thinking and decision-making concentrated into a single sentence or paragraph? For me, programming is taking a big, high-level problem and breaking it down into smaller and smaller sections until each is a line of code; the lines of code themselves are relatively low effort and cost little brain power. But in my experience, the problem itself and its nuances are only fully defined once all the code is written. If you have to prompt an AI to write it, you need to define the problem beforehand.

It's more design and more thinking up front, which is something the development community has moved away from over the past ~20 years with the rise of agile development and open source. Techniques like TDD have shifted more of the problem definition forwards, since you have to think about your desired outcomes before writing code, but I'm pretty sure (I have no figures) it's only a minority of developers that have the self-discipline to practice test-driven development consistently.
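To make that concrete, here is a hypothetical sketch of the TDD-style forward shift, where the tests state the problem before any implementation exists (the `slugify` function and its behaviour are invented for illustration):

```python
import unittest

def slugify(title: str) -> str:
    # Implementation written only after the tests below pinned down
    # the desired behaviour.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # These tests are the up-front "problem definition": desired
    # outcomes stated before the code that satisfies them.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

The point isn't the toy function; it's that the tests carry the same "define the problem beforehand" work that a prompt would.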

(disclaimer: I don't use AI much, and my employer isn't yet looking into or paying for agentic coding, so for me it's chat-style or inline code suggestions)

4. sksiso+VR1[view] [source] 2025-06-03 13:58:45
>>Cthulh+tA1
The issue with prompting is that English (or any other human language) is nowhere near as rigid or strict as a programming language. An idea can almost always be expressed much more succinctly in code than in prose.

Combine that with the fact that, when you're reading code, it's often much easier to develop a prototype solution as you go, and prompting ends up feeling like using four men to carry a wheelbarrow instead of having one push it.

5. michae+Dg2[view] [source] 2025-06-03 16:27:40
>>sksiso+VR1
I think we are going to end up with a common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want; if not for the AI, then for the humans who have to grapple with what it makes.
6. rerdav+It2[view] [source] 2025-06-03 17:44:08
>>michae+Dg2
Sounds like "heavy process". "Specifying exact semantics" has been tried before and ended unimaginably badly.
7. bcrosb+oA2[view] [source] 2025-06-03 18:21:41
>>rerdav+It2
Nah, imagine a programming language optimized for creating specifications.

Feed it to an LLM and it implements it. Ideally it can also verify its solution against your specification code. If LLMs don't gain significantly more general capabilities, I could see this happening in the longer term. But it's too early to say.

In a sense, the LLM turns into a compiler.
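One hypothetical shape such a specification language could take is an executable, property-style check that any implementation, human- or LLM-written, must pass (everything here, including `satisfies_sort_spec`, is invented for illustration):

```python
import random

def satisfies_sort_spec(candidate, trials=100):
    # The spec names the desired properties, not the algorithm:
    # the output must be ordered and be a permutation of the input.
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = candidate(list(xs))
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        permutation = sorted(out) == sorted(xs)
        if not (ordered and permutation):
            return False
    return True

# An LLM-"compiled" implementation is accepted or rejected by the spec:
assert satisfies_sort_spec(sorted)
```

Under this framing, the spec plays the role of source code and the LLM's output is the untrusted compiled artifact you verify against it.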

8. cess11+AD2[view] [source] 2025-06-03 18:40:32
>>bcrosb+oA2
We've had that for a long, long time. Notably RAD tooling running on XML.

The main lesson has been that it's actually not much of an enabler and the people doing it end up being specialised and rather expensive consultants.

9. Camper+3R2[view] [source] 2025-06-03 19:59:40
>>cess11+AD2
RAD before transformers was like trying to build an iPhone before capacitive multitouch: a total waste of time.

Things are different now.

10. cess11+jS2[view] [source] 2025-06-03 20:05:48
>>Camper+3R2
I'm not so sure. What can you show me that you think would be convincing?
11. Camper+PW2[view] [source] 2025-06-03 20:31:41
>>cess11+jS2
I think there are enough examples of genuine AI-facilitated rapid application development out there already, honestly. I wouldn't have anything to add to the pile, since I'm not a RAD kind of guy.

Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.

12. sorami+AI3[view] [source] 2025-06-04 04:56:36
>>Camper+PW2
That's a straw man. Asking for real examples to back up your claims isn't overt perfectionism.