zlacker

[return to "My AI skeptic friends are all nuts"]
1. matthe+y41[view] [source] 2025-06-03 06:58:13
>>tablet+(OP)
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.

I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.

Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.

It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.

But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.

I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.

What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.

Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.

What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.

[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...

2. raxxor+yi1[view] [source] 2025-06-03 09:29:26
>>matthe+y41
The better I am at solving a problem, the less I use AI assistants. I mostly use them when trying a new language or framework.

Busy code is also difficult to generate with AI, because you first have to formalize the necessary context for the assistant, which is exhausting, and the result is uncertain. Often it is just simpler to write it yourself quickly.

I understand the comments being negative, because there is so much AI hype without many good practical applications yet. Some of that hype is justified, some of it is not. I enjoyed the image/video/audio synthesis hype more tbh.

Test cases are quite helpful and comments are decent too. But often prompting is more complex than programming the thing yourself, and you can never be sure whether an answer is usable.

3. avemur+jw1[view] [source] 2025-06-03 11:42:08
>>raxxor+yi1
I agree with your points, but I'm also reminded of one of my bigger lessons as a manager: the stuff I'm best at is the hardest, but most important, to delegate.

Sure, it was easier to do it myself. But putting in the time to train, give context, develop guardrails, learn how to monitor, etc. ultimately taught me the skills needed to delegate effectively and massively multiply the team's output as we added people.

It's early days but I'm getting the same feeling with LLMs. It's as exhausting as training an overconfident but talented intern, but if you can work through it and somehow get it to produce something as good as you would do yourself, it's a massive multiplier.

4. Goblin+CJ1[view] [source] 2025-06-03 13:16:09
>>avemur+jw1
Do LLMs learn? I had the impression that you borrow a pretrained LLM which handles each query starting from the same initial state.
5. simonw+TM1[view] [source] 2025-06-03 13:34:22
>>Goblin+CJ1
No, LLMs don't learn - each new conversation effectively clears the slate and resets them to their original state.

If you know what you're doing you can still "teach" them, but it's on you to do it: you need to keep iterating on things like the system prompt you are using and the context you feed in to the model.
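That "teaching lives in the prompt" idea can be sketched roughly like this (`call_llm` is a hypothetical stand-in, not a real model API): the model itself is stateless, so every piece of accumulated guidance has to be re-sent on every call.

```python
# Sketch of stateless "teaching": the model keeps nothing between calls,
# so corrections only help if they are included in the next request.
# `call_llm` is a hypothetical stand-in for a real model API.

def call_llm(system_prompt: str, context: list[str], question: str) -> str:
    # A real call would send all three to the model every single time.
    return f"[model sees {len(context)} context item(s)] {question}"

system_prompt = "You are a careful coding assistant. Prefer small, tested changes."
learned_context = []  # grows as you iterate; this is where the "teaching" lives

# After spotting a recurring mistake, you append a correction...
learned_context.append("Project convention: use pathlib, not os.path.")

# ...and it only takes effect because you re-send it on the next call.
answer = call_llm(system_prompt, learned_context, "Refactor the file loader.")
```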

6. rerdav+rv2[view] [source] 2025-06-03 17:53:04
>>simonw+TM1
That's mostly, but not completely true. There are various strategies to get LLMs to remember previous conversations. ChatGPT, for example, remembers (for some loose definition of "remembers") all previous conversations you've had with it.
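One common way to build that kind of "memory" (a sketch of the general strategy, not ChatGPT's actual implementation; all names here are illustrative) is to persist notes between conversations and feed them back in as context at the start of the next one:

```python
# Toy sketch: cross-conversation "memory" as saved notes re-injected as context.
# This is an illustration of the general wrapping strategy, not a real product's code.

memory_store: list[str] = []

def end_conversation(summary: str) -> None:
    # Persist something worth remembering from the conversation.
    memory_store.append(summary)

def start_conversation() -> str:
    # The "remembering" is just prepending the saved notes to the new context.
    return "\n".join(memory_store)

end_conversation("User prefers Python examples.")
context = start_conversation()
```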
7. runarb+r53[view] [source] 2025-06-03 21:27:03
>>rerdav+rv2
I think that with a very loose definition of learning (a stimulus which alters subsequent behavior) you can claim this is learning. But if you tell a human to replace the word “is” with “are” in the next two sentences, that could hardly be considered learning; it is just following commands, even though it meets the loose definition. This is why in psychology we usually include some timescale for how long the altered behavior must last for it to count as learning. A short-term altered behavior is usually called priming. But even then I wouldn’t consider “following commands” to be either priming or learning; I would simply call it obeying.

If an LLM learned something when you gave it commands, it would probably be reflected in some adjusted weights in its operational matrices. This is true of human learning: we strengthen some neural connection, and when we receive a similar stimulus in a similar situation sometime in the future, the new stimulus follows a slightly different path along its neural pathway and results in an altered behavior (or at least a greater probability of one). For an LLM to “learn” I would want to see something similar.

8. rerdav+d35[view] [source] 2025-06-04 16:27:52
>>runarb+r53
I think you have an overly strict definition of what "learning" means. ChatGPT now has memory that lasts beyond the lifetime of its context buffer, so it has at least medium-term memory. (Actually I'm not entirely sure they are not just using long persistent context buffers, but anyway.)

Admittedly, you have to wrap LLMs with extra machinery to get them to do that. If you want to rewrite the rules to exclude that, then I will have to revise my statement that it is "mostly, but not completely true".

:-P

9. runarb+du5[view] [source] 2025-06-04 18:51:57
>>rerdav+d35
You also have to alter some neural pathways in your brain to follow commands. That doesn’t make it learning. Learned behavior is usually (but not always) reflected in long-term changes to neural pathways outside of the language centers of the brain, and outside of short-term memory. Once you forget the command and still apply the behavior, that is learning.

I think SRS schedulers are a good example of a machine-learning algorithm that learns from its previous interactions. If you run the optimizer you will end up with a different weight matrix, and flashcards will be scheduled differently. It has learned how well you retain those cards. But an LLM that is simply following orders has not learned anything, unless you feed the previous interaction back into the system to alter future outcomes, regardless of whether it “remembers” the original interactions. With the SRS, your review history can be completely forgotten: you could delete it, and the weight matrix keeps the optimized weights. If you delete your chat history with ChatGPT, it will not behave any differently based on the previous interactions.
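The distinction being drawn can be sketched with a toy scheduler (the class and its numbers are illustrative, not a real spaced-repetition library): training nudges a persistent weight, and the weight survives even after the review history that produced it is deleted.

```python
# Toy sketch of "learning" as a persisted weight: the review history can be
# discarded, but the adjusted weight remains and changes future behavior.
# Names and constants are illustrative, not from any real SRS implementation.

class ToyScheduler:
    def __init__(self) -> None:
        self.ease = 2.5  # the persistent "learned" weight

    def train(self, review_outcomes: list[bool]) -> None:
        # Nudge the weight based on history; afterwards the history is not needed.
        for correct in review_outcomes:
            self.ease += 0.1 if correct else -0.2

    def next_interval(self, days: float) -> float:
        return days * self.ease

sched = ToyScheduler()
sched.train([True, True, False])  # two successes, one lapse
history = None                    # "delete" the review history
interval = sched.next_interval(1.0)  # the learned weight still applies
```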
