zlacker

[parent] [thread] 21 comments
1. Cthulh+(OP)[view] [source] 2025-06-03 12:19:46
> But often prompting is more complex than programming something.

I'd challenge this one; is it more complex, or is all the thinking and decision making concentrated into a single sentence or paragraph? For me, programming is taking a big, high-level problem and breaking it down into smaller and smaller pieces until each is a line of code; the lines of code themselves are relatively low effort and cost little brain power. But in my experience, the problem itself and its nuances are only fully defined once all the code is written. If you have to prompt an AI to write it, you need to define the problem beforehand.

It's more design and more thinking upfront, which is something the development community has moved away from in the past ~20 years with the rise of agile development and open source. Techniques like TDD have shifted more of the problem definition forwards as you have to think about your desired outcomes before writing code, but I'm pretty sure (I have no figures) it's only a minority of developers that have the self-discipline to practice test-driven development consistently.

(disclaimer: I don't use AI much, and my employer isn't yet looking into or paying for agentic coding, so it's chat style or inline code suggestions)

replies(4): >>sksiso+sh >>algori+YC >>starlu+0W >>bcrosb+uZ
2. sksiso+sh[view] [source] 2025-06-03 13:58:45
>>Cthulh+(OP)
The issue with prompting is that English (or any other human language) is nowhere near as rigid or strict as a programming language. Almost always, an idea can be expressed much more succinctly in code than in prose.

Combine that with the fact that, when you're reading code, it's often much easier to develop a prototype solution as you go, and prompting ends up feeling like using four men to carry a wheelbarrow instead of having one push it.
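A toy illustration of that succinctness gap (the numbers are invented): the English version needs a full sentence to pin down what the code states in one line.

```python
# English: "Take the list of order totals, keep only those over $100,
# apply a 10% discount to each, and sum the result."
orders = [50.0, 120.0, 300.0, 80.0]
total = sum(o * 0.9 for o in orders if o > 100)
print(total)  # 378.0
```

And unlike the English sentence, the code leaves no room to argue about whether "over $100" includes exactly $100.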

replies(1): >>michae+aG
3. algori+YC[view] [source] 2025-06-03 16:10:39
>>Cthulh+(OP)
> It's more design and more thinking upfront, which is something the development community has moved away from in the past ~20 years with the rise of agile development and open source.

I agree, but even smaller-grained than agile is the tight iteration loop I fall into when exploring a design. My ADHD makes upfront design a challenge for me, and I am personally much more effective starting with a sketch of what needs to be done and then iterating on it until I get a good result.

The loop of prompt->study->prompt->study... is disruptive to my inner loop for several reasons, but a big one is that the machine doesn't "think" like I do. So the solutions it scaffolds commonly make me say "huh?", and I have to change my thought process to interpret them and then study them for mistakes. My intuition and iteration are, for the time being, more effective than this machine-assisted loop for the really "interesting" code I have to write.

But I will say that AI has been a big time saver for more mundane tasks, especially when I can say "use this example and apply it to the rest of this code/abstraction".

replies(1): >>samsep+W4c
4. michae+aG[view] [source] [discussion] 2025-06-03 16:27:40
>>sksiso+sh
I think we are going to end up with a common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want: if not for the AI, then for the humans who have to grapple with what is made.
replies(1): >>rerdav+fT
5. rerdav+fT[view] [source] [discussion] 2025-06-03 17:44:08
>>michae+aG
Sounds like "heavy process". "Specifying exact semantics" has been tried, and it ended unimaginably badly.
replies(1): >>bcrosb+VZ
6. starlu+0W[view] [source] 2025-06-03 17:59:32
>>Cthulh+(OP)
A big challenge is that programmers all have a unique, ever-changing personal style and vision that they've never had to communicate before. They also tend to "bikeshed" and add undefined, unrequested requirements, because, you know, someday we might need to support 10000x more users than we have. This is all well and good when the programmer implements something themselves, but it falls apart when it must be communicated to an LLM. Most projects/systems/orgs don't have the necessary level of detail in their documentation, documentation is fragmented across git/jira/confluence/etc., and it's all a hodgepodge of technologies without a semblance of consistency.

I think we'll find that over the next few years the first really big win will be AI tearing down the mountain of tech and documentation debt. Bringing efficiency to corporate knowledge is likely a key element of making AI work inside these organizations.

replies(1): >>mlsu+P41
7. bcrosb+uZ[view] [source] 2025-06-03 18:19:05
>>Cthulh+(OP)
I design and think upfront but I don't write it down until I start coding. I can do this for pretty large chunks of code at once.

The fastest way I can transcribe a design is with code or pseudocode. Converting it into English can be hard.

It reminds me a bit of the discussion of whether you have an inner monologue. I don't, and turning thoughts into English takes work, especially if you need to be specific about what you want.

replies(1): >>averag+UD1
8. bcrosb+VZ[view] [source] [discussion] 2025-06-03 18:21:41
>>rerdav+fT
Nah, imagine a programming language optimized for creating specifications.

Feed it to an LLM and it implements it. Ideally it can also verify its solution against your specification code. Even if LLMs don't gain significantly more general capabilities, I could see this happening in the longer term. But it's too early to say.

In a sense, the LLM turns into a compiler.
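A minimal sketch of what "specification code" could mean (all names invented for illustration): the spec is a set of executable properties, and the LLM-generated implementation, stubbed here with Python's built-in `sorted`, either passes them or gets regenerated.

```python
# Hypothetical "specification code": properties any generated sort must satisfy.
def meets_spec(sort_fn, cases):
    for xs in cases:
        out = sort_fn(list(xs))
        if out != sorted(xs):   # output must be ordered with the same elements
            return False
    if sort_fn([]) != []:       # edge case: empty input stays empty
        return False
    return True

# Stand-in for the LLM-generated implementation.
generated_sort = lambda xs: sorted(xs)

print(meets_spec(generated_sort, [[3, 1, 2], [5], []]))  # True
```

The "compiler" analogy is then: prompt in, candidate implementation out, with `meets_spec` playing the role of the type checker that rejects bad builds.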

replies(2): >>cess11+731 >>rerdav+Vp3
9. cess11+731[view] [source] [discussion] 2025-06-03 18:40:32
>>bcrosb+VZ
We've had that for a long, long time, notably RAD tooling running on XML.

The main lesson has been that it's actually not much of an enabler, and the people doing it end up being specialised and rather expensive consultants.

replies(1): >>Camper+Ag1
10. mlsu+P41[view] [source] [discussion] 2025-06-03 18:52:19
>>starlu+0W
Efficiency to corporate knowledge? Absolutely not, no way. My coworkers are beginning to use AI to write PR descriptions and git commits.

I notice, because the amount of text has increased tenfold while the amount of information has stayed exactly the same.

This is a torrent of shit coming down on us that we are all going to have to deal with. The vibe coders will be gleefully putting up PRs with 12 paragraphs of "descriptive" text. Thanks, but no thanks!

replies(2): >>aloisd+8H1 >>starlu+5e3
11. Camper+Ag1[view] [source] [discussion] 2025-06-03 19:59:40
>>cess11+731
RAD before transformers was like trying to build an iPhone before capacitive multitouch: a total waste of time.

Things are different now.

replies(1): >>cess11+Qh1
12. cess11+Qh1[view] [source] [discussion] 2025-06-03 20:05:48
>>Camper+Ag1
I'm not so sure. What can you show me that you think would be convincing?
replies(1): >>Camper+mm1
13. Camper+mm1[view] [source] [discussion] 2025-06-03 20:31:41
>>cess11+Qh1
I think there are enough examples of genuine AI-facilitated rapid application development out there already, honestly. I wouldn't have anything to add to the pile, since I'm not a RAD kind of guy.

Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.

replies(2): >>sorami+782 >>cess11+Dh2
14. averag+UD1[view] [source] [discussion] 2025-06-03 22:31:15
>>bcrosb+uZ
I also don't have an inner monologue and can relate somewhat. However, I find that natural language (usually) allows me to be more expressive than pseudocode in the same period of time.

There's also an intangible benefit to having someone to "bounce off". When I'm using an LLM, I tweak the system prompt to slow it down and make it ask questions and bug me before making changes. Even without that, writing out the idea quickly surfaces potential flaws in logic or approach, much faster than writing pseudocode, in my experience.
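Concretely, the kind of system-prompt addition meant here might look like this (wording is invented, not quoted from any particular tool):

```
Before changing any code, restate the task in your own words and ask
at least one clarifying question. Wait for my confirmation before
producing a diff, and explicitly flag any assumption you had to make.
```

The point is less the exact wording than forcing a pause where the model would otherwise barrel straight into an implementation.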

15. aloisd+8H1[view] [source] [discussion] 2025-06-03 23:00:25
>>mlsu+P41
Use an LLM to summarize the PR /j
16. sorami+782[view] [source] [discussion] 2025-06-04 04:56:36
>>Camper+mm1
That's a straw man. Asking for real examples to back up your claims isn't overt perfectionism.
replies(1): >>Camper+Lb2
17. Camper+Lb2[view] [source] [discussion] 2025-06-04 05:47:52
>>sorami+782
If you weren't paying attention to what's been happening for the last couple of years, you certainly won't believe anything I have to say.

Trust me on this, at least: I don't need the typing practice.

18. cess11+Dh2[view] [source] [discussion] 2025-06-04 07:08:17
>>Camper+mm1
"Nothing" would have been shorter and more convenient for us both.
19. starlu+5e3[view] [source] [discussion] 2025-06-04 15:13:13
>>mlsu+P41
Well, I'm certainly not saying that AI should generate more corporate spam. That's part of the problem! And also a strawman argument!
20. rerdav+Vp3[view] [source] [discussion] 2025-06-04 16:11:12
>>bcrosb+VZ
It's an interesting idea. I get it. Although I wonder: do you really need formal languages anymore, now that we have LLMs that can take natural-language specifications as input?

I tried running the idea on a programming task I did yesterday: "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admittedly a very ugly dialog, but all the fields, labels, and controls were there in the right order, with the right labels, and all properly bound to props of a React control that was grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts, next time.

Anyway. I next thought about how I would specify the behavior I wanted. The informal specification would be: "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select 'Beats', set the tempo to 120, and press the back button. Verify that the Start text edit now contains '30:1' (the same time expressed in bars and beats). Set it to 10:1, press the back button, and verify that the corresponding 'Loop' <description of storage for that data omitted for clarity> for the currently selected plugin contains 20.0." I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).

Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural-language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural-language AI prompts to generate it. Which makes me wonder why anyone needs a formal language for that.

And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling access to the UI controls, though....
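For what it's worth, the arithmetic in that informal spec checks out, and the non-UI part of it translates almost mechanically into assertions. A sketch, with helper names invented here and a 4-beats-per-bar convention assumed (a real test would drive the actual UI instead):

```python
# Invented helpers mirroring the informal spec above (assumes 4/4 time).
def seconds_to_bars_beats(seconds, tempo_bpm, beats_per_bar=4):
    beats = seconds * tempo_bpm / 60
    return f"{int(beats // beats_per_bar)}:{int(beats % beats_per_bar) + 1}"

def bars_beats_to_seconds(text, tempo_bpm, beats_per_bar=4):
    bars, beat = (int(p) for p in text.split(":"))
    return (bars * beats_per_bar + beat - 1) * 60 / tempo_bpm

# The spec's two checks, expressed as test assertions:
assert seconds_to_bars_beats(60, 120) == "30:1"    # Start 1:00 at 120 BPM reads 30:1
assert bars_beats_to_seconds("10:1", 120) == 20.0  # 10:1 stores as 20.0 seconds
```

If an LLM can reliably produce something like this from the prose spec, the formal-specification step really does start to look optional.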

replies(1): >>michae+qFc
21. samsep+W4c[view] [source] [discussion] 2025-06-08 08:35:02
>>algori+YC
> "The loop of prompt->study->prompt->study... is disruptive to my inner loop for several reasons, but a big one is that the machine doesn't "think" like I do. So the solutions it scaffolds commonly make me say "huh?" and I have to change my thought process to interpret them and then study them for mistakes. My intuition and iteration is, for the time being, more effective than this machine-assisted loop..."

My thoughts exactly as an ADHD dev.

Was having trouble describing my main issue with LLM-assisted development...

Thank you for giving me the words!

22. michae+qFc[view] [source] [discussion] 2025-06-08 16:17:16
>>rerdav+Vp3
That doesn't sound like the sort of problem you'd use it for. I think it would be used for the ~10% of code in some applications that is part of the critical core. UI, not so much.