zlacker

[parent] [thread] 46 comments
1. raxxor+(OP)[view] [source] 2025-06-03 09:29:26
The better I am at solving a problem, the less I use AI assistants. I use them when I'm trying a new language or framework.

Busy code I need to generate is difficult to do with AI too, because you first have to formalize the necessary context for the assistant, which is exhausting and still gives no sure result. So perhaps it is just simpler to write it yourself quickly.

I understand comments being negative, because there is so much AI hype without too many practical applications yet. Or at least good practical applications. Some of that hype is justified, some of it is not. I enjoyed the image/video/audio synthesis hype more tbh.

Test cases are quite helpful and comments are decent too. But often prompting is more complex than programming something. And you can never be sure whether any answer is usable.

replies(4): >>brular+6d >>avemur+Ld >>Cthulh+Vh >>echelo+GU
2. brular+6d[view] [source] 2025-06-03 11:35:43
>>raxxor+(OP)
> But often prompting is more complex than programming something.

It may be more complex, but in my opinion it is better long term. We need to get good at communicating with AIs to get the results we want. Forgive me for assuming that you probably haven't used these assistants long enough to get good at using them. I've been a web developer for 20 years already, and AI tools are multiplying my output even on problems I'm very good at. And they are getting better very quickly.
replies(1): >>Goblin+nm
3. avemur+Ld[view] [source] 2025-06-03 11:42:08
>>raxxor+(OP)
I agree with your points but I'm also reminded of one of my bigger learnings as a manager: the stuff I'm best at is the hardest, but most important, to delegate.

Sure, it was easier to do it myself. But putting in the time to train, give context, develop guardrails, learn how to monitor etc. ultimately taught me the skills needed to delegate effectively and multiply the team's output massively as we added people.

It's early days but I'm getting the same feeling with LLMs. It's as exhausting as training an overconfident but talented intern, but if you can work through it and somehow get it to produce something as good as you would do yourself, it's a massive multiplier.

replies(3): >>johnma+6o >>conart+co >>Goblin+4r
4. Cthulh+Vh[view] [source] 2025-06-03 12:19:46
>>raxxor+(OP)
> But often prompting is more complex than programming something.

I'd challenge this one; is it more complex, or is all the thinking and decision making concentrated into a single sentence or paragraph? For me, programming something is taking a big high-level problem and breaking it down into smaller and smaller sections until each is a line of code; the lines of code are relatively low effort / cost little brain power. But in my experience, the problem itself and its nuances are only fully defined once all the code is written. If you have to prompt an AI to write it, you need to define the problem beforehand.

It's more design and more thinking upfront, which is something the development community has moved away from in the past ~20 years with the rise of agile development and open source. Techniques like TDD have shifted more of the problem definition forwards as you have to think about your desired outcomes before writing code, but I'm pretty sure (I have no figures) it's only a minority of developers that have the self-discipline to practice test-driven development consistently.

(disclaimer: I don't use AI much, and my employer isn't yet looking into or paying for agentic coding, so it's chat style or inline code suggestions)

replies(4): >>sksiso+nz >>algori+TU >>starlu+Vd1 >>bcrosb+ph1
◧◩
5. Goblin+nm[view] [source] [discussion] 2025-06-03 12:50:33
>>brular+6d
Yep, it looks like LLMs are used as fast typists. And coincidentally, in webdev typing speed is the most important bottleneck: when you need to add cookie consent, spinners, dozens of ad providers, tracking pixels, twitter metadata, google metadata, manual rendering, button web components with material design and react, hover panels, fontawesome, recaptcha, and that's only 1% of modern web boilerplate, it's easy to see how a fast typist can help you.
◧◩
6. johnma+6o[view] [source] [discussion] 2025-06-03 13:01:48
>>avemur+Ld
I don't totally understand the parallel you're drawing here. As a manager, I assume you're training more junior (in terms of their career or the company) engineers up so they can perform more autonomously in the future.

But you're not training LLMs as you use them really - do you mean that it's best to develop your own skill using LLMs in an area you already understand well?

I'm finding it a bit hard to square your comment about it being exhausting to cat-herd the LLM with it being a force multiplier.

replies(2): >>wpietr+dx >>avemur+1R
◧◩
7. conart+co[view] [source] [discussion] 2025-06-03 13:02:08
>>avemur+Ld
But... But... the multiplier isn't NEW!

You just explained how your work was affected by a big multiplier. At the end of training an intern you get a trained intern -- potentially a huge multiplier. ChatGPT is like an intern you can never train and who will never get much better.

These are the same people who would no longer create or participate deeply in OSS (a +100x multiplier) bragging about the +2x multiplier they got in exchange.

replies(1): >>conart+rq
◧◩◪
8. conart+rq[view] [source] [discussion] 2025-06-03 13:13:14
>>conart+co
The first person you pass your knowledge onto can pass it onto a second. ChatGPT will not only never build knowledge, it will never turn from the learner to the mentor passing hard-won knowledge on to another learner.
◧◩
9. Goblin+4r[view] [source] [discussion] 2025-06-03 13:16:09
>>avemur+Ld
Do LLMs learn? I was under the impression that you borrow a pretrained LLM that handles each query starting from the same initial state.
replies(2): >>simonw+lu >>bodega+xA
◧◩◪
10. simonw+lu[view] [source] [discussion] 2025-06-03 13:34:22
>>Goblin+4r
No, LLMs don't learn - each new conversation effectively clears the slate and resets them to their original state.

If you know what you're doing you can still "teach" them, but it's on you to do that: you need to keep iterating on things like the system prompt you are using and the context you feed into the model.
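A rough sketch of what I mean, in Python (the prompt contents and message format here are just an illustration):

    # Nothing persists inside the model between conversations; the
    # "teaching" lives entirely in the prompt you rebuild and resend.
    SYSTEM_PROMPT = """You are a coding assistant.
    Lessons learned from past sessions (maintained by hand):
    - Prefer the project's existing error-handling helpers.
    - Never add new dependencies without asking."""

    def start_conversation(user_message: str) -> list[dict]:
        # Every conversation starts from the same blank slate; the
        # system prompt is how accumulated lessons get back in.
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ]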

replies(2): >>rerdav+Tc1 >>runarb+sK1
◧◩◪
11. wpietr+dx[view] [source] [discussion] 2025-06-03 13:47:46
>>johnma+6o
Great point.

Humans really like to anthropomorphize things. Loud rumbles in the clouds? There must be a dude on top of a mountain somewhere who's in charge of it. Impressed by that tree? It must have a spirit that's like our spirits.

I think a lot of the reason LLMs are enjoying such a huge hype wave is that they invite that sort of anthropomorphization. It can be really hard to think about them in terms of what they actually are, because both our head-meat and our culture have so much support for casting things as other people.

◧◩
12. sksiso+nz[view] [source] [discussion] 2025-06-03 13:58:45
>>Cthulh+Vh
The issue with prompting is that English (or any other human language) is nowhere near as rigid or strict as a programming language. Almost always, an idea can be expressed much more succinctly in code than in language.

Combine that with the fact that, when you're reading the code, it's often much easier to develop a prototype solution as you go, and prompting ends up feeling like using four men to carry a wheelbarrow instead of having one push it.

replies(1): >>michae+5Y
◧◩◪
13. bodega+xA[view] [source] [discussion] 2025-06-03 14:05:26
>>Goblin+4r
Yes, with few shots: you need to provide at least 2 examples of similar instructions and their corresponding solutions. But when you have to build the few-shot examples every time you prompt, it feels like you're doing the work already.

Edit: grammar
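To illustrate, a minimal few-shot sketch (the task and examples here are made up):

    # Two worked examples "teach" the model the pattern for this one
    # request only - nothing persists once the conversation ends.
    few_shot_messages = [
        {"role": "system", "content": "Convert product names to URL slugs."},
        # example 1
        {"role": "user", "content": "Widget Pro 2000"},
        {"role": "assistant", "content": "widget-pro-2000"},
        # example 2
        {"role": "user", "content": "Café Déluxe (Limited)"},
        {"role": "assistant", "content": "cafe-deluxe-limited"},
        # the actual query, answered by analogy with the examples
        {"role": "user", "content": "SuperCharger XL!"},
    ]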

◧◩◪
14. avemur+1R[view] [source] [discussion] 2025-06-03 15:52:22
>>johnma+6o
No, I'm talking about my own skills: how I onboard, structure 1-on-1s, run meetings, create and reuse certain processes, manage documentation (a form of org memory), check in on status, and devise metrics and other indicators of system health. All of these compound and provide leverage even if a person leaves and a new one enters. The 30th person I onboarded and managed was orders of magnitude easier (for both of us) than the first.

With LLMs, the better I get at the scaffolding and prompting, the less it feels like cat-herding (so far at least). Hence the comparison.

15. echelo+GU[view] [source] 2025-06-03 16:09:55
>>raxxor+(OP)
> The better I am at solving a problem, the less I use AI assistants.

Yes, but you're expensive.

And these models are getting better at solving a lot of business-relevant problems.

Soon all business-relevant problems will be bent to the shape of the LLM because it's cost-effective.

replies(2): >>onemor+mt1 >>sorami+ZG2
◧◩
16. algori+TU[view] [source] [discussion] 2025-06-03 16:10:39
>>Cthulh+Vh
> It's more design and more thinking upfront, which is something the development community has moved away from in the past ~20 years with the rise of agile development and open source.

I agree, but even smaller than thinking in agile is just a tight iteration loop when I'm exploring a design. My ADHD makes upfront design a challenge for me, and I am personally much more effective starting with a sketch of what needs to be done and then iterating on it until I get a good result.

The loop of prompt->study->prompt->study... is disruptive to my inner loop for several reasons, but a big one is that the machine doesn't "think" like I do. So the solutions it scaffolds commonly make me say "huh?", and I have to change my thought process to interpret them and then study them for mistakes. My intuition and iteration are, for the time being, more effective than this machine-assisted loop for the really "interesting" code I have to write.

But I will say that AI has been a big time saver for more mundane tasks, especially when I can say "use this example and apply it to the rest of this code/abstraction".

replies(1): >>samsep+Rmc
◧◩◪
17. michae+5Y[view] [source] [discussion] 2025-06-03 16:27:40
>>sksiso+nz
I think we are going to end up with a common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want; if not for the AI, then for the humans who have to grapple with what is made.
replies(1): >>rerdav+ab1
◧◩◪◨
18. rerdav+ab1[view] [source] [discussion] 2025-06-03 17:44:08
>>michae+5Y
Sounds like "Heavy process". "Specifying exact semantics" has been tried before and ended unimaginably badly.
replies(1): >>bcrosb+Qh1
◧◩◪◨
19. rerdav+Tc1[view] [source] [discussion] 2025-06-03 17:53:04
>>simonw+lu
That's mostly, but not completely true. There are various strategies to get LLMs to remember previous conversations. ChatGPT, for example, remembers (for some loose definition of "remembers") all previous conversations you've had with it.
replies(2): >>runarb+TM1 >>simonw+n22
◧◩
20. starlu+Vd1[view] [source] [discussion] 2025-06-03 17:59:32
>>Cthulh+Vh
A big challenge is that programmers all have a unique, ever-changing personal style and vision that they've never had to communicate before. They also tend to "bikeshed" and add undefined, unrequested requirements, because you know, someday we might need to support 10000x more users than we have. This is all well and good when the programmer implements something themselves, but it falls apart when it must be communicated to an LLM. Most projects/systems/orgs don't have the necessary level of detail in their documentation, documentation is fragmented across git/jira/confluence/etc/etc/etc., and it's a hodgepodge of technologies without a semblance of consistency.

I think we'll find that over the next few years the first really big win will be AI tearing down the mountain of tech & documentation debt. Bringing efficiency to corporate knowledge is likely a key element to AI working within them.

replies(1): >>mlsu+Km1
◧◩
21. bcrosb+ph1[view] [source] [discussion] 2025-06-03 18:19:05
>>Cthulh+Vh
I design and think upfront but I don't write it down until I start coding. I can do this for pretty large chunks of code at once.

The fastest way I can transcribe a design is with code or pseudocode. Converting it into English can be hard.

It reminds me a bit of the discussion of whether you have an inner monologue. I don't, and turning thoughts into English takes work, especially if you need to be specific about what you want.

replies(1): >>averag+PV1
◧◩◪◨⬒
22. bcrosb+Qh1[view] [source] [discussion] 2025-06-03 18:21:41
>>rerdav+ab1
Nah, imagine a programming language optimized for creating specifications.

Feed it to an LLM and it implements it. Ideally it can also verify its solution against your specification code. If LLMs don't gain significantly more general capabilities, I could see this happening in the longer term. But it's too early to say.

In a sense the LLM turns into a compiler.
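To make it concrete, the "specification" could be executable property-based tests that the generated code must pass. A hypothetical sketch using Python's hypothesis library (sort_records stands in for whatever the LLM emits):

    from hypothesis import given, strategies as st

    def sort_records(xs):
        # stand-in for LLM-generated code under test
        return sorted(xs)

    # The spec: must hold for any list of integers - compiler-style checking.
    @given(st.lists(st.integers()))
    def test_sorting_spec(xs):
        out = sort_records(xs)
        assert out == sorted(xs)    # correct ordering
        assert len(out) == len(xs)  # nothing dropped or invented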

replies(2): >>cess11+2l1 >>rerdav+QH3
◧◩◪◨⬒⬓
23. cess11+2l1[view] [source] [discussion] 2025-06-03 18:40:32
>>bcrosb+Qh1
We've had that for a long, long time. Notably RAD-tooling running on XML.

The main lesson has been that it's actually not much of an enabler and the people doing it end up being specialised and rather expensive consultants.

replies(1): >>Camper+vy1
◧◩◪
24. mlsu+Km1[view] [source] [discussion] 2025-06-03 18:52:19
>>starlu+Vd1
Efficiency to corporate knowledge? Absolutely not, no way. My coworkers are beginning to use AI to write PR descriptions and git commits.

I notice, because the amount of text has increased tenfold while the amount of information has stayed exactly the same.

This is a torrent of shit coming down on us that we are all going to have to deal with. The vibe coders will be gleefully putting up PRs with 12 paragraphs of "descriptive" text. Thanks, no thanks!

replies(2): >>aloisd+3Z1 >>starlu+0w3
◧◩
25. onemor+mt1[view] [source] [discussion] 2025-06-03 19:30:57
>>echelo+GU
You're forgetting how much money is being burned in keeping these LLMs cheap. Remember when Uber was a fraction of the cost of a cab? Yeah, those days didn't last.
replies(3): >>a4isms+ZK1 >>averag+sW1 >>ido+Zx2
◧◩◪◨⬒⬓⬔
26. Camper+vy1[view] [source] [discussion] 2025-06-03 19:59:40
>>cess11+2l1
RAD before transformers was like trying to build an iPhone before capacitive multitouch: a total waste of time.

Things are different now.

replies(1): >>cess11+Lz1
◧◩◪◨⬒⬓⬔⧯
27. cess11+Lz1[view] [source] [discussion] 2025-06-03 20:05:48
>>Camper+vy1
I'm not so sure. What can you show me that you think would be convincing?
replies(1): >>Camper+hE1
◧◩◪◨⬒⬓⬔⧯▣
28. Camper+hE1[view] [source] [discussion] 2025-06-03 20:31:41
>>cess11+Lz1
I think there are enough examples of genuine AI-facilitated rapid application development out there already, honestly. I wouldn't have anything to add to the pile, since I'm not a RAD kind of guy.

Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.

replies(2): >>sorami+2q2 >>cess11+yz2
◧◩◪◨
29. runarb+sK1[view] [source] [discussion] 2025-06-03 21:10:36
>>simonw+lu
This sounds like trying to glue on supervised learning post-hoc.

Makes me wonder: if there had been equal investment in specialized tools that used more fine-tuned statistical methods (like supervised learning), would we have something much better than LLMs?

I keep thinking about spell checkers and auto-translators, which have been using machine learning for a while with pretty impressive results (unless I'm mistaken, I think most of those use supervised learning models). I have no doubt we will start seeing companies replace these proven models with an LLM, with a noticeable reduction in quality.

◧◩◪
30. a4isms+ZK1[view] [source] [discussion] 2025-06-03 21:13:35
>>onemor+mt1
I have been in this industry since the mid 80s. I can't tell you how many people worry that I can't handle change because as a veteran, I must cling to what was. Meanwhile, of course, the reason I am still in the industry is because of my plasticity. Nothing is as it was for me, and I have changed just about everything about how I work multiple times. But what does stay the same all this time are people and businesses and how we/they behave.

Which brings me to your comment. The comparison to Uber drivers is apt, and to use a fashionable word these days, the threat to people and startups alike is "enshittification." These tools are not sold, they are rented. Should a few behemoths gain effective control of the market, we know from history that we won't see these tools become commodities and nearly free, we'll see the users of these tools (again, both people and businesses) squeezed until their margins are paper-thin.

Back when articles by Joel Spolsky regularly hit the top page of Hacker News, he wrote "Strategy Letter V:" https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

The relevant takeaway was that companies try to commoditize their complements, and for LLM vendors, every startup is a complement. A brick-and-mortar metaphor is that of a retailer in a mall. If you as a retailer are paying more in rent than you're making, you are "working for the landlord," just as if you are making less than 30% of profit on everything you sell or rent through Apple's App Store, you're working for Apple.

I once described that as "Sharecropping in Apple's Orchard," and if I'm hesitant about the direction we're going, it's not anything about clinging to punch cards and ferromagnetic RAM, it's more the worry that it's not just a question of programmers becoming enshittified by their tools, it's also the entire notion of a software business "Sharecropping the LLM vendor's fields."

We spend way too much time talking about programming itself and not enough about whither the software business if its leverage is bound to tools that can only be rented on terms set by vendors.

--------

I don't know for certain where things will go or how we'll get there. I actually like the idea that a solo founder could create a billion-dollar company with no employees in my lifetime. And I have always liked the idea of software being "Wheels for the Mind," and we could be on a path to that, rather than turning humans into "reverse centaurs" that labour for the software rather than the other way around.

Once upon a time, VCs would always ask a startup, "What is your Plan B should you start getting traction and then Microsoft decides to compete with you/commoditize you by giving the same thing away?" That era passed, and Paul Graham celebrated it: https://paulgraham.com/microsoft.html

Then when startups became cheap to launch—thank you increased tech leverage and cheap money and YCombinator industrializing early-stage venture capital—the question became, "What is your moat against three smart kids launching a competitor?"

Now I wonder if the key question will bifurcate:

1. What is your moat against somebody launching competition even more cheaply than smart kids with YCombinator's backing, and;

2. How are you insulated against the cost of load-bearing tooling for everything in your business becoming arbitrarily more expensive?

◧◩◪◨⬒
31. runarb+TM1[view] [source] [discussion] 2025-06-03 21:27:03
>>rerdav+Tc1
I think if you use a very loose definition of learning (a stimulus that alters subsequent behavior) you can claim this is learning. But if you tell a human to replace the word "is" with "are" in the next two sentences, this could hardly be considered learning; rather, it is just following commands, even though it meets the previous loose definition. This is why in psychology we usually include some timescale for how long the altered behavior must last for it to be considered learning. A short-term altered behavior is usually called priming. But even then, I wouldn't consider "following commands" to be either priming or learning; I would simply call it obeying.

If an LLM learned something when you gave it commands, it would probably be reflected in adjusted weights in some of its operational matrices. This is true of human learning: we strengthen some neural connection, and when we receive a similar stimulus in a similar situation sometime in the future, the new stimulus will follow a slightly different path along its neural pathway and result in an altered behavior (or at least have a greater probability of an altered behavior). For an LLM to "learn", I would like to see something similar.

replies(1): >>rerdav+FK3
◧◩◪
32. averag+PV1[view] [source] [discussion] 2025-06-03 22:31:15
>>bcrosb+ph1
I also don't have an inner monologue and can relate somewhat. However, I find that natural language (usually) allows me to be more expressive than pseudocode in the same period of time.

There's also an intangible benefit of having someone to "bounce off". If I'm using an LLM, I am tweaking the system prompt to slow it down, make it ask questions, and bug me before making changes. Even without that, writing out the idea quickly exposes potential logic or approach flaws, much faster than writing pseudocode in my experience.
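For illustration, the kind of instruction I add to the system prompt (a hypothetical example):

    Before making any change, restate the problem in one sentence,
    list the files you intend to touch, and ask me at least one
    clarifying question. Do not write code until I confirm.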

◧◩◪
33. averag+sW1[view] [source] [discussion] 2025-06-03 22:37:33
>>onemor+mt1
> Remember when Uber was a fraction of the cost of a cab? Yeah, those days didn't last.

They're still much cheaper where I am. But regardless, why not take the Uber while it's cheaper?

There's the argument of the taxi industry collapsing (it hasn't yet). Is your concern some sort of long-term knowledge loss from programmers followed by a rug pull? There are many good LLM options out there, they're getting cheaper, and the knowledge loss wouldn't be impactful (and rug-pullable) for at least a decade or so.

◧◩◪◨
34. aloisd+3Z1[view] [source] [discussion] 2025-06-03 23:00:25
>>mlsu+Km1
Use an LLM to summarize the PR /j
◧◩◪◨⬒
35. simonw+n22[view] [source] [discussion] 2025-06-03 23:34:57
>>rerdav+Tc1
I'd count ChatGPT memory as a feature of ChatGPT, not of the underlying LLM.

I wrote a bit about that here - I've turned it off: https://simonwillison.net/2025/May/21/chatgpt-new-memory/

◧◩◪◨⬒⬓⬔⧯▣▦
36. sorami+2q2[view] [source] [discussion] 2025-06-04 04:56:36
>>Camper+hE1
That's a straw man. Asking for real examples to back up your claims isn't overt perfectionism.
replies(1): >>Camper+Gt2
◧◩◪◨⬒⬓⬔⧯▣▦▧
37. Camper+Gt2[view] [source] [discussion] 2025-06-04 05:47:52
>>sorami+2q2
If you weren't paying attention to what's been happening for the last couple of years, you certainly won't believe anything I have to say.

Trust me on this, at least: I don't need the typing practice.

◧◩◪
38. ido+Zx2[view] [source] [discussion] 2025-06-04 06:50:54
>>onemor+mt1
Even at 100x the cost (currently $20/month for most of these via subscriptions) it’s still cheaper than an intern, let alone a senior dev.
replies(1): >>nipah+DLk
◧◩◪◨⬒⬓⬔⧯▣▦
39. cess11+yz2[view] [source] [discussion] 2025-06-04 07:08:17
>>Camper+hE1
"Nothing" would have been shorter and more convenient for us both.
◧◩
40. sorami+ZG2[view] [source] [discussion] 2025-06-04 08:18:49
>>echelo+GU
Actually, I agree. It won't be long before businesses handle software engineering like Google does "support." You know, that robotic system that sends out passive-aggressive mocking emails to people who got screwed over by another robot that locks them out of their digital lives for made up reasons [1]. It saves the suits a ton of cash while letting them dodge any responsibility for the inevitable harm it'll cause to society. Mediocrity will be seen as a feature, and the worst part is, the zealots will wave it like a badge of honor.

[1]: >>26061935

◧◩◪◨
41. starlu+0w3[view] [source] [discussion] 2025-06-04 15:13:13
>>mlsu+Km1
Well, I'm certainly not saying that AI should generate more corporate spam. That's part of the problem! And also a strawman argument!
◧◩◪◨⬒⬓
42. rerdav+QH3[view] [source] [discussion] 2025-06-04 16:11:12
>>bcrosb+Qh1
It's an interesting idea. I get it. Although I wonder... do you really need formal languages anymore, now that we have LLMs that can take natural language specifications as input?

I tried running the idea on a programming task I did yesterday: "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admittedly a very ugly dialog, but all the fields and labels and controls were there in the right order with the right labels, all properly bound to props of a React control, and grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts next time.

Anyway, I next thought about how I would specify the behavior I wanted. The informal specification would be: "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select "Beats", set the tempo to 120, and press the back button. Verify that the Start text edit now contains "30:1" (the same time expressed in bars and beats). Set it to 10:1, press the back button, and verify that the corresponding "Loop" <description of storage for that data omitted for clarity> for the currently selected plugin contains 20.0." I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).

Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural language AI prompts to generate the specifications. Which makes me wonder why anyone needs a formal language for that.

And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling access to the UI controls, though....
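For the curious, roughly what I'd expect the generated test to look like. A hypothetical Playwright sketch; the selectors, URL, and the stored-state check are all invented:

    from playwright.sync_api import sync_playwright, expect

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("http://localhost:3000")  # invented dev-server URL

        page.get_by_role("button", name="Looping").click()
        page.get_by_label("Start").fill("1:00")
        page.get_by_role("button", name="Timebase").click()
        page.get_by_label("Beats").check()
        page.get_by_label("Tempo").fill("120")
        page.get_by_role("button", name="Back").click()

        # 1:00 at 120 BPM, re-expressed in bars and beats
        expect(page.get_by_label("Start")).to_have_value("30:1")

        page.get_by_label("Start").fill("10:1")
        page.get_by_role("button", name="Back").click()
        # checking the stored plugin state would need a helper the
        # AI can't guess, e.g.: assert read_loop_storage() == 20.0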

replies(1): >>michae+lXc
◧◩◪◨⬒⬓
43. rerdav+FK3[view] [source] [discussion] 2025-06-04 16:27:52
>>runarb+TM1
I think you have an overly strict definition of what "learning" means. ChatGPT now has memory that lasts beyond the lifetime of its context buffer, so it has at least medium-term memory. (Actually, I'm not entirely sure they aren't just using long persistent context buffers, but anyway.)

Admittedly, you have to wrap LLMs with extra machinery to get them to do that. If you want to rewrite the rules to exclude that, then I will have to revise my statement that it is "mostly, but not completely true".

:-P

replies(1): >>runarb+Fb4
◧◩◪◨⬒⬓⬔
44. runarb+Fb4[view] [source] [discussion] 2025-06-04 18:51:57
>>rerdav+FK3
You also have to alter some neural pathways in your brain to follow commands. That doesn't make it learning. Learned behavior is usually (but not always) reflected in long-term changes to neural pathways outside the language centers of the brain, and outside short-term memory. Once you forget the command and still apply the behavior, that is learning.

I think SRS schedulers are a good example of a machine learning algorithm that learns from its previous interactions. If you run the optimizer you will end up with a different weight matrix, and flashcards will be scheduled differently: it has learned how well you retain those cards. But an LLM that is simply following orders has not learned anything, unless you feed the previous interaction back into the system to alter future outcomes, regardless of whether it "remembers" the original interactions. With the SRS, your review history is completely forgotten about: you could delete it, but the weight matrix keeps the optimized weights. If you delete your chat history with ChatGPT, it will not behave any differently based on the previous interaction.
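To make the distinction concrete, a toy sketch (invented numbers, simple exponential-forgetting model). The optimizer folds the review history into a weight, after which the history itself is disposable:

    import math

    # (days since last review, recalled?) pairs - the "interactions"
    history = [(1.0, 1), (3.0, 1), (7.0, 0), (2.0, 1)]

    half_life = 2.0  # the learned weight: memory half-life in days
    lr = 0.2
    for _ in range(200):
        grad = 0.0
        for days, recalled in history:
            p = math.exp(-days * math.log(2) / half_life)  # predicted recall
            dp = p * days * math.log(2) / half_life**2     # d(p)/d(half_life)
            grad += 2 * (p - recalled) * dp                # squared-error gradient
        half_life -= lr * grad

    history.clear()  # the interaction log is gone...
    print(f"...but the learned half-life remains: {half_life:.2f} days")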

◧◩◪
45. samsep+Rmc[view] [source] [discussion] 2025-06-08 08:35:02
>>algori+TU
> "The loop of prompt->study->prompt->study... is disruptive to my inner loop for several reasons, but a big one is that the machine doesn't "think" like I do. So the solutions it scaffolds commonly make me say "huh?", and I have to change my thought process to interpret them and then study them for mistakes. My intuition and iteration are, for the time being, more effective than this machine-assisted loop..."

My thoughts exactly as an ADHD dev.

Was having trouble describing my main issue with LLM-assisted development...

Thank you for giving me the words!

◧◩◪◨⬒⬓⬔
46. michae+lXc[view] [source] [discussion] 2025-06-08 16:17:16
>>rerdav+QH3
That doesn't sound like the sort of problem you'd use it for. I think it would be used for the ~10% of code you have in some applications that are part of the critical core. UI, not so much.
◧◩◪◨
47. nipah+DLk[view] [source] [discussion] 2025-06-11 14:16:57
>>ido+Zx2
I'm sorry, but 2000 USD per month is MUCH more costly than an engineer from a third-world country; it can basically pay for a senior where I live. Even 200 USD is sufficient for an intern here. The problem with your point is that it doesn't account for the fact that this work can be done all over the world.
[go to top]