zlacker

[parent] [thread] 2 comments
1. kimixa+(OP)[view] [source] 2026-02-04 11:48:25
Yup - I've likened it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you regularly have to step in and correct things - normally around local details and project specifics. But then the "junior" you work with changes every day, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case - either massively increased context sizes, so they can hold something close to a career's worth of learning (without the tendency to start ignoring that context, as the larger of today's still-way-too-small-for-this context windows already do), or continuous training passes that integrate these "learnings" directly into the weights themselves - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even if you ignore cost.

replies(1): >>throwt+Me
2. throwt+Me[view] [source] 2026-02-04 13:32:02
>>kimixa+(OP)
Try writing more documentation. If your project is bigger than a one-man team you need it anyway, and with LLM coding you effectively have an infinite-man team.
replies(1): >>kimixa+iO3
3. kimixa+iO3[view] [source] [discussion] 2026-02-05 13:21:16
>>throwt+Me
But that doesn't actually work for my use cases. Plenty of other people have already told me I'm "Holding It Wrong" without suggestions that actually work, so I've started ignoring them. At this stage I just assume people work in very different sectors: some see the "great benefits" often proselytized on the internet, and other areas don't. Systems programming, where I work, seems to be a poor fit - possibly due to a relative lack of content in the training corpus, perhaps because company-internal styles and APIs mean that simply describing them takes up a huge amount of the context, leaving little for further corrections or details, or some other failure mode.

We have lots of documentation. Arguably too much - relevant documentation alone quickly fills much of the Claude Opus context window, and even then the model repeatedly outputs things directly counter to the documentation it just ingested.
