zlacker

4 comments
1. adastr+(OP) 2025-08-21 20:51:52
It's a matter of the tools not being there yet, though. If there were a summarization system that could compress the structure and history of the system you're working on, and then extract a half-filled context window of the relevant bits of the codebase and architecture for the task (in other words, generate that massive prompt for you), you might see the same results you get with Android apps.
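
To sketch what I mean (all names here are made up for illustration; llm() stands in for whatever model call you use, and the ~4 chars/token cap is just a rule of thumb):

    import os

    def llm(prompt: str) -> str:
        """Stand-in for a real model call; wire up whatever provider you use."""
        raise NotImplementedError

    def summarize_repo(root: str, exts=(".py", ".go", ".cpp", ".h")) -> str:
        """Map-reduce compression: summarize each file, then fold the
        summaries into one architecture digest."""
        file_summaries = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(exts):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    src = f.read()
                file_summaries.append(
                    llm(f"Summarize the role, key types, and interfaces of {path}:\n{src}")
                )
        return llm("Condense these into one architecture overview:\n"
                   + "\n".join(file_summaries))

    def build_prompt(digest: str, task: str, budget_tokens: int = 500_000) -> str:
        """Half-fill the window with distilled context; leave the rest for the task."""
        context = digest[: budget_tokens * 4]  # crude ~4 chars/token budget
        return f"Project context:\n{context}\n\nTask:\n{task}"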

The reason is that the boilerplate Android stuff is effectively given for free and doesn't need to be in the context, since it's so heavily represented in the training set, whereas the unique details of your work project are not. Finding a way to provide that context, or better yet to fine-tune the model on your codebase, would put you in the same situation, and there's no reason it wouldn't deliver the same results.

That it isn't working for you now on your complex work projects is a limitation of tooling, not something fundamental about how AI works.

Aside: your recommendation is right on. It clicked for me when I took a project I had spent months of full-time work creating in C++ and rewrote it in idiomatic Go, a language I had never used and knew nothing about. It took only a weekend, and at the end of the project I had reviewed and understood every line of generated code and was competent enough to write my own simple Go projects without AI help. I went from skeptic to convert right then and there.

replies(1): >>jerf+q1
2. jerf+q1 2025-08-21 20:58:51
>>adastr+(OP)
I agree that the complexity of task it can handle is likely to rise over time. I often talk about the "next generation" of AI that will actually be what we were promised LLMs would be, but that LLMs architecturally just aren't suited for. I think the time is coming when AIs "truly" (for some definition of truly) will understand architecture and systems in a way that LLMs don't and really can't, and will be able to do a lot more than they can now, though when that will be is hard to guess. Could be next year, or AI could stall out where it is now for the next 10 years. Nobody knows.

However, the information-theoretic limit on expressing what you want, and on how anyone, AI or otherwise, could turn that into commits, is going to be quite the barrier, because it's fundamental to communication itself. I don't think the skill of "having a very, very precise and detailed understanding of the actual problem" is going anywhere any time soon.

replies(1): >>adastr+Z3
3. adastr+Z3 2025-08-21 21:13:30
>>jerf+q1
Yes, but:

(1) The process of creating "a very, very precise and detailed understanding of the actual problem" is something AI is really good at when partnered with a human. My use of AI tools got immensely better when I figured out that I should prompt the AI to turn my vague, short request into a detailed prompt, then spend a few iteration cycles fixing it up before asking the agent to execute it. (A sketch of that loop is at the end of this comment.)

(2) The other problem, managing context, is a search and indexing problem, which we are really, really good at and have lots of tools for; AI is just so new that those tools haven't been adapted or seen wide use yet. If the limitation were the AI's internal reasoning or training, I would be more skeptical. But the limitation seems to be managing, indexing, compressing, searching, and distilling the appropriate context, which is firmly in the domain of solvable, albeit nontrivial, problems.
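
A toy illustration of (2), using word overlap in place of a real embedding model or BM25 (this is sketch-level, not how any shipping tool does it):

    def chunk(text: str, size: int = 2000) -> list[str]:
        """Fixed-size chunks; real tools split on function/AST boundaries."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    def score(query: str, doc: str) -> float:
        """Toy relevance via word overlap; a real index would use embeddings."""
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / (len(q) or 1)

    def select_context(query: str, chunks: list[str], budget_chars: int) -> str:
        """Greedily pack the highest-scoring chunks into the context budget.
        (Real tools would budget in tokens, not characters.)"""
        picked, used = [], 0
        for c in sorted(chunks, key=lambda c: score(query, c), reverse=True):
            if used + len(c) > budget_chars:
                break  # stop once the budget is full
            picked.append(c)
            used += len(c)
        return "\n---\n".join(picked)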

I don't see the information-theoretic barrier you refer to. The amount of information an AI can keep in its context window far exceeds what I have easily accessible to my working memory.
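
And here is roughly the loop I described in (1), with llm() again standing in for whatever model call you use; everything here is illustrative, not any particular tool's API:

    def llm(prompt: str) -> str:
        """Stand-in for a real model call."""
        raise NotImplementedError

    def expand_request(vague_request: str, max_rounds: int = 3) -> str:
        """Have the model draft a detailed spec, then let the human edit it
        a few times before it goes to the coding agent."""
        spec = llm("Rewrite this request as a detailed, unambiguous task spec, "
                   "listing assumptions and open questions:\n" + vague_request)
        for _ in range(max_rounds):
            print(spec)
            fixes = input("Corrections (empty to accept): ")
            if not fixes:
                break
            spec = llm(f"Revise the spec per these corrections.\n"
                       f"Spec:\n{spec}\nCorrections:\n{fixes}")
        return spec  # this is the prompt the agent actually gets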

replies(1): >>jerf+7E1
4. jerf+7E1 2025-08-22 13:13:01
>>adastr+Z3
The information-theoretic barrier is in the information content of your prompt, not the ability of the AI to expand it. Expansion can add boilerplate from the model's priors, but it can't add information about your intent that the prompt didn't carry in the first place.

But then I suppose I should learn from my own experience and not try to make information-theoretic arguments on HN, since it is in that most terrible state where everyone thinks they understand the subject because they use "bits" all the time, when in fact the average HN denizen knows less than nothing about it, because even their definition of "bit" actively misleads them, and that's about all they know.

replies(1): >>adastr+yW1
5. adastr+yW1 2025-08-22 14:55:39
>>jerf+7E1
I have a CS theory background. If by "prompt" you mean the full context provided, then there isn't an effective limit: Claude now has 1M-token context windows, and you are not going to fill that with just a task specification. You could easily fill it in a large repo with the accumulated design history and the whole codebase, but that material is largely static and could instead be baked in by fine-tuning. With fine-tuning, you get the full 1M tokens back for the unique parts: this task's specification and the recent changes. You are not going to easily fill that.
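
To make that concrete with made-up but plausible numbers (the ~4 chars/token ratio is a common rule of thumb, not a measurement):

    CHARS_PER_TOKEN = 4          # rough rule of thumb for English and code
    WINDOW = 1_000_000           # tokens

    # A big repo: say 2M lines of code at ~40 chars/line.
    repo_tokens = (2_000_000 * 40) // CHARS_PER_TOKEN      # 20,000,000
    # A very detailed task spec: ~10 pages at ~3,000 chars/page.
    spec_tokens = (10 * 3_000) // CHARS_PER_TOKEN          # 7,500

    print(f"repo fills the window {repo_tokens / WINDOW:.0f}x over")   # 20x
    print(f"spec uses {spec_tokens / WINDOW:.2%} of the window")       # 0.75%

The repo blows past the window by an order of magnitude, while even a generous spec barely registers, which is why moving the static part into the weights changes the picture.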