1. smokel+(OP) 2025-08-21 20:24:22
What you are describing also seems to align with the idea that greenfield projects are well-suited for AI, whereas brownfield projects are considerably more challenging.
replies(1): >>Quercu+D
2. Quercu+D 2025-08-21 20:28:35
>>smokel+(OP)
Brownfield projects are more challenging because of all the context and decisions that went into building things that are not directly defined in the code.

I suspect that well-engineered projects with plenty of test coverage and high-quality documentation will be easier to use AI on, just as they're easier for humans to comprehend. But you still need somebody with the big picture who can make sure things don't turn into a giant mess once less disciplined people start using AI on a project.

replies(2): >>smokel+V7 >>ab2525+tp
3. smokel+V7 2025-08-21 21:08:23
>>Quercu+D
Also, as soon as the code no longer fits within the context window of an LLM, one must resort to RAG-based solutions, which often leads to a significant decline in quality.
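For concreteness, here is a minimal sketch of that RAG fallback: chunk the repository, rank chunks against the task, and paste only the top few into the prompt. The chunk size, top-k, file pattern, and the crude lexical score (standing in for a real embedding model) are all illustrative assumptions, not any particular tool's behavior.

  from pathlib import Path
  from collections import Counter

  CHUNK_LINES = 40  # arbitrary chunk size
  TOP_K = 3         # how many chunks the remaining context budget allows

  def chunks(repo_root: str):
      # Split every Python file into fixed-size line chunks.
      for path in Path(repo_root).rglob("*.py"):
          lines = path.read_text(errors="ignore").splitlines()
          for i in range(0, len(lines), CHUNK_LINES):
              yield f"{path}:{i}", "\n".join(lines[i:i + CHUNK_LINES])

  def score(query: str, text: str) -> float:
      # Crude word-overlap score; a real system would use embedding similarity.
      q, t = Counter(query.lower().split()), Counter(text.lower().split())
      return sum((q & t).values())

  def build_prompt(repo_root: str, task: str) -> str:
      # Keep only the best-scoring chunks; everything else is invisible to the model.
      ranked = sorted(chunks(repo_root), key=lambda c: score(task, c[1]), reverse=True)
      context = "\n\n".join(f"# {name}\n{text}" for name, text in ranked[:TOP_K])
      return f"Relevant code:\n{context}\n\nTask: {task}"

  if __name__ == "__main__":
      print(build_prompt(".", "fix the retry logic in the HTTP client"))

Whatever doesn't make it into the top-k chunks simply isn't seen by the model, which is where the quality drop tends to come from.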
4. ab2525+tp 2025-08-21 23:02:58
>>Quercu+D
Well said - language (text input) is actually the vehicle you have to transfer neural state to the engine. When you are working in a greenfield or pure-vibe project, you can get away with most of that neural state being in the "default" probability mode. But in a legacy project, you need significantly more context to constrain the probability distributions much closer to the decisions that were made historically; otherwise you quickly get into spaghetti-ville as the AI tries to drag the codebase towards its natural ruts.