zlacker

[return to "GitHub Copilot Coding Agent"]
1. Scene_+V4 2025-05-19 16:47:11
>>net01+(OP)
I tried doing some vibe coding on a greenfield project (using gemini 2.5 pro + cline). On one hand - super impressive, a major productivity booster (even compared to using a non-integrated LLM chat interface).

I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example: I noticed them breaking abstractions (putting things where they don't belong). Unfortunately, they show little self-reflection on these aspects if you ask about the quality of the code or whether there are better ways of doing it. Of course, if you pick up that something is in the wrong spot and prompt more specifically, they'll fix it immediately.

I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user including coding tasks, I was averaging maybe $20 a month.)

2. SkyPun+8I 2025-05-19 20:01:17
>>Scene_+V4
I loathe using AI in a greenfield project. There are simply too many possible paths, so it seems to switch randomly between approaches.

In a brownfield code base, I can often provide it reference files to pattern match against. So much easier to get great results when it can anchor itself in the rest of your code base.

3. imiric+541 2025-05-19 22:16:01
>>SkyPun+8I
The trick for greenfield projects is to use it to help you design detailed specs and a tentative implementation plan. Just bounce some ideas off of it, as with a somewhat smarter rubber duck, and hone the design until you arrive at something you're happy with. Then feed the detailed implementation plan step by step to another model or session.

This is a popular workflow I first read about here[1].

This has been the most useful use case for LLMs for me. Actually getting them to implement the spec correctly is the hard part, and you'll have to take the reins and course-correct often.

[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
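The workflow above (design a spec interactively, then feed the implementation plan to a model one step at a time, carrying earlier results forward) can be sketched roughly like this. `call_llm` is a hypothetical stand-in for whatever provider client you actually use (Gemini, Claude, etc.); the step-splitting and history-threading are assumptions about one way to structure it, not the blog post's exact method:

```python
# Sketch of the "spec first, then step-by-step implementation" workflow.
# call_llm is a hypothetical placeholder; swap in your real API client.

def call_llm(prompt: str, history: list[str]) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    return f"[model output for: {prompt[:40]}]"

def plan_to_steps(plan: str) -> list[str]:
    """Split a numbered implementation plan into individual steps."""
    return [line.strip() for line in plan.splitlines()
            if line.strip() and line.strip()[0].isdigit()]

def implement_plan(plan: str) -> list[str]:
    """Feed each step to the model in order, threading prior outputs
    through as context so later steps can build on earlier ones."""
    history: list[str] = []
    outputs: list[str] = []
    for step in plan_to_steps(plan):
        out = call_llm(f"Implement this step: {step}", history)
        history.append(out)
        outputs.append(out)
    return outputs

plan = """
1. Define the data model
2. Write the storage layer
3. Add the HTTP handlers
"""
results = implement_plan(plan)
print(len(results))  # → 3, one model call per plan step
```

The point of keeping each step small is exactly the "course correct often" part: you review one step's output before the next prompt goes out, rather than auditing a whole generated codebase at once.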

4. rcarmo+yR1 2025-05-20 07:01:37
>>imiric+541
Here’s my workflow, which takes that a few steps further: https://taoofmac.com/space/blog/2025/05/13/2230