[return to "GitHub Copilot Coding Agent"]
1. Scene_+V4 2025-05-19 16:47:11
>>net01+(OP)
I tried doing some vibe coding on a greenfield project (using Gemini 2.5 Pro + Cline). On one hand, it's super impressive and a major productivity booster (even compared to using a non-integrated LLM chat interface).

On the other hand, I noticed that LLMs need a very heavy hand in guiding the architecture; otherwise they'll add architectural tech debt. One easy example: they break abstractions (put things where they don't belong). Unfortunately, there's not much self-reflection on these aspects if you ask about the quality of the code or whether there are better ways of doing it. Of course, if you notice that something is in the wrong spot and prompt for it specifically, they'll pick up on it immediately.
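
To make that concrete, here's a toy sketch (made up for this comment, not code from my actual project) of the kind of abstraction-breaking I mean:

    # Existing abstraction: all order data access is supposed to go through this class.
    class OrderRepository:
        def __init__(self) -> None:
            self._orders: dict[int, float] = {101: 42.50, 102: 13.99}

        def get_total(self, order_id: int) -> float:
            return self._orders.get(order_id, 0.0)

    # Well-guided version: the handler depends only on the repository's interface.
    def send_receipt(order_id: int, repo: OrderRepository) -> str:
        return f"Your order total is ${repo.get_total(order_id):.2f}"

    # Typical unguided generation: the handler reaches into the repository's
    # private storage (a persistence detail) instead of using its interface.
    def send_receipt_leaky(order_id: int, repo: OrderRepository) -> str:
        total = repo._orders.get(order_id, 0.0)  # abstraction broken here
        return f"Your order total is ${total:.2f}"

    if __name__ == "__main__":
        repo = OrderRepository()
        print(send_receipt(101, repo))
        print(send_receipt_leaky(101, repo))

Both functions return the same string today, but the leaky one quietly couples the handler to how orders are stored, and that's the kind of thing the model won't flag unless you point at it.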

I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user, including for coding tasks, I was averaging maybe $20 a month.)
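
For a rough sense of scale on that $15, here's the back-of-the-envelope math; the per-million-token prices and the 80/20 input/output split are placeholder assumptions, not actual Gemini 2.5 Pro rates:

    # Back-of-the-envelope: how many tokens a budget buys under *assumed* prices.
    PRICE_IN_PER_M = 1.25    # $ per million input tokens (assumed, not quoted pricing)
    PRICE_OUT_PER_M = 10.00  # $ per million output tokens (assumed, not quoted pricing)

    def tokens_for_budget(budget_usd: float, input_share: float = 0.8) -> tuple[float, float]:
        """Split the budget between input and output spend, then convert to token counts."""
        in_spend = budget_usd * input_share
        out_spend = budget_usd * (1.0 - input_share)
        return in_spend / PRICE_IN_PER_M * 1e6, out_spend / PRICE_OUT_PER_M * 1e6

    if __name__ == "__main__":
        in_tok, out_tok = tokens_for_budget(15.0)
        print(f"~{in_tok / 1e6:.1f}M input and ~{out_tok / 1e6:.2f}M output tokens for $15")

Agentic tools burn through context fast because every step re-sends a big chunk of the repo, which is how an evening adds up to a month's worth of casual usage.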

2. candid+v7 2025-05-19 16:58:59
>>Scene_+V4
> I also ended up blowing through $15 of LLM tokens in a single evening.

This is a feature, not a bug. LLMs are going to be the next "OMG my AWS bill" phenomenon.

3. philku+uO 2025-05-19 20:37:36
>>candid+v7
I think models are going to commoditize, if they haven't already. The cost of switching between them is rather small, especially when you have good evals for what you want done.
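
As a rough sketch of what I mean by evals making the switch cheap (call_model below is just a stub standing in for whatever provider client you actually use, and the cases/grading are deliberately simplistic):

    from typing import Callable

    # A tiny eval set: prompt plus a crude pass/fail check.
    EVAL_CASES = [
        {"prompt": "Write a Python function that reverses a string.", "must_contain": "def "},
        {"prompt": "Return the word 'PONG' and nothing else.", "must_contain": "PONG"},
    ]

    def run_evals(call_model: Callable[[str], str]) -> float:
        """Return a model's pass rate on the eval cases, given a prompt -> completion function."""
        passed = sum(case["must_contain"] in call_model(case["prompt"]) for case in EVAL_CASES)
        return passed / len(EVAL_CASES)

    if __name__ == "__main__":
        # Stub "models" so the sketch runs without any provider SDK.
        model_a = lambda prompt: "def reverse(s): return s[::-1]" if "reverses" in prompt else "PONG"
        model_b = lambda prompt: "I cannot help with that."
        print("model_a pass rate:", run_evals(model_a))
        print("model_b pass rate:", run_evals(model_b))

Point the same eval set at a candidate model before switching and you know right away what you'd be giving up, which is most of what the switching cost comes down to.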

Also, there's no way to build a business in this space without providing value. Buyers are not that dumb.
