zlacker

[return to "GitHub Copilot Coding Agent"]
1. Scene_+V4 2025-05-19 16:47:11
>>net01+(OP)
I tried doing some vibe coding on a greenfield project (using gemini 2.5 pro + cline). On one hand - super impressive, a major productivity booster (even compared to using a non-integrated LLM chat interface).

I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example: I noticed them breaking abstractions (putting things where they don't belong). Unfortunately, they show little self-reflection on these points if you ask about the quality of the code or whether there are better ways of doing it. Of course, if you notice that something is in the wrong spot and prompt more specifically, they'll pick up on it immediately.
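(A minimal, hypothetical sketch of the kind of abstraction break being described here; the function names and schema are invented for illustration, not taken from the project in question.)

```python
import sqlite3

# Abstraction break: a presentation-layer helper that opens its own
# database connection, mixing persistence concerns into formatting code.
def format_user_badge(user_id: int) -> str:
    conn = sqlite3.connect("app.db")
    name = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()[0]
    conn.close()
    return f"[{name}]"

# Better layering: the helper only formats; data access stays in a
# separate persistence layer that passes the name in.
def format_badge(name: str) -> str:
    return f"[{name}]"
```

Both versions "work", which is why an LLM rarely flags the first one unless you point at the misplaced query directly.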

I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user including coding tasks, I was averaging maybe $20 a month.)

2. akmari+mC1 2025-05-20 03:58:49
>>Scene_+V4
> I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example is that I noticed them breaking abstractions

That doesn’t matter anymore when you’re vibe coding it. No human is going to look at it anyway.

It can all be if/else on one line in one file. If it works, and if the LLMs can work with it, iterate, and implement new business requirements while keeping performance and security, then code structure, quality and readability don't matter one bit.

Customers don't care about code quality, and the only reason businesses used to care was that it made building and shipping new things cheaper, so they could make more money.

3. theapp+cF1 2025-05-20 04:35:13
>>akmari+mC1
Wild take. Let's just hand over the keys to LLMs, I suppose; the fancy next-token predictor is the captain now.