zlacker

[parent] [thread] 7 comments
1. vector+(OP)[view] [source] 2025-12-02 21:36:53
I'm not the target of your question (I don't write Zig), but there is a spectrum of LLM usage for coding. It is possible to use LLMs extensively but almost never ship LLM-generated code, except for tiny trivial functions. One can use them for ideation, quick research, or prototypes/starting points, and then build on that. That is how I use them, anyway.

Culturally I see pure vibe coders as intersecting more with entrepreneurfluencer types who are non-technical but trying to extend their capabilities. Most technical folks I know are fairly disillusioned with pure vibe coding, but that's my corner of the world, YMMV

replies(3): >>dijit+D >>advent+b3 >>Aurorn+af
2. dijit+D[view] [source] 2025-12-02 21:40:19
>>vector+(OP)
fwiw, Copilot's license only explicitly permits using its suggestions in the way you describe.

That puts everyone who ships the generated outputs into a sort of unofficial grey market, even when using first-party tools. Which is weird.

replies(1): >>lupire+58
3. advent+b3[view] [source] 2025-12-02 21:52:20
>>vector+(OP)
I'll give you a basic example where vibe coding saved me a ton of time over doing it myself, and I believe it would hold true for anyone.

Creating ~50 different types of calculators in JavaScript. Gemini can bang out in seconds what would take me far longer (and it's reasonable at basic Tailwind-style front-end design to boot). A large amount of work smashed down to a couple of days of cumulative instruction + testing in my spare time. It takes far longer to think of how I want something to function in this example than it does for Gemini to successfully produce it. This is a use case where something like Gemini 3 is exceptionally capable, and its capability far exceeds what's needed to produce a decent outcome.

Do I want my next operating system vibe coded by Gemini 3? Of course not. Can it knock out front-end JavaScript tasks trivially? Yes, and far faster than any human could ever do it. Classic situation of using a tool for the things it's particularly well suited to.

Here's another one. An SM-24 geophone + Raspberry Pi 5 + ADC board. Hey Gemini / GPT, I need to build bin files from the raw voltage figures + timestamps, then, using Flask, I need a web viewer plus conversion of the geophone velocity figures to displacement and acceleration. Properly instructed, they'll create a highly functional version of that with some adjustments/iteration in 15-30 minutes. I basically had them recreate REW's RTA mode for my geophone velocity data, and there's no way a person could do it nearly as fast. It requires some checking and iteration, and that's assumed in the comparison.
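
For a sense of what the conversion step amounts to, here's a minimal Python sketch: scale ADC volts to velocity using the geophone sensitivity, integrate velocity for displacement, differentiate it for acceleration. It assumes evenly spaced samples; the function name and the default sensitivity value are illustrative, not taken from the setup above.

    import numpy as np

    def process_geophone(voltage, fs, sensitivity=28.8):
        """Turn raw SM-24 voltage samples into velocity, displacement,
        and acceleration traces.

        voltage     : 1-D array of ADC readings in volts
        fs          : sample rate in Hz
        sensitivity : geophone sensitivity in V/(m/s); 28.8 is a typical
                      SM-24 spec figure, used here only as a placeholder
        """
        voltage = np.asarray(voltage, dtype=float)
        dt = 1.0 / fs
        velocity = voltage / sensitivity          # volts -> m/s
        # displacement: cumulative trapezoidal integration of velocity
        displacement = np.concatenate(
            ([0.0], np.cumsum((velocity[1:] + velocity[:-1]) * dt / 2.0)))
        # acceleration: numerical derivative of velocity
        acceleration = np.gradient(velocity, dt)
        return velocity, displacement, acceleration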

replies(1): >>ohyout+Vv
4. lupire+58[view] [source] [discussion] 2025-12-02 22:17:32
>>dijit+D
Can you link to more info about this?
replies(1): >>theshr+BL1
5. Aurorn+af[view] [source] 2025-12-02 23:02:18
>>vector+(OP)
> Culturally I see pure vibe coders as intersecting more with entrepreneurfluencer types who are non-technical but trying to extend their capabilities. Most technical folks I know are fairly disillusioned with pure vibe coding, but that's my corner of the world, YMMV

Anyone who has spent time working with LLMs knows that LinkedIn-style vibecoding, where someone writes prompts and hits enter until they ship an app, doesn't work.

I've had some fun trying to coax different LLMs into writing usable small throwaway apps. The contrast between what an experienced developer sees coming out of LLMs and what the LinkedIn and Twitter influencers are saying is hilarious in a way. If you know what you're doing and you have enough patience, you really can get an LLM to do a lot of the things you want, but it can require a lot of handholding, rejecting bad ideas, and reviewing.

In my experience, the people pushing "vibecoding" content are influencers trying to ride the trend. They use it to gain more followers, sell courses, and get the attention of a class of investors desperate to deploy cash, as well as other groups who want to believe vibecoding is magic.

I also consider them a vocal minority, because I don't think they represent the majority of LLM users.

replies(1): >>theshr+rL1
6. ohyout+Vv[view] [source] [discussion] 2025-12-03 01:24:07
>>advent+b3
Yeah I had OpenAI crank out 100 different fizzbuzz implementations in a dozen seconds, and many of them worked! No chance a developer would have done it that fast, and for anyone who needs to crank out fizzbuzz implementations at scale this is the tool to beat. The haters don't know what they're talking about.
7. theshr+rL1[view] [source] [discussion] 2025-12-03 13:00:21
>>Aurorn+af
Working with agentic LLMs takes exactly the same skillset as directing junior programmers or offshore consultants.

You get a feel for how much direction they need after working with them for a while, and tooling and accessible documentation are really important for quality.

Then you give them a task and review the results. In (backend/systems) programming it's pretty binary whether a solution works or not; it's not a matter of taste but something you can validate with hard data.

I've done so many tiny/small/medium-sized utilities for myself in the last year that it's crazy[0]. A good bunch of them are 95-100% vibecoded, meaning I was just the "project manager" specifying what features I wanted and letting the agent(s) make it work.

I think I have a pretty good feel for the main agentic systems and what they can do in the context of my work, so I know what to tell them and how. Each has its own distinct way of working, and using the wrong one for the wrong job is either stupid, frustrating, or just a waste of time.

[0] https://indieweb.org/make_what_you_need

8. theshr+BL1[view] [source] [discussion] 2025-12-03 13:01:18
>>lupire+58
Might be related to the fact that AI-generated content has no copyright by law.

Nobody knows WHO has the copyright, but it's been decided in courts that the AI definitely doesn't own it.
