zlacker

[parent] [thread] 20 comments
1. josean+(OP)[view] [source] 2026-02-04 05:12:43
Except the thing does not work as expected, and it just makes you worse, not better.
replies(3): >>keyle+R >>beebma+X >>Crimso+Tg
2. keyle+R[view] [source] 2026-02-04 05:21:55
>>josean+(OP)
Like I said, that's temporary. It's janky and wonky, but it's a stepping stone.

Just look at image generation. Actually, factually, look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.

It's only a matter of time.

replies(2): >>leecom+81 >>mr_fre+mR
3. beebma+X[view] [source] 2026-02-04 05:23:14
>>josean+(OP)
Comments like these are why I hardly ever browse HN anymore.
replies(1): >>w4yai+T6
4. leecom+81[view] [source] [discussion] 2026-02-04 05:25:19
>>keyle+R
Why is image generation the same as code generation?
replies(2): >>dcw303+P4 >>rvz+6j
5. dcw303+P4[view] [source] [discussion] 2026-02-04 06:06:34
>>leecom+81
It's not. We were able to get rid of six-fingered hands by getting very specific and fine-tuning models with lots of hand and finger training data.

But that approach doesn't work with code, or with reasoning in general, because you would need to fine-tune on effectively everything in the universe. The illusion that the AI "understands" what it is doing is lost.

6. w4yai+T6[view] [source] [discussion] 2026-02-04 06:22:04
>>beebma+X
Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. It's a totally biased opinion, though; as human beings, we just have trouble giving up the things we like.
replies(2): >>roadbu+ak >>duskdo+An
7. Crimso+Tg[view] [source] 2026-02-04 07:53:58
>>josean+(OP)
That's your opinion, and you're free not to use those tools.

People are paying for it because it helps them. Who are you to whine about it?

replies(1): >>nunez+hh
8. nunez+hh[view] [source] [discussion] 2026-02-04 07:56:34
>>Crimso+Tg
But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.
replies(2): >>Crimso+hw >>mr_fre+yR
9. rvz+6j[view] [source] [discussion] 2026-02-04 08:09:22
>>leecom+81
It isn't.

Code generation in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:

1. You still can't trust that the code works (even if it has tests), so it needs thorough human supervision and ongoing maintenance.

2. Hence (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.

Image generation, by contrast, has close to no operational impact; it needs far less human supervision and can safely be done with none.

replies(1): >>raw_an+Xm1
10. roadbu+ak[view] [source] [discussion] 2026-02-04 08:18:59
>>w4yai+T6
> Whenever a new layer of abstraction is added

LLMs aren't a "layer of abstraction."

99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.

In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.

replies(1): >>w4yai+ss
11. duskdo+An[view] [source] [discussion] 2026-02-04 08:45:19
>>w4yai+T6
The difference is that LLM output is very nondeterministic.
replies(2): >>w4yai+Gs >>wtetzn+pA2
12. w4yai+ss[view] [source] [discussion] 2026-02-04 09:23:13
>>roadbu+ak
> unless you believe 99% of "software development" is about to be replaced with "vibe coding"

Probably not vibe coding, but most certainly with some AI automation

13. w4yai+Gs[view] [source] [discussion] 2026-02-04 09:24:26
>>duskdo+An
It depends. Temperature is a variable; if you really need determinism, you can configure an LLM for that. Non-determinism can be a good feature, though.
replies(1): >>duskdo+SW
14. Crimso+hw[view] [source] [discussion] 2026-02-04 09:51:03
>>nunez+hh
You don't like it? Find a place that doesn't enforce it. Can't find one? Then either build it, or accept that you want a horse carriage while everyone else wants a taxi.
replies(1): >>dudewh+Ah1
15. mr_fre+mR[view] [source] [discussion] 2026-02-04 12:31:11
>>keyle+R
> Just look at image generation. Actually, factually, look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.

Yes, but you’re not taking into account what actually caused this evolution. At first glance, it looks like exponential growth, but then we see OpenAI (as one example) with trillions in obligations compared to 12–13 billion in annual revenue. Meanwhile, tool prices keep rising, hardware demand is surging (RAM shortages, GPUs), and yet new and interesting models continue to appear. I’ve been experimenting with Claude over the past few days myself. Still, at some point, something is bound to backfire.

The AI "bubble" is real; you don't need a master's degree in economics to recognize it. But with mounting economic pressures worldwide and escalating geopolitical tension, we may end up stuck with nothing more than those amusing Will Smith eating pasta videos for a while.

16. mr_fre+yR[view] [source] [discussion] 2026-02-04 12:33:03
>>nunez+hh
That's Capitalism, baby
17. duskdo+SW[view] [source] [discussion] 2026-02-04 13:10:18
>>w4yai+Gs
How would you do that? If it's possible, it seems strange that someone hasn't done it already.
replies(1): >>w4yai+Fp3
18. dudewh+Ah1[view] [source] [discussion] 2026-02-04 15:04:53
>>Crimso+hw
How can you be so sure you're not replaceable? Every time you use the tools, you're handing over the data that will eventually let them come for you, too.
19. raw_an+Xm1[view] [source] [discussion] 2026-02-04 15:29:39
>>rvz+6j
This sounds like every system that I didn't write completely myself, and honestly some that I did.
20. wtetzn+pA2[view] [source] [discussion] 2026-02-04 21:00:20
>>duskdo+An
And because of that, we check in the generated code, not the high-level abstraction. So to understand your program, you have to read the output, not the input.
21. w4yai+Fp3[view] [source] [discussion] 2026-02-05 02:27:05
>>duskdo+SW
Totally possible, and we can already do it! Simply put, set the temperature to 0 and reuse the same seed. But it's just not what people really want, and providers are reluctant because such requests can cost up to 5x more to generate. It's also not 100% deterministic, because cloud providers don't all run the same hardware under the same conditions required to produce identical output. So in practice, not so good; but in theory, if you need it and can afford it, you can.
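The temperature/seed mechanism this comment describes can be sketched in a few lines. This is a toy sampler, not any provider's actual decoding code (and it ignores the hardware/floating-point nondeterminism the comment mentions): at temperature 0 sampling collapses to greedy argmax, and at any other temperature the output is reproducible only when the RNG seed is fixed.

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax (always the same choice);
    temperature > 0  -> sample from the temperature-scaled softmax,
                        reproducible only with a fixed seed.
    """
    if temperature == 0:
        # Greedy decoding: the highest-scoring token, every time.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)  # seeded RNG makes the draw repeatable
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [1.2, 3.4, 0.7]
# Temperature 0: deterministic regardless of seed.
assert sample_token(logits, temperature=0) == 1
# Same seed, same temperature -> same sample.
a = sample_token(logits, temperature=0.8, seed=42)
b = sample_token(logits, temperature=0.8, seed=42)
assert a == b
```

A real hosted model adds the complication the comment points at: even with temperature 0 and a fixed seed, different GPUs, batch sizes, and kernel versions can reorder floating-point operations and occasionally flip the argmax.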