zlacker
3 comments
1. catlif+(OP)
2026-02-07 00:45:02
Seems like we should fix the LLMs instead of bending over backwards, no?
replies(1):
>>redman+qh
2. redman+qh
2026-02-07 04:05:39
>>catlif+(OP)
They’re good at it because they’ve learned from the existing mountains of Python and JavaScript.
replies(2):
>>catlif+Mu
>>rienbd+ix
3. catlif+Mu
2026-02-07 07:34:43
>>redman+qh
I think the next big breakthrough will be cost-effective model specialization, maybe through modular models. The monolithic nature of today’s models is a major weakness.
4. rienbd+ix
2026-02-07 08:09:08
>>redman+qh
Plenty of Java in the training data too.