zlacker

[return to "My AI skeptic friends are all nuts"]
1. parado+4u 2025-06-03 00:29:20
>>tablet+(OP)
I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledged that the tools need an expert to use properly, and as he illustrated, he refined his expertise over many years. He belongs to the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?

I can almost anticipate an interjection along the lines of "well, we used to build everything with our hands and now we have tools, etc., it's just different" — except this is an order of magnitude different. It's like asking a robot to design and assemble a shed for you: you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
2. Aurorn+fw 2025-06-03 00:51:07
>>parado+4u
> I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere

Some of the arguments in the article are so bizarre that I can’t believe they’re anything other than engagement bait.

Claiming that IP rights shouldn’t matter because some developers pirate TV shows? Blaming LLM hallucinations on the programming language?

I agree with the general sentiment of the article, but it feels like the author decided to go full ragebait/engagement-bait mode instead of trying to have a real discussion. It’s weird to see this kind of language on a company blog.

I think he knows that he’s ignoring the more complex and nuanced debates about LLMs because that’s not what the article is about. It’s written in an inflammatory style that sets up straw-man talking points and then sort of knocks them down, while giving weird excuses for why certain arguments should be ignored.

3. phkahl+vx 2025-06-03 01:02:47
>>Aurorn+fw
>> Blaming LLM hallucinations on the programming language?

My favorite was suggesting that people select a programming language based on which ones LLMs are best at. People who need an LLM to write code might do that, but no experienced developer would. There are too many other legitimate considerations.

4. mediam+tB 2025-06-03 01:40:57
>>phkahl+vx
If an LLM improves coding productivity, and it is better at one language than another, then at the margin it will affect which language you may choose.

"At the margin" means that both languages, or frameworks or whatever, are reasonably appropriate for the task at hand. If you are writing firmware for a robot, the LLM will be less helpful, and a language such as Python or JS, which the LLM is good at, is useless anyway.

But Thomas's point is that arguing that LLMs are not useful for all languages is not the same as saying they are not useful for any language.

If you believe that LLM competencies are not actually becoming drivers in what web frameworks people are using, for example, you need to open your eyes and recognize what is happening instead of what you think should be happening.

(I write this as someone who prefers SvelteJS over React - but LLMs' React output is much better. This has become kind of an issue over the last few years.)

5. rapind+SC 2025-06-03 01:52:36
>>mediam+tB
I'm a little (not a lot) concerned that this will accelerate the adoption of languages and frameworks based purely on their popularity, and bury interesting new abstractions and approaches from lesser-known languages and frameworks.

Taking your React example: if we were a couple of years further ahead on LLMs, jQuery might now be the preferred tool due to AI adoption through consumption.

You can apply this to other fields too. It's quite possible that AIs will make movies, but the only reliably well-produced ones will be superhero movies... (I'm exaggerating for effect.)

Could AI be the next Cavendish banana? I'm probably being a bit silly though...

6. simonc+201 2025-06-03 06:16:04
>>rapind+SC
> I'm a little ... concerned that this will accelerate the adoption of languages and frameworks based on their popularity and bury away interesting new abstractions and approaches...

I'd argue that the Web development world has been choosing tooling based largely on popularity for like at least a decade now. I can't see how tooling selection could possibly get any worse for that section of the profession.

7. rapind+sy1 2025-06-03 12:01:42
>>simonc+201
I disagree. There’s a ton of diversity in web development currently. I don’t think there’s ever been so many language and framework choices to build a web app.

The argument is that we lose this diversity as more people rely on AI and choose what AI prefers.

8. jhatem+d03 2025-06-03 20:53:08
>>rapind+sy1
You raise a valid concern, but you presume that we will stay under the OpenAI/Anthropic/etc. oligopoly forever. I don't think this is going to be the status quo in the long term. There is demand for different types of LLMs trained on different data, and there is demand for hardware. For example, the new Mac Studio can be configured with 512GB of unified memory, which can run the ~670B param Deepseek model locally. So in the future I could see people training their own LLMs to be experts at their language/framework of choice.

Of course, you could disagree with my prediction and think that these big tech companies are going to build MASSIVE gpu farms the size of the Tesla Gigafactory which can run godlike AI that nobody can compete with, but if we get to that point I feel like we will have bigger problems than "AI React code is better than AI SolidJS code".

9. rapind+gm3 2025-06-03 23:49:19
>>jhatem+d03
I suspect we’ll plateau at some point and the gigafactories won’t produce a massive advantage. So running your own models could very well be a thing.
10. jhatem+cI3 2025-06-04 04:52:42
>>rapind+gm3
Yeah, probably... I wonder when the plateau is. Is it right around the corner or 10 years from now? It seems like they can just keep growing it forever, based on what Sam Altman is saying. I'm botching the quote, but either he or George Hotz said something to the effect of: every time you add an order of magnitude to the size of the data, there is a noticeable qualitative difference in the output. But maybe past a certain size you get diminishing returns. Or maybe it's like Moore's Law, where they thought it would just go on forever, but it turned out to be extremely difficult to shrink the spacing between transistors below what the 7nm-class processes achieve.