zlacker

Thread: "My AI skeptic friends are all nuts"
1. parado+4u 2025-06-03 00:29:20
>>tablet+(OP)
I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledges that the tools need an expert to use them properly, and, as he illustrates, he refined his expertise over many years. He is of the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?

I can almost anticipate an interjection along the lines of "well, we used to build everything with our hands and now we have tools, it's just different," but this is an order of magnitude different. This is asking a robot to design and assemble a shed for you: you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.

2. Aurorn+fw 2025-06-03 00:51:07
>>parado+4u
> I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere

Some of the arguments in the article are so bizarre that I can’t believe they’re anything other than engagement bait.

Claiming that IP rights shouldn’t matter because some developers pirate TV shows? Blaming LLM hallucinations on the programming language?

I agree with the general sentiment of the article, but it feels like the author decided to go full ragebait/engagement-bait mode instead of trying to have a real discussion. It's weird to see this language on a company blog.

I think he knows that he's ignoring the more complex and nuanced debates about LLMs because that's not what the article is about. It's written in an inflammatory style that sets up straw-man talking points and then sort of knocks them down while giving weird excuses for why certain arguments should be ignored.

3. tptace+sx 2025-06-03 01:02:32
>>Aurorn+fw
They are not engagement bait. That argument, in particular, survived multiple rounds of review with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere and, I would say for myself, earned take on this.

A lot of people are misunderstanding the goal of the post, which is not necessarily to persuade them, but rather to disrupt a static, unproductive equilibrium of uninformed arguments about how this stuff works. The commentary I've read today has, to my mind, vindicated that premise.

4. cyral+CB 2025-06-03 01:42:00
>>tptace+sx
It's a good post and I strongly agree with the part about level-setting. You see the same tired arguments basically every day here and in subreddits like /r/ExperiencedDevs. I read a few today, and my favorites are:

- It cannot write tests because it doesn't understand intent

- Actually it can write them, but they are "worthless"

- It's just predicting the next token, so it has no way of writing code well

- It tries to guess what code means and will be wrong

- It can't write anything novel because it can only write things it's seen

- It's faster to do all of the above by hand

I'm not sure if it's the issue where they tried Copilot with GPT-3.5 or something, but anyone who uses Cursor daily knows all of the above is false; I make it do these things every day and it works great. There was another comment I saw here or on Reddit about how everyone needs to spend a day with Cursor and get good at understanding how prompting + context works. That is a big ask, but I think the savings are worth it once you get the hang of it.

5. tptace+PB 2025-06-03 01:44:33
>>cyral+CB
Yes. It's this "next token" stuff that is a total tell we're not all having the same conversation, because what serious LLM-driven developers are doing differently today than they were a year ago has little to do with the evolution of the SOTA models themselves. If you get what's going on, the "next token" thing has nothing at all to do with this. It's not about the model, it's about the agent.
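
To make the distinction concrete, here is a minimal sketch of what an agent loop looks like, as opposed to a bare model call. Everything in it is a hypothetical stand-in for illustration (the call_llm function and the RUN:/DONE: convention are made up, not any particular tool's actual API):

    import subprocess

    def call_llm(transcript: str) -> str:
        # Hypothetical stand-in for a real model API call. Assume it
        # returns either "RUN: <shell command>" or "DONE: <final answer>".
        raise NotImplementedError("wire up a real model client here")

    def agent(task: str, max_steps: int = 10) -> str:
        # The loop is the point: the model proposes an action, a tool
        # executes it, and the observed output becomes new context.
        transcript = f"Task: {task}\n"
        for _ in range(max_steps):
            reply = call_llm(transcript)
            if reply.startswith("DONE: "):
                return reply[len("DONE: "):]
            if reply.startswith("RUN: "):
                cmd = reply[len("RUN: "):]
                result = subprocess.run(
                    cmd, shell=True, capture_output=True, text=True
                )
                # Compiler errors, failing tests, grep output: all of it
                # is fed back for the next model call to react to.
                transcript += f"\n$ {cmd}\n{result.stdout}{result.stderr}"
        return "out of steps"

The model inside the loop is still "just predicting the next token"; what changes is that its predictions get checked against real tool output on every iteration.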