zlacker

[return to "My AI skeptic friends are all nuts"]
1. metall+D1[view] [source] 2025-06-02 21:22:20
>>tablet+(OP)
Can someone explain to me what this means?

> People coding with LLMs today use agents. Agents get to poke around your codebase on their own. They author files directly. They run tools. They compile code, run tests, and iterate on the results. ...

Is this what people are really doing? Who is just turning AI loose to modify things as it sees fit? If I'm not directing the work, how does it even know what to do?

I've been subjected to forced LLM integration from management, and there are no "Agents" anywhere that I've seen.

Is anyone here doing this who can explain it?

◧◩
2. willia+Z2[view] [source] 2025-06-02 21:28:48
>>metall+D1
I run Cursor in a mode that starts up shell processes, runs linters and tests on its own, updates multiple files, runs the linter and tests again, fixes failures, and so on. It automatically stops after 20 iterations of the feedback loop.

Depending on the task it works really well.
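
Roughly, the loop looks something like this (a sketch, not Cursor's actual code; call_llm, apply_edits, and the lint/test commands are made-up placeholders):

    import subprocess

    MAX_ITERATIONS = 20  # cap on the feedback loop, like the one described above

    def call_llm(messages):
        # Placeholder: send the conversation so far to a model API and
        # get back a set of proposed file edits. Not a real API.
        raise NotImplementedError

    def apply_edits(edits):
        # Placeholder: write the proposed edits to the working tree.
        raise NotImplementedError

    def run(cmd):
        # Run a shell command and capture its output so it can be fed back to the model.
        p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return p.returncode, p.stdout + p.stderr

    def agent_loop(task, lint_cmd="npm run lint", test_cmd="npm test"):
        # lint_cmd/test_cmd are example commands; use whatever your project runs.
        messages = [task]
        for i in range(MAX_ITERATIONS):
            apply_edits(call_llm(messages))
            lint_rc, lint_out = run(lint_cmd)
            test_rc, test_out = run(test_cmd)
            if lint_rc == 0 and test_rc == 0:
                return f"clean after {i + 1} iteration(s)"
            # Feed the failures back so the next call can try to fix them.
            messages.append(lint_out + "\n" + test_out)
        return "stopped at the iteration cap"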

◧◩◪
3. metall+K4[view] [source] 2025-06-02 21:38:19
>>willia+Z2
This example seems to keep coming up. Why do you need an AI to run linters? I have found that linters add very little value for an experienced programmer, and actually get in the way when I'm in the middle of active development. I have to say I'm having a hard time visualizing the amazing revolution the author alludes to.
◧◩◪◨
4. willia+86[view] [source] 2025-06-02 21:46:02
>>metall+K4
Linters catch static errors before the test suite catches runtime errors. When you put an LLM in a feedback loop, otherwise known as an agent, each successive call to the LLM includes the output of the linter and test runs, which reassures the user, who typically follows along with the entire process, that the agent is writing better code than it otherwise would.
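
Concretely, one round trip might carry something like this (hypothetical message shapes and tool output, not any particular vendor's API):

    # Illustrative conversation state for one iteration of the loop.
    messages = [
        {"role": "user", "content": "Add pagination to the /orders endpoint."},
        {"role": "assistant", "content": "<proposed edits to orders.py>"},
        {"role": "tool", "name": "linter",
         "content": "orders.py:42: F821 undefined name 'page_size'"},
        {"role": "tool", "name": "tests",
         "content": "FAILED test_orders.py::test_pagination - NameError"},
        # The next assistant turn sees the lint and test output above
        # and can correct the undefined name before iterating again.
    ]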