zlacker

[return to "My AI skeptic friends are all nuts"]
1. capnre+15 2025-06-02 21:39:49
>>tablet+(OP)
The argument seems to be that for an expert programmer who is capable of reading and understanding AI agent code output and merging it into a codebase, AI agents are great.

Question: If everyone uses AI to code, how does someone become an expert capable of carefully reading and understanding code and acting as an editor to an AI?

The expert skills needed to be an editor -- reading code, understanding its implications, knowing what approaches are likely to cause problems, recognizing patterns that can be refactored, knowing where likely problems lie and how to test them, holding a complex codebase in memory and knowing where to find things -- currently come from long experience writing code.

But a novice who outsources their thinking to an LLM or an agent (or both) will never develop those skills on their own. So where will the experts come from?

I think about this because of my job as a professor; many of the homework assignments we use to develop thinking skills are now obsolete because LLMs can do them, permitting students to pass without thinking. Perhaps there is another way to develop the skills, but I don't know what it is, and in the meantime I'm not sure how novices will learn to become experts.

2. gwbas1+c9 2025-06-02 22:05:05
>>capnre+15
> Question: If everyone uses AI to code, how does someone become an expert capable of carefully reading and understanding code and acting as an editor to an AI?

Well, if everyone uses a calculator, how do we learn math?

Basically, force students to do it by hand long enough that they understand the essentials. Introduce LLMs at a point similar to when you allow students to use a calculator.

3. mmasu+4e 2025-06-02 22:35:00
>>gwbas1+c9
While I agree with your suggestion, the comparison does not hold: a calculator does not tell you which numbers to input or what to compute. With an LLM you can just ask vaguely and get an often passable result.
4. gwbas1+0r4 2025-06-04 12:40:48
>>mmasu+4e
Then figure out how to structure the assignment to make students show their work. If a student doesn't understand the concept, it will show in how they prompt AI.

For example, you could require that students submit all logs of AI conversations, and show all changes they made to the code produced.

For instance, yesterday I asked ChatGPT how to add a copy-to-clipboard button in MudBlazor. It told me the button didn't exist, and then wrote the component for me. That saved me a bunch of research, but I needed to refactor the code for various reasons.

So, if this were an assignment, I could turn in both my log from ChatGPT and the changes I made to the code it provided.
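
For reference, a component like that can be only a few lines. Here's a rough sketch of the shape it can take (my own illustration, not ChatGPT's exact output; the Text parameter and the JS interop call are just one way to wire it up):

    @* Assumes MudBlazor and Microsoft.JSInterop are available via _Imports.razor *@
    @inject IJSRuntime JS

    <MudIconButton Icon="@Icons.Material.Filled.ContentCopy"
                   Size="Size.Small"
                   OnClick="CopyToClipboard" />

    @code {
        // The text this button copies; a parameter name chosen for this sketch.
        [Parameter] public string Text { get; set; } = string.Empty;

        private async Task CopyToClipboard()
        {
            // Blazor has no built-in clipboard access, so call the standard
            // browser API navigator.clipboard.writeText through JS interop.
            await JS.InvokeVoidAsync("navigator.clipboard.writeText", Text);
        }
    }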
