zlacker

Writing with LLM is not a shame

submitted by flornt+(OP) on 2025-08-24 10:10:29 | 107 points 145 comments
[view article] [source]

11. latexr+N3[view] [source] 2025-08-24 10:59:16
>>flornt+(OP)
> One argument to not disclaim it: people do not disclaim if they Photoshop a picture after publishing it and we are surrounded by a lot of edited pictures.

That is both a false equivalence and a form of whataboutism.

https://en.wikipedia.org/wiki/False_equivalence

https://en.wikipedia.org/wiki/Whataboutism

It is a poor argument in general, and a sure-fire way to increase shittiness in the world: “Well, everyone else is doing this wrong thing, so I can too”. No. Whenever you mention the status quo as an excuse to justify your own behaviour, you should look inward and reflect on your actions. Do you really believe what you’re doing is the right thing? If it is, fine; but if it is not, either don’t mention it or (ideally) do something about it.

> why don’t we see people mentioning they used specific tools to proofread before AI apparition?

Whenever I see this argument, I have a hard time believing it is made in good faith. Can you truly not see the difference between using a tool to fix mistakes in your work and having a tool do the work for you?

> It feels like an obligation we have to respect in a way.

This was obvious from the beginning of the post. Throughout, I never got the feeling you were struggling with the question intrinsically, for yourself, but always in the sense of how others would judge your actions. You quote opinion after opinion, and it felt like you were in search of absolution, not truth, for something you had already decided you did not want to do.

32. latexr+Y6[view] [source] [discussion] 2025-08-24 11:29:40
>>jascha+o6
> Fact is that I maybe saw it in 10% of blogs and news articles before Chatgpt.

I believe you. But also be aware of the frequency illusion: once someone points out a phrase as an LLM signal, you start seeing it everywhere.

https://en.wikipedia.org/wiki/Frequency_illusion

> Yes it's not a guarantee but it is at least a very good signal that something was at least partially LLM written.

Which is perfectly congruent with what I said with emphasis:

> it is never sufficient on its own to identify LLM use

I have no quarrel with using it as one signal. My beef is when it’s used as the principal or sole signal.

56. kosola+US[view] [source] 2025-08-24 18:13:12
>>flornt+(OP)
A relevant satirical post I stumbled on today, on much the same subject: https://medium.com/@Justwritet/stop-competing-with-the-machi...
73. godels+L91[view] [source] 2025-08-24 20:18:58
>>flornt+(OP)
I'm an AI critic, but I use AI every day. In fact, I am an AI researcher and work on making models more capable and powerful (probably where a lot of my criticism stems from).

My main problem with AI usage is that people use it and turn their brains off. This isn't a new problem, but it is at a new scale. People mindlessly punch numbers into a formula, run software they don't understand, or read a summary of a complex topic and declare mastery. The problem is sloppiness and our human tendency to be lazy: spending the least energy in the moment, not the least energy over time. That's the critical distinction. Slop is momentary laziness, while thoughtfulness is amortized laziness.

The problem is, in a way, not the AI but us and the cultures we have created. At the end of the day no one cares whether you wrote the code (or docs or whatever) with AI; they care about how well it was done. You want to do things fast, but speed is nothing if the quality suffers.

I really like how Mitchell put it in this Ghostty PR[0,1]. The disclosure helps people know what to pay more attention to. It is a declaration of where you were lazy, lacked expertise, or took a shortcut. It tells us what the actual problem is: slop isn't always obvious.

A little slop generally doesn't do much harm (unless it grows and compounds), but a lot of slop does. If you are concerned about slop and the rate of slop is increasing, then you must treat everything as potential slop. Because slop isn't easily recognized, the effort of checking balloons. So by producing AI slop (or any kind of slop) you aren't decreasing the workload, you're outsourcing it to someone else, and that outsourcing often adds costs of its own. It only creates the illusion of productivity.

It's not about the AI; it's about shoving your work onto others. It doesn't matter whether you use a shovel or a bulldozer, but people are sure going to be louder (or cross the threshold where they'll actually speak up) once you start using a bulldozer to offload your work onto them. The problem is that it forces everyone else into System 2 thinking all the time, and that is absolutely exhausting.

[0] https://github.com/ghostty-org/ghostty/pull/8289

[1] >>44976568
