My workflow right now is to use AI for the rough-draft and developmental-editing stages, then switch it from changing files to leaving comments on files suggesting changes. It is slower than letting it line/copyedit itself, but models derp up too much, so letting them handle edits at this stage tends to be two steps forward, two steps back.
I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me. Another colleague organised a quiz where the answers were hallucinated by Grok. In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation. I use LLMs almost daily, but this is all incredibly depressing. The only time I want to interact with an LLM is when I choose to, not when it's forced on me without my consent or at least a disclaimer.
> In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation
I get the feeling these AI tools will just further the alienation of society... I honestly would rather this than my colleague sending me text that is obviously from ChatGPT without stating it up front. Or even the "I asked ChatGPT and it said this..." along with ten pasted paragraphs of stuff they didn't even read to confirm it could be relevant.
I find this kind of thing interesting anywhere someone is being paid more than minimum wage: a really good way to make your boss think they can replace you with ChatGPT is to perform your job at ChatGPT's level. I do give them points for not trying to hide it, but it seems shortsighted not to consider that each time you do that, you're raising the question of why they shouldn't cut out the middleman.
They aren't reliable at anything, I guess, but for checking my English I have nothing else, and they are better than nothing. I do wish they would use a more effective way of highlighting their suggested changes, such as italics for new text and strikeout for deleted text.
Unless you are paid by the word, I struggle to think of why you would use an AI to create new text. The facts will be wrong, and the tone won't be yours. "If I had more time, this would be shorter" is a truism here: AI can spit out an enormous amount of text in a very short time, text that could be cut down to a fraction of the size with a bit of effort.
As I said, that doesn't matter if you are being paid by the word. If the goal is to be paid, who cares where the words come from, as long as the reader laps it up. But if you enjoy writing for its own sake, and if you are doing it because you've found that the discipline involved in writing something down in a way others will understand is an excellent way to sharpen your own understanding, then the less an AI is involved the better. Sadly, I need a proofreader; an AI does an acceptable job, and they are free, for now.