zlacker

[return to "Performance and telemetry analysis of Trae IDE, ByteDance's VSCode fork"]
1. circui+s4[view] [source] 2025-07-27 18:36:18
>>segfau+(OP)
Is it just me, or does the formatting of this feel like ChatGPT (numbered lists, "Key Takeaways", and just the general phrasing of things)? It's not necessarily an issue if you checked it over properly, but if you did use it, it might be good to mention that for transparency, because people can tell anyway and it might feel slightly dishonest otherwise

(or maybe you just have a similar writing style)

◧◩
2. markso+Q4[view] [source] 2025-07-27 18:39:31
>>circui+s4
> might be good to mention that for transparency, because people can tell anyway and it might feel slightly dishonest otherwise

Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?

◧◩◪
3. pessim+xa[view] [source] 2025-07-27 19:21:45
>>markso+Q4
> As long as the conclusions are sound

I can't decide to read something on the grounds that its conclusions are sound; I have to read the entire thing to find out whether they are. What's more, if it's an LLM, it's going to try its gradient-following best to make unsound reasoning seem sound, so I'd have to be an expert to tell that it's a moron.

I can't put that kind of work into every piece of worthless slop on the internet. If an LLM says something interesting, I'm sure a human will tell me about it.

The reason people are smelling LLMs everywhere is that LLM output is low-signal and high-effort to verify. The disappointment one feels when a model starts going off the rails is conditioning people to detect, and be repulsed by, even the slightest whiff of a robotic word choice.

edit: I feel like we discovered the direction in which AGI lies, but we don't have the math to make it converge, so every AI we make goes completely insane after being asked three to five questions. So we've created architectures where models keep copious notes about what they're doing, and we carefully watch them to see if they've gone insane yet. When they inevitably do, we quickly kill them, create a new one from scratch, and feed it the notes the old one left. AI slop reads like a dozen cycles of that: a group effort, written by a series of new hires, each silently killed after a single interaction with the work.
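
edit 2: to make that loop concrete, here's a minimal runnable sketch; everything in it (Model, looks_insane, the turn budget) is a made-up stand-in for illustration, not any real agent framework's API.

    SANE_TURNS = 4  # pretend every instance derails after a few questions

    class Model:
        """A fresh 'hire' that starts by reading its predecessor's notes."""
        def __init__(self, notes):
            self.notes = list(notes)
            self.turns = 0

        def ask(self, question):
            self.turns += 1
            if self.turns > SANE_TURNS:
                return "the bees the bees the bees"  # it has gone insane
            answer = "answer to " + question
            self.notes.append(question + " -> " + answer)  # copious notes
            return answer

    def looks_insane(answer):
        # The watchdog: a crude derailment check on each output.
        return "the bees" in answer

    def run(questions):
        model = Model([])
        answers = []
        for q in questions:
            a = model.ask(q)
            if looks_insane(a):
                model = Model(model.notes)  # kill it; successor inherits the notes
                a = model.ask(q)            # successor retries the question
            answers.append(a)
        return answers

    print(run(["q" + str(i) for i in range(10)]))

Every restart loses whatever the last instance didn't write down, which is why the output reads like a relay race.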

◧◩◪◨
4. farmer+p01[view] [source] 2025-07-28 03:44:22
>>pessim+xa
I want this to be the plot of Blade Runner: Deckard must hunt down errant replicants before they go completely insane due to context limits