zlacker

[return to "My AI skeptic friends are all nuts"]
1. matthe+y41[view] [source] 2025-06-03 06:58:13
>>tablet+(OP)
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.

I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.

Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.

It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.

But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.

I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
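
To give a flavour of what I mean by "documenting intent": the notes live right next to the code and record the why, not the what. A made-up sketch (the function, names, and exact format here are invented for illustration, not literally what Claude writes for me):

    # Hypothetical example: an intent note kept alongside the code it explains.
    from email.utils import parsedate_to_datetime
    from datetime import datetime, timezone

    def parse_retry_after(value: str) -> float:
        """INTENT: upstream sometimes sends Retry-After as an HTTP-date
        rather than seconds; callers should always get plain seconds back
        and never have to care which form arrived."""
        try:
            return float(value)
        except ValueError:
            delta = parsedate_to_datetime(value) - datetime.now(timezone.utc)
            return max(0.0, delta.total_seconds())

When the model revisits that file months later, a note like that does more for it (and for me) than the code itself.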

What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.

Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.

What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.

[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...

◧◩
2. throw3+G51[view] [source] 2025-06-03 07:12:08
>>matthe+y41
> Did Photoshop kill graphic artists?

No, but AI did.

◧◩◪
3. tptace+m61[view] [source] 2025-06-03 07:19:54
>>throw3+G51
This, as the article makes clear, is a concern I am alert and receptive to. Ban production of anything visual from an LLM; I'll vote for it. Just make sure they can still generate Mermaid charts and Graphviz diagrams, so they still apply to developers.
◧◩◪◨
4. hatefu+W61[view] [source] 2025-06-03 07:25:44
>>tptace+m61
What is unique about graphic design that warrants such extraordinary care? Should we just ban technology that approaches "replacement" territory? What about the people, real or imagined, that earn a living making Graphviz diagrams?
◧◩◪◨⬒
5. omnimu+6f1[view] [source] 2025-06-03 08:49:30
>>hatefu+W61
It’s more a question of how it does what it does: by building a statistical model out of the work of the very humans it now aims to replace.

I think graphic designers would be a lot less angry if AIs were trained on licensed work… that’s how the system worked up until now, after all.

◧◩◪◨⬒⬓
6. fennec+Eh1[view] [source] 2025-06-03 09:21:52
>>omnimu+6f1
I don't think most artists would be any less angry & scared if AI was trained on licensed work. The rhetoric would just shift from mostly "they're breaching copyright!" to more of the "machine art is soulless and lacks true human creativity!" line.

I have a lot of artist friends but I still appreciate that diffusion models are (and will be with further refinement) incredibly useful tools.

What we're seeing is just the commoditisation of an industry, in the same way we've seen many, many times before through the industrial era and since.

◧◩◪◨⬒⬓⬔
7. omnimu+6R1[view] [source] 2025-06-03 13:54:01
>>fennec+Eh1
It actually doesn't matter how they would feel. Under the currently accepted copyright framework, if the works were licensed they couldn't do much about it. But right now they can be upset, because suddenly the new normal is massive copyright violation. It's very clear that without the massive amount of unlicensed work, the LLMs simply wouldn't work well. The AI industry is just trying to run with it, hoping nobody will notice.
◧◩◪◨⬒⬓⬔⧯
8. Amezar+9e2[view] [source] 2025-06-03 16:14:49
>>omnimu+6R1
It isn’t clear at all that there’s any infringement going on, except in cases where AI output reproduces copyrighted content, or content sufficiently close to it to constitute a derivative work. For example, if you told an LLM to write a Harry Potter fanfic, that would be infringement: fanfics are actually infringing derivative works that usually get a pass because nobody wants to sue their fanbase.

It’s very unlikely that simply training an LLM on “unlicensed” work constitutes infringement. It could be that the model itself, when published, would represent a derivative work, but it’s unlikely that most output would be, unless specifically prompted to be.

◧◩◪◨⬒⬓⬔⧯▣
9. nogrid+zp2[view] [source] 2025-06-03 17:17:55
>>Amezar+9e2
I'm interpreting what you described as a derivative work to be something like:

"Create a video of a girl running through a field in the style of Studio Ghibli."

There, someone has specifically prompted the AI to create something visually similar to X.

But would you still consider it a derivative work if you replaced the words "Studio Ghibli" with a few sentences describing their style that ultimately produces the same output?

◧◩◪◨⬒⬓⬔⧯▣▦
10. Amezar+zu3[view] [source] 2025-06-04 01:41:36
>>nogrid+zp2
Derivative work is a legal term. Art styles cannot be copyrighted.