zlacker

[return to "My AI skeptic friends are all nuts"]
1. matthe+y41[view] [source] 2025-06-03 06:58:13
>>tablet+(OP)
I think this article is pretty spot on — it articulates something I’ve come to appreciate about LLM-assisted coding over the past few months.

I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.

Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.

It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt”: it simply doesn’t do what most (possibly uninformed) people think it does.

But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.

I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
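To make the “document intent as we go” idea concrete, here’s a hypothetical sketch of the kind of convention I mean (the function and its intent note are invented for illustration, not taken from any real codebase):

```python
# Hypothetical example: each function carries a short "Intent" note in its
# docstring explaining *why* it exists, not just what it does. That context
# helps future human readers, and gives the model something to anchor on
# when it revisits the code later.

def normalise_username(raw: str) -> str:
    """Lowercase and strip a username.

    Intent: user lookups were failing when sign-ups contained stray
    whitespace or mixed case; normalising once at the boundary keeps
    every downstream comparison simple.
    """
    return raw.strip().lower()
```

The point isn’t the convention itself; it’s that the intent gets written down at the moment it’s obvious, rather than reconstructed later.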

What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.

Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.

What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.

[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...

◧◩
2. throw3+G51[view] [source] 2025-06-03 07:12:08
>>matthe+y41
> Did Photoshop kill graphic artists?

No, but AI did.

◧◩◪
3. tptace+m61[view] [source] 2025-06-03 07:19:54
>>throw3+G51
This, as the article makes clear, is a concern I am alert and receptive to. Ban production of anything visual from an LLM; I'll vote for it. Just make sure they can still generate Mermaid charts and Graphviz diagrams, so they're still useful to developers.
◧◩◪◨
4. hatefu+W61[view] [source] 2025-06-03 07:25:44
>>tptace+m61
What is unique about graphic design that warrants such extraordinary care? Should we just ban technology that approaches "replacement" territory? What about the people, real or imagined, that earn a living making Graphviz diagrams?
◧◩◪◨⬒
5. omnimu+6f1[view] [source] 2025-06-03 08:49:30
>>hatefu+W61
It’s more a question of how it does what it does: by building a statistical model out of the work of the very humans it now aims to replace.

I think graphic designers would be a lot less angry if AIs were trained on licensed work… that’s how the system worked up until now, after all.

◧◩◪◨⬒⬓
6. fennec+Eh1[view] [source] 2025-06-03 09:21:52
>>omnimu+6f1
I don't think most artists would be any less angry & scared if AI were trained on licensed work. The rhetoric would just shift from mostly "they're breaching copyright!" to more of the "machine art is soulless and lacks true human creativity!" line.

I have a lot of artist friends but I still appreciate that diffusion models are (and will be with further refinement) incredibly useful tools.

What we're seeing is just the commoditisation of an industry in the same way that we have many, many times before through the industrial era, etc.

◧◩◪◨⬒⬓⬔
7. omnimu+6R1[view] [source] 2025-06-03 13:54:01
>>fennec+Eh1
It actually doesn't matter how they would feel. Under the currently accepted copyright framework, if the works were licensed they couldn't do much about it. But right now they can be upset, because suddenly the new normal is massive copyright violation. It's very clear that without the massive amount of unlicensed work, LLMs simply wouldn't work well. The AI industry is just trying to run with it, hoping nobody notices.
◧◩◪◨⬒⬓⬔⧯
8. Amezar+9e2[view] [source] 2025-06-03 16:14:49
>>omnimu+6R1
It isn’t clear at all that there’s any infringement going on, except in cases where AI output reproduces copyrighted content, or content sufficiently close to it to constitute a derivative work. For example, if you told an LLM to write a Harry Potter fanfic, that would be infringement - fanfics are actually infringing derivative works that usually get a pass because nobody wants to sue their fanbase.

It’s very unlikely that simply training an LLM on “unlicensed” work constitutes infringement. It could possibly be that the model itself, when published, would represent a derivative work, but it’s unlikely that most output would be, unless specifically prompted to be.

◧◩◪◨⬒⬓⬔⧯▣
9. omnimu+A13[view] [source] 2025-06-03 21:02:11
>>Amezar+9e2
I am not sure why you would think so. AFAIK we will see more of what courts think later in 2025, but judging from what was ruled in Delaware in Feb... it is actually very likely that LLMs' use of material is not "fair use", because besides how transformed the work is, one important part of "fair use" is that the output does not compete with the initial work. LLMs not only compete... they are specifically sold as a replacement for the work they were trained on.

This is why the lobby is now pushing governments not to allow any regulation of AI, even if courts disagree.

IMHO what will happen anyway is that at some point the companies will "solve" the licensing by training models purely on older synthetic LLM output that will be "public research" (which of course will still carry the "human" weights, but they will claim that doesn't matter).

◧◩◪◨⬒⬓⬔⧯▣▦
10. Amezar+ou3[view] [source] 2025-06-04 01:39:30
>>omnimu+A13
What you are describing is the output of the LLM, not the model. Can you link to the case where a model itself was determined to be infringing?

It’s important to note that copyright applies to copying/publishing/distributing - you can do whatever you like to copyrighted works by yourself.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. omnimu+mn4[view] [source] 2025-06-04 12:14:37
>>Amezar+ou3
I don't follow. The artists are obviously complaining about the output that LLMs create. If you create an LLM and don't use it, then yeah, nobody would have a problem with it, because nobody would know about it…