zlacker

[return to "Two kinds of AI users are emerging"]
1. danpal+vd[view] [source] 2026-02-02 01:44:03
>>martin+(OP)
I've noticed a huge gap between AI use on greenfield projects and brownfield projects. On the first day of a greenfield project I can accomplish a week of work. By the second day I'm accomplishing a few days of work. By the end of the first week I'm down to a 20% productivity gain.

I think AI is just allowing everyone to speed-run the innovator's dilemma. Anyone can create a small version of anything, while big orgs will struggle to move as quickly as before.

The interesting bit is going to be whether we see AI being used in maturing those small systems into big complex ones that account for the edge cases, meet all the requirements, scale as needed, etc. That's hard for humans to do, particularly while still moving. I've not seen any of this from AI yet outside of either a) very directed small changes to large complex systems, or b) plugins/extensions/etc. along a well-defined set of rails.

◧◩
2. stego-+Hs[view] [source] 2026-02-02 04:21:12
>>danpal+vd
Enterprise IT dinosaur here, seconding this perspective and the author’s.

When I needed to bash out a quick HashiCorp Packer build file without prior experience beyond a bit of Vault and Terraform, local AI was a godsend at getting me 80% of the way there in seconds. I could read it, edit it, test it, and move much faster than Packer's own thin "getting started" guide would have gotten me. The net result: from zero prior knowledge to a hardened OS image and repeatable pipeline in under a week.
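For anyone who hasn't touched Packer: a build file is a small HCL config like the sketch below. This is purely illustrative (not my actual pipeline); the region, AMI ID, and `harden.sh` script are placeholders.

```hcl
# Minimal Packer template: bake a hardened AMI from a base image.
packer {
  required_plugins {
    amazon = {
      version = ">= 1.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "base" {
  region        = "us-east-1"          # placeholder
  instance_type = "t3.micro"
  source_ami    = "ami-xxxxxxxx"       # placeholder base image
  ssh_username  = "ubuntu"
  ami_name      = "hardened-base-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.base"]

  # All the actual hardening lives in a shell script.
  provisioner "shell" {
    script = "harden.sh"
  }
}
```

Run `packer init .`, `packer validate .`, then `packer build .` and you've got a repeatable image pipeline.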

On the flip side, asking a chatbot about my GPOs? Or trusting it to change network firewalls and segmentation rules? Letting it run wild in the existing house of cards at the core of most enterprises? Absolutely hell no the fuck not. The longer something exists, the more likely a chatbot is to fuck it up by simple virtue of how they’re trained (pattern matching and prediction) versus how infrastructure ages (the older it is or the more often it changes, the less likely it is to be predictable), and I don’t see that changing with LLMs.

LLMs really are a game changer for my personal sales pitch of being a single dinosaur army for IT in small to medium-sized enterprises.

◧◩◪
3. vages+3J[view] [source] 2026-02-02 07:25:07
>>stego-+Hs
Which local AI do you use? I am local-curious, but don’t know which models to try, as people mention them by model name much less than their cloud counterparts.
◧◩◪◨
4. stego-+7U2[view] [source] 2026-02-02 21:22:13
>>vages+3J
I rotate and experiment frequently, specifically because I don't want to be dependent on a single model when everything changes week-to-week; I focus on foundations, not processes. Right now I've got a Ministral 3 14B reasoning model and a Qwen3 8B model on my MacBook Pro; I think my RTX 3090 rig uses a slightly larger-parameter/less-quantized Ministral model by default, and juggles old Gemini/OpenAI "open weights" models as they're released.