I think AI is just allowing everyone to speed-run the innovator's dilemma. Anyone can create a small version of anything, while big orgs will struggle to move as quickly as before.
The interesting bit is going to be whether we see AI being used to mature those small systems into big, complex ones that account for the edge cases, meet all the requirements, scale as needed, etc. That's hard for humans to do, particularly while still moving. I've not seen any of this from AI yet outside of either a) very directed small changes to large complex systems, or b) plugins/extensions/etc. along a well-defined set of rails.
When I needed to bash out a quick HashiCorp Packer buildfile with no prior experience beyond a bit of Vault and Terraform, local AI was a godsend at getting me 80% of the way there in seconds. I could read it, edit it, test it, and move much faster than Packer’s own thin “getting started” guide would have gotten me. The net result: zero prior knowledge to a hardened OS image and a repeatable pipeline in under a week.
On the flip side, asking a chatbot about my GPOs? Or trusting it to change network firewalls and segmentation rules? Letting it run wild in the existing house of cards at the core of most enterprises? Absolutely hell no the fuck not. The longer something exists, the more likely a chatbot is to fuck it up, by simple virtue of how chatbots are trained (pattern matching and prediction) versus how infrastructure ages (the older it is, or the more often it changes, the less predictable it becomes), and I don’t see that changing with LLMs.
LLMs really are a game changer for my personal sales pitch of being a single dinosaur army for IT in small to medium-sized enterprises.
This is essentially what I'm doing too, though I expect in a different country. I'm finding it incredibly difficult to speak to people successfully. How are you making headway? I'm very curious how you message your use of AI to clients/prospective clients in a way that doesn't just come across as "I farm out work to an AI and yolo".
Edit - if you don't mind sharing, of course.
Mostly it's been an excellent way to translate vocabulary between products or technologies for me. When I'm working on something new (e.g., Hashicorp Packer) and lack the specific vocabulary, I may query Qwen or Ministral with what I want to do ("Build a Windows 11 image that executes scripts after startup but before sysprep"), then use its output as a starting point for what I actually want to accomplish. I've also tinkered with it at home for writing API integrations or parsing JSON with RegEx for Home Assistant uses, and found it very useful in low-risk environments.
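To give a flavour of the low-risk tinkering I mean, here's roughly the shape of glue code I end up with (the endpoint, IP, and field name here are made up for illustration, not a real integration of mine):

    import re
    import urllib.request

    # Hypothetical local device returning a flat JSON payload like
    # {"temperature": 21.5, "humidity": 40}
    URL = "http://192.168.1.50/api/status"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        payload = resp.read().decode("utf-8")

    # RegEx is "good enough" for a flat, predictable payload like this;
    # for anything nested, json.loads() is the safer bet.
    match = re.search(r'"temperature"\s*:\s*([0-9.]+)', payload)
    print(match.group(1) if match else "unknown")

The model gets something of that shape right in one shot often enough to be useful; the cleanup (timeouts, error handling, not trusting the payload) is still on me.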
Thus far, they don't consistently spit out functional code. I still have to do a back-and-forth to troubleshoot the output and make it secure and functional within my environments, and that's fine - it's how I learn, after all. When it comes to, say, SQL (which I understand conceptually, but not necessarily specifically), it's a slightly bigger crutch until I can start running on my own two feet.
Still cheaper than a proper consultant or SME, though, and for most enterprise workloads that's good (and cheap) enough once I've sanity checked it with a colleague or in a local dev/sandbox environment.