The worry is that customers who do not realize the full depth of the problem will implement their own app using AI. But that happens today, too: people use spreadsheets to manage their electronic parts (please don't) and BOMs (bills of materials). The spreadsheet is my biggest competitor.
I've been designing and building the software for 10 years now, and most of the difficulty and complexity is not in the code. Coding is the last part, and the easiest one. The real value is in understanding the world (the processes involved) and modeling it in a way that strikes a good compromise between ease of use and complexity.
Sadly, as I found out, once you spend a lot of time thinking and come up with a model, copycats will clone it (as well as they can; superficially, at least, it will look similar).
Which I don't think can be replaced by AI in a lot of cases. I think in the software world we are used to things being shared, open, and easily knowable, but a great deal of industry and enterprise domain knowledge is locked up inside companies and will not be in the training data.
That's why it's such a big deal for an enterprise to have on-prem tools: to avoid leaking industry processes and "secrets" (the secrets are boring, but still secrets).
A little career advice in there too, I guess: at least for now, you're a bit more secure as a developer in industries that aren't themselves software.
Yes. I try to visit my customers as often as I can, to learn how they work and to see the production processes on site. I consider it to be one of the most valuable things I can do for the future of my business.
I used to love CL and wrote quite a bit of code in it, but since Clojure came along I can't really see any reason to go back.
You have a product, which sits between your users and what your users want. That product has a UI for users to operate. Many (most, I imagine) users would prefer to hire an assistant to operate that UI for them, since the UI is not the actual value your service provides. Now, s/assistant/AI agent/ and you can see that your product turns into a tool call.
So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want. Many SaaS companies won't like that, because it removes their advertising channel and commoditizes their product.
It's the same reason why API access to SaaS is usually restricted or unavailable to users, except for the biggest customers. LLMs defeat that by turning the entire human experience into an API, without explicit coding.
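For concreteness, here's a minimal sketch of what "product as a tool call" can look like: a hypothetical create_purchase_order action described with an OpenAI-style tool schema. The names and fields are invented for illustration, not any particular product's API.

    # Hypothetical example: the same action a user performs through the UI,
    # described as a tool an AI agent can call (OpenAI-style function schema).
    create_purchase_order_tool = {
        "type": "function",
        "function": {
            "name": "create_purchase_order",
            "description": "Create a purchase order for a list of parts.",
            "parameters": {
                "type": "object",
                "properties": {
                    "supplier": {"type": "string"},
                    "lines": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "part_number": {"type": "string"},
                                "quantity": {"type": "integer"},
                            },
                            "required": ["part_number", "quantity"],
                        },
                    },
                },
                "required": ["supplier", "lines"],
            },
        },
    }

    def create_purchase_order(supplier: str, lines: list[dict]) -> dict:
        # The agent never opens the UI; it calls this and gets structured data back.
        # In a real integration this would hit the same backend the UI uses.
        return {"status": "created", "supplier": supplier, "line_count": len(lines)}

Once the product is reduced to definitions like this, the UI stops being the thing the customer interacts with, which is exactly the commoditization worry above.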
That's ridiculous. A good UI will improve on an assistant in every way.
Do assistants have some use? Sure—querying.
True.
"Good" UI seems to be in short supply these days, even from trillion dollar corporations.
But even with that, it is still not "ridiculous" for many to prefer to "hire an assistant to operate that UI for them". A lot of the complexity in UI design is balancing keeping common tasks highly visible against not burying the occasional-use stuff, while letting users explore and learn what can be done without overwhelming them.
If I want a spaceship in Blender and don't care which one I get (right now the spaceship models any GenAI would give me are "pick your poison" between Diffusion models' weirdness and the 3D equivalent of pelican-on-a-bike weirdness), the easiest UI is to say (or type) "give me a spaceship", not doing all the steps by hand.
If you have some unformatted time series data and want to use it to forecast the next quarter, you could manually enter it into a spreadsheet, or you could say/type "here's a JPG of some time series data, use it to forecast the next quarter".
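For comparison, the manual route amounts to doing by hand what a spreadsheet's trendline would do for you. A rough sketch with made-up monthly figures and a naive linear-trend extrapolation (the numbers are invented, purely illustrative):

    import numpy as np

    # Made-up monthly figures; naive linear-trend forecast for the next quarter.
    months = np.arange(12)
    revenue = np.array([110, 112, 115, 117, 121, 124,
                        126, 130, 133, 137, 140, 144], dtype=float)

    slope, intercept = np.polyfit(months, revenue, 1)   # fit a straight line
    forecast = slope * np.arange(12, 15) + intercept    # the next three months
    print(forecast.round(1))

The point isn't that this is hard; it's that typing a sentence is still less work than transcribing the data and setting any of this up.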
Again, just to be clear, I agree with everyone saying current AI is only mediocre in performance, it does make mistakes and shouldn't be relied upon yet. But the error rates are going down, the task horizons they don't suck at are going up. I expect the money to run out before they get good enough to take on all SaaS, but at the same time they're already good enough to be interesting.
While rolling the whole solution with an AI agent is not practical, taking an open source starting point and using AI to overcome specific workflow pain points, as well as add features, lets me have a lower-cost solution specifically tailored to our needs.
This is actually a serious problem for me: my SaaS has a lot of very complex functionality under the hood, but it is not easily visible, and importantly it isn't necessarily appreciated when making a buying decision. Lot control is a good example: most people think it is only needed for coding batches of expiring products. In reality, it's an essential feature that pretty much everyone needs, because it lets you treat some inventory of the same part (e.g. a reel) differently from other inventory of this part (e.g. cut tape) and track those separately.
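A hypothetical sketch of that distinction (illustrative only, not my actual schema; the part number is just an example): the same part holds several lots, each with its own packaging and quantity, so stock can be summed in total but has to be consumed per lot.

    from dataclasses import dataclass, field

    @dataclass
    class Lot:
        lot_id: str
        packaging: str   # e.g. "reel" or "cut tape"
        quantity: int

    @dataclass
    class Part:
        part_number: str
        lots: list[Lot] = field(default_factory=list)

        def total_stock(self) -> int:
            return sum(lot.quantity for lot in self.lots)

    resistor = Part("RC0603FR-0710KL", [
        Lot("LOT-001", "reel", 5000),      # full reel, fine for the pick-and-place
        Lot("LOT-002", "cut tape", 230),   # leftover strip, only good for rework
    ])
    print(resistor.total_stock())          # 5230, but the lots are not interchangeable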
AI-coding will help people get the features they know they need, but it won't guide them to the features they don't know they could use.
This is a big assumption, and not one I've seen in product testing. Open-ended human language is not a good interface for highly detailed technical work, at least not with the current state of LLMs.
> It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers.
I don't... think this is true? Off the top of my head, aside from cloud providers like AWS/GCP/Azure which obviously provide APIs: Salesforce, Hubspot, and Jira all provide APIs either alongside basic plans or as a small upsell. Certainly not just for the biggest customers. You're probably thinking of social media, where Twitter/Reddit/FB/etc don't really give API access, but those aren't really B2B SaaS products.
That said, the act of doing this (using LLMs to dominate somebody's legitimately intelligent and unique work) feels not only discourteous but, worse, like a short-term solution.
I'm convinced that it's a short-term solution NOT because I don't think that LLMs can continuously maintain these projects, but because open-source itself is going to be clawed back. The raison d'être of open-source is personal pride, hiring, collaboration, enjoyment, trust, etc. These motivations make less sense in an LLM-fueled world.
My prediction is that useful and well-maintained open-source projects like the ones we're hijacking will become fewer and farther between.
So, I kind of know what I'm talking about :-) And I don't miss anything from CL: I honestly can't find a single reason to switch back to CL.
Coding and modeling are interleaved. Prototyping is basically thinking through the models you are considering. If you split the two, you'll end up with a bad model, bad software or both.