As someone who switches between Anthropic and ChatGPT depending on the month and has dabbled with other providers and some local LLMs, I think this fear is unfounded.
It's really easy to switch between models. Each one has quirks you notice over time, but the techniques you learn with one aren't going to lock you into any provider.
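In practice, most hosted providers and local runtimes (Ollama, vLLM, etc.) speak an OpenAI-compatible chat API, so "switching" is often just a different base URL and model name. A rough sketch in Python - the URLs, model names, and keys below are illustrative placeholders, not a definitive list:

    # Minimal sketch of swapping providers behind one interface, assuming each
    # exposes an OpenAI-compatible endpoint (many hosted and local runtimes do).
    # Base URLs, model names, and keys here are illustrative placeholders.
    from openai import OpenAI

    PROVIDERS = {
        "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
        "local":  {"base_url": "http://localhost:11434/v1", "model": "llama3.1"},  # e.g. Ollama
    }

    def ask(provider: str, prompt: str) -> str:
        cfg = PROVIDERS[provider]
        client = OpenAI(base_url=cfg["base_url"], api_key="YOUR_KEY")  # local runtimes accept any key
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask("local", "Explain provider lock-in in one sentence."))

The prompting habits (system prompts, few-shot examples, tool schemas) carry over largely unchanged; only the endpoint and model name differ.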
We have two cell phone platforms. Google is removing the ability to install your own binaries, and the other one has never allowed that freedom. All computing is taxed, and defaults are set to the incumbent monopolies. Searching, even for your own trademark, is a forced bidding war. Businesses have to cede customer relationships, get poached on their own brand terms, and jump through hoops week after week. The FTC and DOJ do nothing, and the EU hasn't done much either.
I can't even imagine what this will be like for engineering once these tools become necessary to do our jobs. We've been spoiled by not needing many tools - other industries, like medicine or industrial research, tie employment to a physical location and a set of expensive industrial tools. Lose your job and you have to physically move, possibly to another state.
What happens when Anthropic and OpenAI ban you? Or decide to only sell to industry?
This is just the start - we're going to become so dependent on these tools that we end up as serfs. We might have two choices, and given the current incumbents, that's demonstrably not a good world.
Computing is quickly becoming a non-local phenomenon. Google and the platforms broke the dream of the open web. We're about to witness the death of the personal computer if we don't do anything about it.
Your job might be assigned to some other legal entity renting some other compute.
If this goes according to some of their plans, we might all be out of the picture one day.
If these systems are closed, you might not get the opportunity to hire them yourself to build something you have ownership in. You might be cut out.
There are multiple frontier models to choose from.
They’re not all going to disappear.
You don’t think that eventually Google/OpenAI are going to go to the government and say, “it’s really dangerous to have all these foreign/unregulated models being used everywhere, could you please get rid of them?” Suddenly they have an oligopoly on the market.
I mean, the long arc of computing history has had us wobble back and forth in terms of how locked down it all was, but it seems we’re almost at a golden age again with respect to good-enough (if not popular) hardware.
On the software front, we definitely swung back from the age of Microsoft. Sure, Linux is a lot more corporate than people admit, but it’s a lot more open than Microsoft’s offerings and it’s capable of running on practically everything except the smallest IoT devices.
As for LLMs: I know people have hyped themselves up to think that if you aren’t chasing the latest LLM release and running swarms of agents, you’re next in the queue for the soup kitchen, but again, I don’t see why it HAS to play out that way - partly because of history (as referenced), partly because open models are already so impressive and I don’t see any reason why they wouldn’t continue to do well.
In fact, I do my day-to-day work using an open-weight model. Beyond that, I can only say that I know employers who will probably never countenance using commercially hosted LLMs, but who are already setting up self-hosted ones based on open-weight releases.
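For anyone curious how low the barrier is, the off-the-shelf tooling is already there. A minimal sketch using Hugging Face transformers - the checkpoint name is just one example of an open-weight release, and hardware requirements vary with model size:

    # Rough sketch of running an open-weight model locally; the checkpoint name
    # is an illustrative example, not a recommendation.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",  # any open-weight instruct checkpoint
        device_map="auto",                 # place layers on available GPU(s)/CPU
    )

    out = generate("Why might a company self-host its LLMs?", max_new_tokens=150)
    print(out[0]["generated_text"])

Employers doing this seriously will usually front it with an inference server (vLLM, TGI, Ollama) that exposes an OpenAI-compatible API, so the application code doesn't need to care whose weights are behind it.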
I don't think we've been in any golden age since the GPU shortages started, and now memory and disks are getting super expensive too.
Hardware vendors have shown they have no interest in serving consumers and will sell out to hyperscalers the moment they wave some green bills. I fear a day when you won't be able to buy powerful (enough) machines and will be forced to subscribe to a commercial provider just to get the compute to do your job.