Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
Today they would not survive if they stopped investing in R&D, but they will have to slow down at some point. It looks like they and the other big players are betting on a moat they hope to build with $100B DCs and ASICs, one that open-weight models and other entrants cannot compete with.
That moat would arise either because training becomes too expensive (few entities have the budget for $10B+ training runs with no need to monetize them), or because even where such models are available, inference may be impossible on off-the-shelf GPUs, i.e. these models can only run on ASICs, which only the large players will have access to[1].
In this scenario corporations will have to pay them for the best models, and when that happens OpenAI can slow down R&D and become profitable even with capex considered.
[1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. The comparison is slightly distorted by the switch from PoW to PoS and the intentionally GPU-friendly design of some coins, but even then you needed DC-scale operations in a cheap-power location to be profitable.
If the frontier models generate huge revenue from big government, intelligence, and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI, rather than it tapering off with diminishing returns on bigger training/inference capital outlays. Obviously, OpenAI is making a leveraged bet on that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. They're all cross-linked in the entire AI enterprise, really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate-style initiatives as well.
We'll see. Europe has historically been behind the curve on tech developments like this, and China, though this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship over what these models can do. They want them to be focused tools that help society, but the American companies want much more: power in their own hands and power in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.
> they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
I tried to Google for more information with this search: <<is openai inference profitable?>> I didn't find any reliable sources about OpenAI. All the sources I could find state the opposite -- that inference costs are far higher than subscription fees.
I hate to ask this on HN... but can you provide a source? Or tell us how you know?
That’s just a different flavour of enforced right-think.
With the hard shift to the right and Trump coming into office, especially the last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" on topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind are the recent "The Nazis were actually left wing" and "There are only two genders".)
News flash: household-name businesses aren't going to repeat slurs if the media will use it to defame them. Never mind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.
There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.
It is just an educated guess, factoring in the per-token cost of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models[2], and the knowledge that Plus subscriptions are probably more profitable[1] than API calls.
It would be hard even for OpenAI to know with any certainty, because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split out, given the nature of the MS relationship[3].
----
[1] This is from experience running LibreChat on the 4o API versus ChatGPT Plus for ~200 users; subscriptions should be more profitable than raw API usage by a factor of 3 to 4x (see the rough sketch below). Of course there will be different types of users and adoption levels; my sample, while not small, is not likely representative of their typical user base.
[2] MS has less incentive to subsidize than, say, OpenAI themselves.
[3] Azure is quite profitable in the aggregate; while it is possibly subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft's financial reports.
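To make the kind of estimate I mean concrete, here is a minimal back-of-envelope sketch. Every number in it (token prices, per-user usage) is a made-up placeholder for illustration, not an actual OpenAI figure:

    # Back-of-envelope: serve one user via raw API vs. a $20 Plus seat.
    # All numbers below are hypothetical placeholders, not real rates.
    API_IN = 2.50    # $ per 1M input tokens (assumed 4o-class pricing)
    API_OUT = 10.00  # $ per 1M output tokens (assumed)
    PLUS = 20.00     # $ per month for a Plus subscription

    def monthly_api_cost(in_tok, out_tok):
        # Cost of one user's monthly traffic at raw API rates.
        return in_tok / 1e6 * API_IN + out_tok / 1e6 * API_OUT

    # Assume a median user: ~500k tokens in, ~500k tokens out per month.
    cost = monthly_api_cost(500_000, 500_000)
    print(PLUS / cost)  # -> 3.2, i.e. in the claimed 3-4x range

Shift the assumed usage or prices and the multiple moves around, which is exactly why I'd only ever call this an educated guess.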
Though I see no reason whatsoever why an LLM should be blocked from answering the "how do I make a nuclear bomb?" query.
So I do question whether OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but it now needs to cover the R&D and training, plus whatever they lose on Pro.
"if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.
Censorship is censorship is censorship.
If you don't believe the US has elections, then straighten up your tinfoil hat :)
Maybe next you'll say the earth is flat, if you think people have nothing better to do than find ways to lie to you.
As far as I am aware the only information from within OpenAI one way or another is from their financial documents circulated to investors:
> The fund-raising material also signaled that OpenAI would need to continue raising money over the next year because its expenses grew in tandem with the number of people using its products.
Subscriptions are the lion's share of their revenue (73%). It's possible they are making money on the average Plus or Enterprise subscription, but given the above claim they definitely aren't making enough to cover the cost of inference for free users.
https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...
Again, consider my example about YouTube - it's not illegal for Google to put pornography on YouTube. They still moderate it out though, not because they want to "censor" their users but because amateur porn is a liability nightmare to moderate. Similarly, I don't think ChatGPT's limitations qualify as censorship.
"Censorship is censorship is censorship" is the sort of defense you'd rely on if you were caught selling guns and kiddie porn on the internet. It's not the sort of defense OpenAI needs to use though, because they have a semblance of self-preservation instinct and would rather not let ChatGPT say something capable of pissing off the IMF or ADL. Call that "censorship" all you want - it's like canvassing for your right to yell 'fire!' in a movie theater.
Friend, neither of those is a body that can declare the constitution of the US null and void. Nor do they get to pick and choose which speech is kosher. It is not up to those orgs to decide.
<< They're accepting your definition of censorship to highlight how fucking stupid it is.
They are accepting it because there is no way it cannot be accepted. Now... just because there is some cognitive dissonance over what should logically follow, that is a separate issue entirely.
Best I can do is spread some seeds..
> Insane thing: we are currently losing money on OpenAI pro subscriptions! people use it much more than we expected.
Ref: https://techstartups.com/2025/01/06/openai-is-losing-money-o...

Of course, the market being extremely concentrated and effectively an oligopoly even in the best case does shine a somewhat different light on it. Until/unless open models catch up both quality- and accessibility-wise.
i.e. denying someone who is running an online platform/community, or training an LLM, or whatever, the right to remove or not provide specific content is clearly limiting their right to freedom of expression.
I was explaining why it is more harmful and thought you were arguing it is not harmful?
Censorship in the West and censorship in China are both done by unelected people.