The closed frontier models seem to sell at a substantial premium over inference on open-source models, which suggests there's a decent margin on inference itself. Training is where they're losing money. The bull case is that every model eventually pays for itself, but the models keep getting bigger, or at least more expensive to train, so the labs are effectively borrowing money now to make even more money later. That has to converge somehow: they can't keep scaling up until the training bill exceeds what the market can actually afford to pay. The bear case is that this is basically a treadmill to stay on the frontier, which is the only place that premium exists. If the big labs ever stop, they'll quickly be caught by cheaper or even open-source models and lose their edge, in which case the business probably never becomes sustainable.
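
The convergence point can be made concrete with a toy model. All the numbers below are invented for illustration, not real lab financials: assume each generation costs some multiple of the last to train, each model eventually earns inference profit proportional to its training cost, and the market caps how much inference revenue any generation can collect.

```python
# Toy cash-flow model of the "borrow now, earn later" bull case.
# Every number here is a made-up assumption for illustration only.
train_cost = 1.0     # cost to train the current frontier model (arbitrary units)
growth = 2.0         # assumption: each generation costs 2x more to train
margin_mult = 1.5    # assumption: lifetime inference profit = 1.5x training cost
market_cap = 100.0   # assumption: max inference spend the market can bear per generation

cash = 0.0
for gen in range(10):
    # Revenue scales with the model until it hits the market's ceiling.
    revenue = min(train_cost * margin_mult, market_cap)
    cash += revenue - train_cost
    train_cost *= growth

print(round(cash, 1))  # → -532.5
```

While revenue can grow with training cost, each generation is net positive and cumulative cash climbs; once training cost outgrows what the market can pay (generation 7 here), each new model is a loss and the total collapses. That's the sense in which the strategy has to converge: the cost curve must flatten before it crosses the market's ceiling.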