zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. serjes+ja[view] [source] 2025-01-21 23:24:37
>>tedsan+(OP)
You have to keep in mind Microsoft is planning on spending almost $100B on datacenter capex this year [1], and they're not alone. This is basically OpenAI matching the major cloud providers' spending.

This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence and with compute accounting for 25–50% of their expenses (currently) [2], this strategy may actually be prudent.

[1] https://www.cnbc.com/2025/01/03/microsoft-expects-to-spend-8...

[2] https://youtu.be/7EH0VjM3dTk?si=hZe0Og6BjqLxbVav&t=1077

◧◩
2. throit+Oa[view] [source] 2025-01-21 23:28:37
>>serjes+ja
Microsoft has lots of revenue streams tied to that capex outlay. Does OpenAI have similar revenue numbers to Microsoft?
◧◩◪
3. tuvang+Uc[view] [source] 2025-01-21 23:42:24
>>throit+Oa
OpenAI has a very healthy revenue stream in the form of other companies throwing money at them.

But to answer your question, no they aren’t even profitable by themselves.

◧◩◪◨
4. manque+7f[view] [source] 2025-01-21 23:55:36
>>tuvang+Uc
> they aren’t even profitable

Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscriptions and API revenue with a healthy operating margin.

Today they would not survive if they stopped investing in R&D, but they do have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs, one that open-weight models or others cannot compete with.

The moat would come either because training is too expensive (few entities have both the budget for $10B+ of training and no need to monetize it), or because such models, even where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. they can only run on ASICs, which only large players will have access to [1].

In this scenario corporations will have to pay them for the best models; when that happens, OpenAI can slow down R&D and become profitable even with capex considered.

[1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. The comparison is slightly distorted by the switch from PoW to PoS and by some coins being intentionally designed for GPUs, but even then you needed DC-scale operations in a cheap-power location to be profitable.

◧◩◪◨⬒
5. Fade_D+Fk[view] [source] 2025-01-22 00:33:51
>>manque+7f
They will have an endless wave of commoditization chasing behind them. NVIDIA will continue to market chips to anyone who will buy... Well anyone who is allowed to buy, considering the recent export restrictions. On that note, if OpenAI is in bed with the US government with this to some degree, I would expect tariffs, expert restrictions, and all of that to continue to conveniently align with their business objectives.

If the frontier models generate huge revenue from big government, intelligence, and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI, rather than it tapering off with diminishing returns on bigger training/inference capital outlays. Obviously, OpenAI is leveraged on that view as well.

Maybe the most important part is that all of these huge names are involved in the project to some degree. They're all cross-linked in the entire AI enterprise, really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.

We'll see. Europe has historically been behind the ball on tech developments like this, and China, although this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship over what these models can do. They want them to be focused tools that help society, but the American companies want much more: power in their own hands and power in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.

◧◩◪◨⬒⬓
6. fragme+Yr[view] [source] 2025-01-22 01:30:38
>>Fade_D+Fk
Why would China censoring Tiananmen Square or whatever out of their LLMs be any more harmful to the training process than US-controlled LLMs censoring certain topics, e.g. "how do I make meth?" or "how do I make a nuclear bomb?"
◧◩◪◨⬒⬓⬔
7. vaccin+jv[view] [source] 2025-01-22 01:51:46
>>fragme+Yr
Because China censors very common words and phrases such as "harmonized", "shameless", "lifelong", "river crabbed", "me too". This is because Chinese citizens initially used puns and common phrases to get around the censors.
◧◩◪◨⬒⬓⬔⧯
8. jiggaw+xO[view] [source] 2025-01-22 04:31:41
>>vaccin+jv
OpenAI models refuse to translate subtitles because they contain violence, sex, or racism.

That’s just a different flavour of enforced right-think.

◧◩◪◨⬒⬓⬔⧯▣
9. tallda+FU[view] [source] 2025-01-22 05:41:05
>>jiggaw+xO
They are absolutely different flavors. OpenAI is not being told by the government to censor violence, sex, or racism - they're being told that by their executives.

News flash: household-name businesses aren't going to repeat slurs if the media will use it to defame them. Never mind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR alone is enough to make them censor their models.

There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.

◧◩◪◨⬒⬓⬔⧯▣▦
10. A4ET8a+bn1[view] [source] 2025-01-22 10:14:40
>>tallda+FU
Sigh. No. Censorship is censorship is censorship. That is true even if you happen to like, and can generate a plausible defense of, the US version, which happens to be business-friendly (as opposed to China's ruling-party-friendly version).
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. ForHac+Ro1[view] [source] 2025-01-22 10:32:55
>>A4ET8a+bn1
> Censorship is censorship is censorship

"if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. A4ET8a+up1[view] [source] 2025-01-22 10:37:48
>>ForHac+Ro1
It is not a take. It is the simple position of "just because you call something 'involuntary semen injection' does not make it any less of a rape". I like things that are clear and well defined. And so I repeat:

Censorship is censorship is censorship.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
13. ForHac+Fs1[view] [source] 2025-01-22 11:16:19
>>A4ET8a+up1
Ok, I guess I'm #TeamProCensorship, then. So is almost everyone.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
14. snapca+9S1[view] [source] 2025-01-22 14:18:29
>>ForHac+Fs1
Yes, that's true. It's very rare for people to actually value free speech. Most people think they do, until they hear something they don't like.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳⚿
15. qwytw+3t5[view] [source] 2025-01-23 19:02:07
>>snapca+9S1
However, private individuals or companies deciding not to offer certain products is itself an expression of free speech.

i.e. denying someone who runs an online platform/community, or trains an LLM, or whatever, the right to remove or not provide specific content is clearly limiting their right to freedom of expression.
