Not sure why, but the word choice of "consumption" feels like a reverse Freudian slip to me.
But yeah if you're in the industry it's easy to forget how certain jargon sounds based on its dictionary definition
Give a clever, articulate person a task to write about something they don't believe in and they will include the subtlest of barbs, weak praise, or both.
Maybe I just don't get it. Texas seems like an awful place to do business.
If the electric grid — particularly the interconnection queue — is already the bottleneck to data center deployment, is something on this scale even close to possible? If it's a rationalized policy framework (big if!), I would guess there's some major permitting reform announcement coming soon.
Wow.
The AI Stargate Project claims it will "create hundreds of thousands of American jobs". One has doubts.
> All of us look forward to continuing to build and develop ... AGI for the benefit of all of humanity.
Erm, so which one is it? It is amply demonstrable from events post-WW2 that the US and its allies are quite far from benefiting all of humanity; in fact, in some cases, they assist an allied minority at an extreme cost to a condemned majority, for no discernible humanitarian reasons save for some perceived notion of "shared values".
> The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
I'm sorry, has SoftBank suddenly become an American company? I feel like I'm taking crazy pills reading this.
Edit: MGX is an Abu Dhabi company? This is baffling....
> building new AI infrastructure for OpenAI in the United States
The carrot is probably something like - we will build enough compute to make a super intelligence that will solve all the problems, ???, profit.
0. https://www.thewrap.com/trump-open-ai-oracle-stargate-ai-inf...
1. https://www.cbsnews.com/news/trump-announces-private-sector-...
Depending on the part of the state, relatively low costs of living which is helpful if you don't like paying people much. Large areas that are relatively undeveloped or underdeveloped which can mean cheaper land.
Wait, was it supposed to re industrialize the USA?
It will get weirder, but only relatively so, the concept of normalcy always trailing just a little bit behind as we slide
Also I have no doubt that the timing is deliberate and that this is not happening without government endorsement. If I had to guess the US military also is involved in this and sees this initiative as important for national security.
Where is the US government in all this? Why aren't they leading the charge? They obviously have the money.
Interesting that the UAE (MGX) and Japan (Softbank) are bankrolling the re-industrialization of America.
That is why there is the awkward “we’ll continue to consume Azure” sentence in there. Will be interesting to see if it works or if MS starts revving up their lawyers.
Not all rich people are out of their minds, but Masayoshi Son definitely is. The way he handled the WeWork situation was bad...
Texas has a .... unique energy market (literally! They don't connect to the national grid so they can avoid US Government regulations; that way it's not interstate commerce). Because of that, spot prices fluctuate very wildly up and down, depending on the weather, demand, and their large quantity of renewables (Texas is good for solar and wind energy). When the weather is good for renewables they have very cheap electricity (lots of production and they can't sell to anyone outside the state); when the weather is bad they can have incredibly expensive electricity (less production, and they can't buy from anyone outside the state). Larger markets, able to pull from larger pools of producers and consumers, just fluctuate less.
I know some bitcoin miners liked to be in Texas and basically worked as energy speculators: when electricity was cheap they would mine bitcoin, and when it was expensive they shut down their plant; sometimes they even got paid by producers to shut down their plant! I would bet that you could do a lot of that with AI training as well, given good checkpointing.
You wouldn't want to do inference there (which needs to be responsive and doesn't like 'oh this plant is going to shut down in one minute because a storm just came up') but for training it should be fine?
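Roughly the shape of it, as a toy sketch (the price feed, threshold, and "training" step below are all made up for illustration, not a real ERCOT API or a real model):

    # Toy sketch of price-aware training: train when spot power is cheap,
    # checkpoint and idle when it spikes. The price feed and "training"
    # step are simulated stand-ins.
    import random
    import time

    PRICE_CEILING = 60.0     # $/MWh threshold above which we pause (made-up number)
    STEPS_PER_CHECK = 100    # how many steps to train between price checks

    def spot_price_per_mwh():
        # Stand-in for an ERCOT spot-price feed; real prices swing far harder.
        return random.uniform(-10, 200)

    def train_steps(state, n):
        state["step"] += n   # placeholder for n real optimizer steps
        return state

    def save_checkpoint(state):
        print(f"checkpointed at step {state['step']}; idling while power is expensive")

    def run(total_steps=1000):
        state = {"step": 0}
        while state["step"] < total_steps:
            if spot_price_per_mwh() > PRICE_CEILING:
                save_checkpoint(state)
                time.sleep(0.01)   # in reality: power down and wait out the spike
                continue
            state = train_steps(state, STEPS_PER_CHECK)
        return state

    run()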
If compute continues to become cheaper, local training might be feasible in 20 years.
A fraction of this money invested in building homes would end the homelessness problem in the U.S.
I guess the one silver lining here is that when the likely collapse happens, we'll have more clean energy infrastructure to use for more useful things.
For those interested, it looks like Albany, NY (upstate NY) is very likely one of the next growth sites.
[0] https://www.schumer.senate.gov/newsroom/press-releases/schum...
"Hammond, of Texas"
(apologies to those who haven't watched SG-1)
There is credible evidence that leads me to believe that (1) Nippon Steel Corporation, a corporation organized under the laws of Japan . . . might take action that threatens to impair the national security of the United States;
https://bidenwhitehouse.archives.gov/briefing-room/president...
Yes, it would make money for stockholders. But it's much more than that: it's an empire-scale psychological game for leverage in the future.
https://en.wikipedia.org/wiki/Superconducting_Super_Collider...
https://www.amusingplanet.com/2010/12/abandoned-remains-of-s...
It’s incredibly depressing how everyone sees this as something the new administration did in a single day…
(But yes I agree)
Some of these companies do have huge cash reserves they don't know what to do with so if it is $500 billion of private money, I am not going to complain.
I will believe it when I see it, though, and believe that this isn't $100 billion in private money with a free $400 billion US government put option for the "private" investors if things don't go perfectly.
The companies said they will develop land controlled by Wise Asset to provide on-site natural gas power plant solutions that can be quickly deployed to meet demand in the ERCOT.
The two firms are currently working to develop more than 3,000 acres in the Dallas-Fort Worth region of Texas, with availability as soon as 2027
[0] https://www.datacenterdynamics.com/en/news/rpower-and-wise-a...
[1.a] https://enchantedrock.com/data-centers/
[1.b] https://www.powermag.com/vistra-in-talks-to-expand-power-for...
If you ran 7% of a mile in 5 minutes, would you claim you were close to running a 5 minute mile?
Ordinarily a joke would follow, but now America is volunteering to be the punchline.
An announcement of a public AI infrastructure program joined by multiple companies could have been a monumental announcement. This one just looks like three big companies getting permission to make one big one.
Here's the presser, Sam is at 9 minutes in.
https://boeing.mediaroom.com/2015-04-10-Presidents-Varela-Ob...
But then again that's their entire business, so I shouldn't be too surprised.
If you buy the marketing, yeah. But we aren't really seeing that in the tech sector. We haven't seen it succeed in the entertainment sector... it's still fighting for relevance in the medical and defense industries too. The number and quality of jobs that AI replaced is probably still quite low, and it will probably remain that way even after Stargate.
AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago.
> Jenson doesn't need another 1B gallons of water under his belt.
Jensen gets what he wants because he works with the industry. It's funny to see people object to CUDA and Nvidia's dominance but then refuse to suggest an alternative. An open standard managed by an independent and unbiased third-party? We tried that, OEMs abandoned it. NPU hardware tailor-made for specific inference tasks? Too slow, too niche, too often ends up as wasted silicon. Alternative manufacturer-specific SDKs integrated with one high-level library? ONNX tried that and died in obscurity.
Nvidia got where they are today by doing exactly what AMD and Apple couldn't figure out. People give Jensen their water because it's wasted in anyone else's hands.
This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence and with compute accounting for 25–50% of their expenses (currently) [2], this strategy may actually be prudent.
[1] https://www.cnbc.com/2025/01/03/microsoft-expects-to-spend-8...
You're answering your own question:
> potential job creation. Which could be significant
I'm not sure why I've never heard of this being done, it would be a good use of GPUs in between training runs.
Are there any planned future partnerships? Stargate implies something about movies and astronomy. Movies in particular have a lot of military influence, but not always.
So, what's the play? Help mankind or go after mankind?
Also, can I opt-out right now?
That said, I don't think they have the courage to invest even the lower amount that it would take to compete with this. But it's not clear if it's truly necessary either, as DeepSeek is proving that you don't need a billion to get to the frontier. For all we know we might all be running AGI locally on our gaming PCs in a few years' time. I'm glad I'm not the one writing the checks here.
Trump probably wanted to start his presidency with a bang, being a person with excess vanity. The participating companies scored a PR coup.
What is the hard limiting factor constraining software and robots from replacing any human job in that time span? Lots of limitations of current technology, but all seem likely to be solved within that timeframe.
I've been meaning to read a relevant book to today's times called Engines That Move Markets. Will probably get it from the library.
Obey me and live, or disobey and die. The choice is yours.
Gave me a real "this is just smoke and mirrors hiding the fact that the white house is now a glory hole for Trump to enjoy" feel.
unless...
https://www.bbc.com/news/articles/cj4d75zl212o
https://apnews.com/article/trump-apple-tim-cook-tech-0a9fb8e...
You don't need a finance degree to figure out what's happening here. Apple is ripping pages right out of Elon's playbook.
If this is part of SoftBank's existing plan to invest $100bn in AI over the next four years, then all that's being announced here is that Sama and Larry Ellison wanted to stand on a stage beside Trump and remind people about it.
They can break some cryptography... other than that... what are they good for?
There's some highly speculative ideas about using them for chemistry/biology research, but no guaranteed return on investment at all.
As far as I know... that's it.
Tell me you didn’t read the DeepSeek R1 paper without telling me you also don’t know about reinforcement learning.
But to answer your question, no they aren’t even profitable by themselves.
If it's possible to produce intelligence from just ingesting text, then current tech companies have all the data they need from their initial scrapes of the internet. They don't need more. That's different to keeping models up to date on current affairs.
EVERY youtube video?? Even the 9/11 truther videos? Sandy Hook conspiracy videos? Flat earth? Even the blatantly racist? This would be some bad training data without some pruning.
Skilled labor for sure, but not necessarily college educated.
If one is expecting to have an AGI breakthrough in the next few years, this is exactly the prepositioning move one would make to be able to maximally capitalize on that breakthrough.
Also, SoftBank is an investment fund. A lot of its money came from American investors.
June 2024: Oracle joins in - https://www.datacenterdynamics.com/en/news/openai-to-use-oci...
January 2025: Softbank provides additional funding, and they for some reason give credit to Trump?
Or, alternatively, consider that with the policies he has put forward, the president brings investment to the US.
LOL
Under Trump policies, China will win "in the future" on energy and protein production alone.
Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.
No conflict necessary under these policies, just patience! They're playing the game on a scale of centuries, we can't even stay focused on a single problem or opportunity for a few weeks.
Money can't buy fundamental breakthroughs: money buys you parallel experimental volume - i.e. more people working from the same knowledge base, and presumably an increase in the chance that one of them does advance the field. But at any given time point, everyone is working from the same baseline (money also can improve this - by funding things you can ensure knowledge is distributed more evenly so everyone is working at the state of the art, rather than playing catch up in proprietary silos).
Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscriptions and API revenue with a healthy operating margin.
Today they will not survive if they stop investing in R&D, but they do have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs that open weight models or others cannot compete with.
This will be either because training will be too expensive (few entities have the budget for $10B+ on training and no need to monetize it), or because those kinds of models, even where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. they can only run on ASICs, which only large players will have access to[1].
In this scenario corporations will have to pay them for the best models; when that happens OpenAI can slow down R&D and become profitable even with capex considered.
[1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPU to GPU and ASICs in crypto a few years ago. It is a slightly distorted comparison due to the switch from PoW to PoS and the intentionally GPU-friendly design of some coins, but even then you needed DC-scale operations in a cheap-power location to be profitable.
Because tech CEOs have decided to go all-in on fascism as they see it's a way to make money. Bow to Trump, get on his good side, reap the benefits of government corruption.
It's why TikTok thanked Trump in their boot-licking message of "thanks, trump" after he was the one who started the TikTok ban.
A harder question is: why wouldn't billionaires like Trump and his oligarchic kleptocracy?
1. The outlays can be over many years.
2. They can raise debt. People will happily invest at modest yields.
3. They can raise an equity fund.
I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.
They argue for about 4 years, nothing changes, and everyone forgets about it.
Why not continue:
4. They can start a kickstarter or go fund me
5. They can go on Dragons’ Den
…
Deepseek v3 at $5.5M on compute and now r1 a few weeks later hitting o1 benchmark scores with a fraction of the engineers etc. involved ... and open source
We know model prep/training compute has potentially peaked for now ... with some smaller models starting to perform very well as inference improves by the week
Unless some new RL concept is going to require vastly more compute for a run at AGI soon ... it's possible the capacity being built based on an extrapolation of 2024 numbers will exceed the 2025 actuals
Also, can see many enterprises wanting to run on-prem -- at least initially
I read this but it lacks information: https://apnews.com/article/wind-energy-offshore-turbines-tru...
I think there’s a term for that.
> Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT.
It rather means that they see their only chance for substantial progress in Moar Power!
>> "a lot of the code in our apps and including the AI that we generate, is actually going to be built by AI engineers instead of people engineers."
https://www.entrepreneur.com/business-news/meta-developing-a...
Ikea's been doing this for a while:
>> Ingka says it has trained 8,500 call centre workers as interior design advisers since 2021, while Billie - launched the same year with a name inspired by IKEA's Billy bookcase range - has handled 47% of customers' queries to call centres over the past two years.
https://www.reuters.com/technology/ikea-bets-remote-interior...
If they plan to transition off oil/nuclear it will be fun to watch
Governor says our power grid is the best in the universe. Why don't you believe us?
Stop breaking your own rules.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
Let's not ruin HN with overmoderation. This kind of thing is no longer in fashion, right?
Someone else will have to fill in the stocks for:
AI robotics:
Data Center energy:
We all know the cloud/software picks.
What am I missing?
That's not quite the same thing at all as your credit card's revenue stream, as you have an ~18%+ annual interest rate on that revenue stream. If you recall, AMZN (& all startups really) have this mode early in their business where they're over-spending on R&D to grow more quickly than their free cash flow otherwise allows, to stay ahead of competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a strong play because you're leveraging some future value into today's growth.
I'm imagining a future where the US builds a Tower of Babel from thousands of data centers just to keep people employed and occupied. Maybe also add in some paperclip factories¹?
Debt/Equity Fundraising is basically a kickstarter! Remarkably similar.
“My NEW Official Trump Meme is HERE! It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW.”
Your calibration is probably fine, stargate is not a means to achieve AGI, it’s a means to start construction on a few million square feet of datacenters thereby “reindustrializing America”
Build it on federal land.
> unless it is powered by a nuclear reactor
From what I’m hearing, this is in play. (If I were in nuclear, I’d find a way to get Greenpeace to protest nuclear power in a way that Trump sees it.)
Looks like the dollar printing press will continue to overheat in the coming years.
I’ve been advocating for a data centre analogue to the Heavy Press Programme for some years [1].
This isn’t quite it. But when I mapped out costs, $1tn over 10 years was very doable. (A lot of it would go to power generation and data transmission infrastructure.)
Could be 5 to 10 with $20+ bn/year in scale and research spend.
Trump is screwing over his China hawks. The anti-China and pro-nuclear lobbies have significant overlap; this could be how Trump keeps e.g. Peter Thiel from going thermonuclear on him.
Thermodynamic neural networks may also basically turn everything on its ear, especially if we figure out how to scale them like NAND flash.
If anything, I would estimate that this is a space-race type effort to “win” the AI “wars”. In the short term, it might work. In the long term, it’s probably going to result in a massive glut in accelerated data center capacity.
The trend of technology is towards doing better than natural processes, not doing it 100000x less efficiently. I don’t think AI will be an exception.
If we look at what is -theoretically- possible using thermodynamic wells, with current model architectures, for instance, we could (theoretically) make a network that applies 1T parameters in something like 1 cm². It would use about 20 watts, back of the napkin, and be able to generate a few thousand T/s.
Operational thermodynamic wells have already been demonstrated in silicon. There are scaling challenges, cooling requirements, etc., but AFAIK no theoretical roadblocks to scaling.
Obviously, the theoretical doesn’t translate to results, but it does correlate strongly with the trend.
So the real question is, what can we build that can only be done if there are hundreds of millions of NVIDIA GPUs sitting around idle in ten years? Or alternatively, if those systems are depreciated and available on secondary markets?
What does that look like?
https://www.youtube.com/watch?v=DNCZHAQnfGU
The key question then becomes one of political priorities and public understanding. If public opposition to beneficial government spending stems from misunderstanding how modern monetary systems work, then better education about these mechanisms could help advance important policy goals. The focus should be on managing real economic constraints rather than imaginary financial ones.
Ellison stated explicitly that this would be "impossible" without Trump.
Masa stated that this (new investment level?) wouldn't be happening had Trump not won, and that the new investment level was decided yesterday.
I know everyone wants to see something nefarious here, but simplest explanation is that the federal government for next four years is expected to be significantly less hostile to private investment, and - shocker - that yields increased private investment.
There is some crypto that we know how to break with a sufficiently large quantum computer [0]. There is some we don't know how to do that to. I might be behind the state of the art here, but when I wasn't we specifically really only knew how to use it to break cryptography that Shor's algorithm breaks.
As to OpenAI, given DeepSeek and the fact that a lot of use cases don't even need real-time inference, it's not obvious this story will end well.
If the frontier models generate huge revenue from big government and intelligence and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI rather than it tapering off with diminishing returns for bigger training/inference capital outlay. Obviously, openAI is leveraging against that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. Well, they're all cross-linked in the entire AI enterprise, really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.
We'll see. Europe has been behind the ball in tech developments like this historically, and China, although this might be a bit of a stretch to claim, does seem to be held back by their need for control and censorship when it comes to what these models can do. They want them to be focused tools that help society, but the American companies want much more, and they want power in their own hands and power in their user's hands. So much like the first round where American big tech took over the world, maybe it's prime to happen again as the AI industry continues to scale.
Sort things out with Venezuela and this issue resolves itself (for a little while, at least).
How long can they maintain their position at the top without the insane cashflow?
This sort of $100-500B budget doesn't sound like training cluster money, more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
Remember Trump's BIG WIN of Foxconn investing $10B to build a factory in Wisconsin, creating 13000 jobs?
That was in 2017. 7 years later, it's employing about 1000 people if that. Not really clear what, if anything, is being made at the partially-built factory. [0]
And everyone's forgotten about it by now.
I expect this to be something along those lines.
[0] https://www.jsonline.com/story/money/business/2023/03/23/wha...
Our military and political focus will be keeping neighbors out on one side and trying to seize land on the other side while China goes and builds infrastructure for the entire developing world that they'll exploit for centuries.
Is this a serious suggestion? America can just keep invading people ad infinitum instead of... applying slight thumb pressure on the market's scales to develop more efficient protein sources and more renewable fuel sources before we are staring at the last raw economic input we have?
Brilliant
You are misdirecting and you know it. I don't even need to discredit that paper. Other people have done it for me already.
1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.
2) intelligence at a certain level is easier to achieve algorithmically when the hardware improves. There's also a larger path to intelligence and often simpler mechanisms
3) most current generation reasoning AI models leverage test time compute and RL in training--both of which can readily make use of more compute. For example, RL on coding against compilers, or on proofs against verifiers.
All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding--rest no comment (idk what superhuman is in a domain with no objective evals)
That's nice, but if I were spending $500bn on datacenters I'd probably try to put a few in places that serve other users. Centralised compute can only get you so far in terms of serving users.
Intent is a funny thing—people usually assume that good intent is sufficient because it's obvious to themselves, but the rest of us don't have access to that state, so it has to be encoded somehow in your actual comment in order to get communicated. I sometimes put it this way: the burden is on the commenter to disambiguate. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I take your point at least halfway though, because it wasn't the worst violation of the guidelines. (Usually I say "this is not a borderline case" but this time it was!) I'm sensitive to regional flamewar because it's tedious and, unlike national flamewar or religious flamewar, it tends to sneak up on people (i.e. we don't realize we're doing it).
[0] - https://arstechnica.com/ai/2025/01/china-is-catching-up-with...
I live, work, and posted this from Texas, BTW...
Also it takes up more than one line on my screen. So, not a "one-liner" either. If you think it is, please follow the rules consistently and enforce them by deleting all comments on the site containing one sentence or even paragraph. My comment was a pretty long sentence (136 chars) and wouldn't come close to fitting in the 50 characters of a Git "one-liner".
Otherwise, people will just assume all the comments are filtered through your unpredictable and unfairly biased eye. And like I said (and you didn't answer), this kind of thing is no longer in fashion, right?
None of this is "borderline". I did nothing wrong and you publicly shamed me. Think before you start flamewars on HN. Bad mod.
It's all very Dr. Strangelove. "Mr. President, we must not allow an AI gap! Now give us billions"
Is this a good investment by SoftBank? Who knows... they did invest in Uber, but also have many bad investments.
This is unfortunately paywalled but a good writeup on how the datacenter came to be: https://www.theinformation.com/articles/why-openai-and-oracl...
What's going to be left of their population in a single century?
If Son can actually build a 500B Vision Fund it can only come from one of two places...
somehow the dollar depreciates radically OR Saudis
Vision Fund was heavily invested in by the Saudis so...
Not sure how they knew to buy them or why but they have them. Mostly seem to be lending them out. Think mostly OpenAI. Or was it MS. One of the big dogs
This 5GW data centre capacity very roughly equates to 350000x NVIDIA DGX B200 (with 14.3kW maximum power consumption[4] and USD$500k price tag[5]) which if NVIDIA were selected would result in a very approximate total procurement of USD$175b from NVIDIA.
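Back-of-the-envelope, in case anyone wants to check those numbers (nameplate power and list price only, ignoring cooling, networking, and facility overhead):

    # Rough check of the figures above.
    capacity_w = 5e9            # 5 GW of data centre capacity
    dgx_b200_w = 14_300         # DGX B200 maximum power consumption, watts [4]
    dgx_b200_price = 500_000    # approximate unit price, USD [5]

    units = capacity_w / dgx_b200_w
    print(f"{units:,.0f} systems")                    # ~349,650, i.e. "350000x"
    print(f"${units * dgx_b200_price / 1e9:.0f}bn")   # ~$175bn of GPU procurement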
On top of the empty data centres and the DGX B200s, the remaining (potential) USD$265b also has to cover:
* Networking equipment / fibre network builds between data centres.
* Engineering / software development / research and development across 4 years to design, build and be able to use the newly built infrastructure. This was estimated in mid 2024 to cost OpenAI US$1.5b/yr for retaining 1500 employees, or USD$1m/yr/employee[7]. Obviously this is a fraction of the total workforce needed to design and build out all the additional infrastructure that Microsoft, Oracle, etc would have to deliver.
* Electricity supply costs for current/initial operation. As an aside, these costs would seemingly not be competitive with other global competitors if the USA decides to avoid the cheapest method of generation (renewables) and instead prefers more expensive generation methods (nuclear, fossil fuels). It is however worth noting that China currently has ~80% of solar PV module manufacturing capacity and ~95% of wafer manufacturing capacity.[10]
* Costs for obtaining training data.
* Obsolescence management (4 years is a long time after which equipment will likely need to be completely replaced due to obsolescence).
* Any other current and ongoing costs of Microsoft, Oracle and OpenAI that they'll likely roll into the total announced amount to make it sound more impressive. As an example this could include R&D and sustainment costs in corporate ICT infrastructure and shared services such as authentication and security monitoring systems.
The question we can then turn to is whether this rate of spend can actually be achieved in 4 years?
Microsoft is planning to spend USD$80bn building data centres in 2025[7] with 1.5GW of new capacity to be added in the first six months of 2025[3]. This USD$80bn planned spend is for more than "Stargate" and would include all their other business units that require data centres to be built, so the total required spend of USD$45b-$75b to add 5GW data centre capacity is unlikely to be achieved quickly by Microsoft alone, hence the apparent reason for Oracle's involvement. However, Oracle are only planning a US$10b capital expenditure in 2025, equating to ~0.8GW capacity expansion[9]. The data centre builds will be schedule-critical for the "Stargate" project because equipment can't be installed and turned on, and large models trained (a lengthy activity), until data centres exist. And data centre builds are heavily dependent on electricity generation and transmission capacity, which is slow to expand.
[1] >>39869158
[2] https://www.datacenterdynamics.com/en/news/microsoft-openai-...
[3] https://www.datacenterdynamics.com/en/news/microsoft-to-doub...
[4] https://resources.nvidia.com/en-us-dgx-systems/dgx-b200-data...
[5] https://wccftech.com/nvidia-blackwell-dgx-b200-price-half-a-...
[6] https://www.cushmanwakefield.com/en/united-states/insights/d...
[7] https://blogs.microsoft.com/on-the-issues/2025/01/03/the-gol...
[8] https://www.datacenterdynamics.com/en/news/openai-training-a...
[9] https://www.crn.com.au/news/oracle-q3-2024-ellison-says-ai-i...
[10] https://www.iea.org/reports/advancing-clean-technology-manuf...
This is Abu Dhabi money.
Pretty sure that was Musk and his $50+ bn bonus
A calculator is superhuman if you're prepared to put up with its foibles.
Wouldn't a more northern state be a better location given the average temperatures of the environment? I've heard Texas is hot!
Data center, AI and nuclear power stations. Three advanced technologies, that's pretty good.
https://www.reuters.com/technology/artificial-intelligence/t...
no one wants to bite the hand that feeds.
Who gets the benefit of all of this investment? Are taxpayers going to fund this thing which is monetized by OpenAI?
If we pay for this shit, it better be fucking free to use.
> Ellison noted that the data centers are already under construction with 10 being built so far.
Do you mean building the centers or maintenance or both?
My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.
What recent outputs like o1, o3, DeepSeek-R1 are showing is that that's fine, we now have a new paradigm around test-time compute. For various reasons people think this is going to be more scalable and not run into the kind of data issues you'd get with a pre-training paradigm.
You can definitely debate on whether that's true or not but this is the first time I've been really seeing people think we've cracked "it", and the rest is scaling, better training etc.
China is the largest importer of crude oil in the world. China imports 59% of its oil consumption, and 80% of food products. Meanwhile, the US is fully self-sufficient on both food and oil.
> They're playing the game on a scale of centuries
Is that why they are completely broke, having built enough ghost buildings to house the entire population of France (65 million vacant units)? Is that why they are now isolated in geopolitics, having allied with Russia and pissed off all their neighbors and Europe?
https://www.theguardian.com/business/2023/may/18/bt-cut-jobs...
>> “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said. “There is a sort of 10,000 reduction from that sort of automated digitisation, we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.”
Goldman Sachs:
https://www.gspublishing.com/content/research/en/reports/202...
>> Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.
China is dead broke and will shrink to 600M in population before 2100. State-owned enterprises are eating up all the private enterprises. Meanwhile, China's rich leave the country by the tens of thousands per year, and capital outflow increases every year.
One, capable of replacing some large proportion of global GDP (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...
Two, it being difficult to find problems which an average human can solve but the model cannot. The problem with this definition is that the distinct nature of AI intelligence and the broadness of tasks is such that this metric is probably only achievable after AI is already, in reality, massively superhuman intelligence in aggregate. Compare this with Go AIs, which were massively superhuman and often still failed to count ladders correctly--which was also fixed by more scaling.
All in all I avoid the term AGI because for me AGI means comparing average intelligence on broad tasks relative to humans, and I'm already not sure if that's achieved by current models, whereas superhuman research math is clearly not achieved because humans are still making all of the progress on new results.
I doubt the US choice of energy generation is ideological as much as practical. China absolutely dominates renewables, with 80% of solar PV modules manufactured in China and 95% of wafers manufactured in China.[3] China installed a world record 277GW of new solar PV generation in 2024, which was a 45% year-on-year increase.[4] By contrast, the US only installed ~1/10th this capacity in 2024, with only 14GW of solar PV generation installed in the first half of 2024.[5]
[1] https://en.wikipedia.org/wiki/Cost_of_electricity_by_source
[2] https://www.iea.org/data-and-statistics/charts/lcoe-and-valu...
[3] https://www.iea.org/reports/advancing-clean-technology-manuf...
[4] https://www.pv-magazine.com/2025/01/21/china-hits-277-17-gw-...
[5] https://www.energy.gov/eere/solar/quarterly-solar-industry-u...
In 2023 China had more net new solar capacity than the US has in total, and it will only climb from there. In order to do this, they're flexing muscles in R&D and mass production that the US has actually started to flex, and now will face extreme headwinds and decreased capital investment.
Regarding agriculture: America's agricultural powerhouse, California's Central Valley, is rapidly depleting its water supplies. The midwest is depleting its topsoil at double the rate that USDA considers sustainable.
None of this is irreversible or irrecoverable, but it very clearly requires some countervailing push on market forces. Market forces do not naturally operate on these types of time scales and repeatedly externalize costs to neighbors or future generations.
https://www.nature.com/articles/s41467-022-35582-x
https://www.smithsonianmag.com/smart-news/57-billion-tons-of...
Uh yeah, duh. Why would you not deplete other people's finite resources while you build massive capacity of your own infinite resources?
Softbank should not be allowed to invest more than ARM Holdings sold at a loss.
Instead of figuring that out, they'll just watch their civilization crumble.
Btw: they're already investing heavily in artificial wombs and affiliated technologies.
Inference isn’t cheap either.
Part of the reason that kids need less material is that they aren't just listening, they are also able to do experiments to see what works and what doesn't.
Maybe your calibration isn't poor. Maybe they really are all wrong, but there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people really genuinely believe they're going to get there. And if you genuinely think that, then this kind of investment isn't so crazy.
I'm curious why that is. If we know how to build it, it shouldn't take that long. It's not like we need to move a massive amount of earth or pour a humongous amount of concrete or anything like that, which would actually take time. Then why does it take 15 years to build a reactor with a design that is already tried and tested and approved?
The relationship between inflation and monetary policy is more complex than often portrayed. While recent inflation has created financial strain for many Americans, its root causes extend beyond simple money supply issues. Recent data shows that corporate profit margins reached historic highs during the inflationary period of 2021-2022. For example, in Q2 2022, corporate profits as a percentage of GDP hit 15.5%, the highest level since the 1950s. This surge in corporate profits coincided with the aftermath of Trump's 2017 Tax Cuts and Jobs Act, which reduced the corporate tax rate from 35% to 21%. This tax reduction increased after-tax profits and may have given companies more flexibility to pursue aggressive pricing strategies. Multiple factors contributed to inflation:
- Supply chain disruptions created genuine scarcity in many sectors, particularly semiconductors, shipping, and raw materials
- Demand surged as economies reopened post-pandemic
- Many companies used these market conditions to implement price increases that exceeded their cost increases
- The corporate tax environment created incentives for profit maximization over price stability
For instance, many large retailers reported both higher prices and expanded profit margins during this period. The Federal Reserve Bank of Kansas City found that roughly 40% of inflation in 2021 could be attributed to expanded profit margins rather than increased costs. This pattern suggests that market concentration, pricing power, and tax policy played significant roles in inflation, alongside traditional monetary and supply-chain factors. Policy solutions should therefore address market structure, tax policy, and monetary policy to effectively manage inflation.
Moore's law seems to be against them too... hardware getting more powerful, small models getting more powerful... Not at all obvious that companies will need to rely on cloud models vs running locally (licencing models from whoever wants that market). Also, a lot of corporate use probably isn't that time critical, and can afford to run slower and cheaper.
Of course the US government could choose to wreck free-market economics by mandating powerful models to be run in "secure" cloud environments, but unless other countries did same that might put US at competitive price disadvantage.
This is true for brute force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.
1. https://www.bloomberg.com/news/articles/2024-12-19/anthropic...
2. https://www.bloomberg.com/news/articles/2024-11-13/openai-go...
Can't answer that question, but, if the only thing to change in the next four years was that generation got cheaper and cheaper, we haven't even begun to understand the transformative power of what we have available today. I think we've felt like 5-10% of the effects that integrating today's technology can bring, especially if generation costs come down to maybe 1% of what they currently are, and latency of the big models becomes close to instantaneous.
Just waiting for the current regime to decide that we should go all-in on some big AI venture and bet the whole Social Security pot on it.
The breaking of The Enigma gave humans machines that can spread knowledge to more humans. It already happened a long time ago, and all of it was cause for much trouble, but we endured the hardest part (to know when to stop), and humans live in a good world now. Full of problems, but way better than it was before.
I think the web is enough. LLMs are good enough.
This move to try to draw water from stone (artificial intelligence in sillicon chips) seems to be overkill. How can we be sure it's not a siphon that will make us dumber? Before you just dismiss me or counter my arguments, consider what is happening everywhere.
Maybe I'm wrong, or not seeing something. You know, like I believed in aliens for a long time. This move to artificial intelligence causes shock and awe in a similar way. However, while I do believe aliens do not exist, I am not sure if artificial intelligence is a real strawman. It could be the case that is not made of straw, and if it is more than that, we might have a problem.
I am specially concerned because unlike other polemic topics, this one could lead to something not human that fully understands those previous polemic topics. Humans through their generations forget and mythologize those fantasies. We don't know what non-humans could do with that information.
I have been thinking about these issues for a long time. Almost a decade, even before LLMs running on silicon existed. If it wanted, a non-human artificial intelligence could wipe the floor with humans just by playing to their favorite myths. Humans do it on a small scale. If machines learn it, we're in for an unknown hostile reality.
It could, for example, perceive time different from us (also a play on myths), and do all sorts of tricks with our minds.
LLMs and the current generation of artificial intelligence are boolean first, it's what they run. Only true or false bits and gates. Humans can understand the meaning of trulse though, we are very non boolean.
So, yeah, I am worried about booleaning people on a massive scale.
Yep, long wall of text. Sorry about that.
I expect those who really understand those questions to get my point.
I'm sure they're getting tax credits for investment (none of the articles I can find actually detail the US gov involvement) but the project is mostly just a few multinationals setting up a datacenter where their customers are.
> Other partners in the project include Microsoft, investor MGX and the chipmakers Arm and NVIDIA, according to separate statements by Oracle and OpenAI.
Technology advancing more quickly year over year?
That’s a crazy notion and I’ll be sure everyone knows.
Also, what a wild thing to say. “People like you deserve to live in poverty because you don’t think we live in a sci-fi world.”
Calm down, dude.
> The Stargate Project is a new company which intends to invest $500 billion over the next four years
No-till farming has been significantly supported by the USDA’s programs like EQIP
During his first term, Trump pushed for a $325MM cut to EQIP. That's 20-25% of their funding and would have required cutting hundreds if not thousands of employees.
Even BEFORE these cuts (and whatever he does this time around), USDA already has to reject almost 75% of eligible EQIP applicants
Regarding CA’s water: Trump already signed an EO requiring more water be diverted from the San Joaquin Delta into the desert Central Valley to subsidize water-intensive crops. This water, by the way, is mostly sold to mega-corps at rates 98% below what nearby American consumers pay via their municipal water supplies, effectively eliminating the blaring sirens that say “don’t grow shit in the desert.”
Now copy-paste to every other mechanism by which we can increase our nation’s climate security and ta-da, you’ve discovered one of the major problems with Trumpism. It turns out politics do matter!
https://www.reuters.com/markets/deals/japans-seven-i-deal-re...
As I understand it there wasn't anything to select, this is their own private money to be spent as they please. In this case Stargate.
Make hay while the sun shines.
We’re not doing tried and tested.
> Department of Energy does not allow "off-the-cuff" designs for reactor
Not by statute!
When you're the biggest fossil fuel producer in the world, it's vital that you stay laser-focused on regulating nuclear power to death in every imaginable detail while you ignore the vast problems with unchecked carbon emissions and gaslight anyone who points them out.
> The president indicated he would use emergency declarations to expedite the project’s development, particularly regarding energy infrastructure.
> “We have to get this stuff built,” Trump said. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.
https://www.theguardian.com/us-news/2025/jan/21/trump-ai-joi...
Mega Project Rankings (USD Inflation Adjusted)
- The New Deal: $1T
- Interstate Highway System: $618B
- OpenAI Stargate: $500B
- The Apollo Project: $278B
- International Space Station: $180B
- South-North Water Transfer: $106B
- The Channel Tunnel: $31B
- Manhattan Project: $30B
Insane Stuff.
Yes, the data center itself will create some permanent jobs (I have no real feel for this, but guessing less than 1000).
There'll be some work for construction folk of course. But again seems like a small number.
I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
What if we create a new post to leverage AI generally? Kinda like the way we have a marketing post, and a chunk of the daily work there is Adwords.
Once we start guesstimating the jobs created by the existence of an AI data center, we're in full speculation mode. Any number really can be justified.
Of course ultimately the number is meaningless. It won't create that many "local jobs" - indeed most of those jobs, to the degree they exist at all, will likely be outside the US.
So you don't need to wait for a post-mortem. The number is sucked out of thin air with no basis in reality for the point of making a good political sound bite.
Or they think the odds are high enough that the gamble makes sense. Even if they think it's a 20% chance, their competitors are investing at this scale, their only real options are keep up or drop out.
> Technology advancing more quickly year over year?
> That’s a crazy notion and I’ll be sure everyone knows.
The version I heard from an economist was something akin to a second industrial revolution, where the pace of technological development increases permanently. Imagine a transition from Moore's law-style doubling every year and a half, to doubling every week and a half. That wouldn't be a true "singularity" (nothing would be infinite), but it would be a radical change to our lives.
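Purely to illustrate that framing (not a prediction), here is what those two doubling times do over a single year:

    # Illustrative only: growth over one year under each doubling time.
    weeks_per_year = 52

    moore_style = 2 ** (weeks_per_year / (1.5 * 52))   # doubling every ~18 months
    post_shift  = 2 ** (weeks_per_year / 1.5)          # doubling every 1.5 weeks

    print(f"{moore_style:.2f}x per year")   # ~1.6x
    print(f"{post_shift:.1e}x per year")    # ~2.7e10x: a qualitatively different regime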
Edit: Hey we can solve the obesity crisis AND preserve jobs during the singularity!! Win win!
twitter hype is out of control again.
we are not gonna deploy AGI next month, nor have we built it.
we have some very cool stuff for you but pls chill and cut your expectations 100x!
I realize he wrote a fairly goofy blog a few weeks ago, but this tweet is unambiguous: they have not achieved AGI.
The Flood Control Act [0], TVA, Heavy Press, etc.
They all created generally useful infrastructure, that would be used for a variety of purposes over the subsequent decades.
The federal government creating data center capacity, at scale, with electrical, water, and network hookups, feels very similar. Or semiconductor manufacture. Or recapitalizing US shipyards.
It might be AI today, something else tomorrow. But there will always be a something else.
Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility [1]. That was a perfect opportunity for government investment to smooth out market demand spikes. Mass deployed US nuclear in 1980 would have been a game changer.
[0] https://en.m.wikipedia.org/wiki/Flood_Control_Act_of_1928
[1] https://en.m.wikipedia.org/wiki/Offshore_Power_Systems#Const...
Wouldn't surprise me Sam Altman convinced Trump/Son/Ellison that this AI can reverse their aging. And Ellison does have a ton of money - $208bn.
> they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
I tried to Google for more information. I tried this search: <<is openai inference profitable?>> I didn't find any reliable sources about OpenAI. All sources that I could find state this is not true -- inference costs are far higher than subscription fees.
I hate to ask this on HN... but, can you provide a source? Or tell us how do you know?
I could see DFW being a good candidate for a prototype arcology project.
With a state like Texas and a federal government that's on board, these permits would be a much smaller issue. The press conference makes this seem more like "drill baby drill" (drilling natural gas), with direct talk of them spinning up their own power plants.
[1] https://www.kunr.org/npr-news/2024-09-11/how-memphis-became-...
[2] https://www.gevernova.com/gas-power/resources/case-studies/t...
Unless we air strike the data centers, there is no way to control China’s progress
I wonder if this is a sign of the global economic downturn pausing cloud migrations or AI sucking the oxygen out of the room.
That’s just a different flavour of enforced right-think.
it seems like such a simple stat to collect
With the hard shift to the right and Trump coming into office, especially the last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: Should a model consider "both sides" about topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind were the recent "The Nazis were actually left wing" and "There are only two genders".)
We’ve always been getting better at making things better.
I've read that some datacenters run mixed generation GPUs - just updating some at a time, but not sure if they all do that.
It'd be interesting to read something about how updates are typically managed/scheduled.
Not in the same way though. The pace of technological development post-industrial-revolution increased a lot faster - technological development was exponential both before and after, but it went from exponential with a doubling time of maybe a century, to a Moore's law style regime where the doubling time is a couple of years. Arguably the development of agriculture was a similar phase change. So the point is to imagine another phase change on the same scale.
One of the big issues (in the US especially) is that for 20+ years no new plants were built. This caused a large void in the talent pool, inside and outside the industry. That fact, along with others, has caused many problems with some recent projects in the US.
Regardless, I don’t see any change in this pattern. We’re advancing faster than ever before, just like always.
We’ve been doing statistical analysis and prediction for years now. It’s just getting better faster, like always.
I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
There was a big visible jump in text generation capabilities a few years ago (which was preceded by about 6 years of incremental NLP advances) and since then we’ve seen paced, year over year advances in that field.
As a medical layman, I imagine that AlphaFold may really push the rate of pharmaceutical advances.
But I see no indication for a general jump in the rate of rate of technological advancement.
Sure. But you can look at things like GDP growth rates and see the same thing.
> I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
Maybe. I'm just trying to give a sense of what the concept of a "weak singularity" is. I don't have a view on whether we're actually going to have one or not.
She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from: what's my friend's bday? His agent tells yours) automagically, and she is like a friend. Always there for you at your beck and call, like in the movie Her.
Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.
That's just my guess and desire as fervent GPT user, as well a Meta Ray Ban wearer (can't take selfies with glasses).
But why are programs like this controversial, even though anything shaped like a farm subsidy is normally popular? It seems to me that things like your Central Valley analysis are precisely the reason. The Central Valley has been one of the nation's agricultural heartlands for a while, and for quite a few common food products represents 90%+ of domestic production. So if this "blaring siren" you describe is real, and we have to stop farming there, a realistic response plan would have to include an explanation of what all the farmers are going to do and where we'll get almonds and broccoli from.
Perhaps you know all this already, but a lot of people who advocate such policies don't seem to. This then feeds into skepticism about whether they're hearing the "blaring siren" correctly in the first place. Personally, I think nearly arbitrarily extreme water subsidies are worth it if that's what we need to keep olives and pomegranates and celery in stock at the grocery store.
For self-driving you need edge compute because a few milliseconds of latency is a safety risk, but for many applications I don't see why you'd want that.
I'm sure this will easily be true if you count AI as entities capable of doing jobs. Actually, they don't really touch that (if AI develops too quickly, there will be a lot of unemployment to contend with!) but I get the national security aspect (China is full speed ahead on AI, and by some measurements, they are winning ATM).
News flash: household-name businesses aren't going to repeat slurs if the media will use it to defame them. Nevermind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.
There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.
https://x.com/elonmusk/status/1881923570458304780
They don’t actually have the money
So any OpenAI user ( or competitor even) could take it and run a hosted model. You can even tweak the weights if you wanted to.
Why pay for OpenAI access when you can just run your own and save the money?
Seeing how Elon deceives advertisers with false impressions, I could see him giving the same strategy a strong vote of confidence (with the bullshit metrics to back it!)
It doesn't look great so far :)
Altman gets on Trump's good side by giving him credit for the deal.
Trump revoked Biden's AI regulations.
Google researchers invented the transformer
And Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more or less concrete goal to achieve?
Also, a few items for comparison that I googled while thinking about it:
- Yucca Mountain Nuclear Waste Repository: $96B
- ITER: $65B
- Hubble Space Telescope: $16B
- JWST: $11B
- LHC: $10B
Sources:
https://jameswebbtracker.com/jwst/budget
We've seen with oAI and Anthropic, and rumoured with Google, that holding your "best" model and using it to generate datasets for smaller but almost as capable models is one way to go forward. I would say that this shows the "big models" are more capable than it would seem and that they also open up new avenues.
We know that Meta used L2 to filter and improve its training sets for L3. We are also seeing how "long form" content + filtering + RL leads to amazing things (what people call "reasoning" models). Semantics might be a bit ambitious, but this really opens up the path towards -> documentation + virtual environments + many rollouts + filtering by SotA models => new dataset for next gen models.
That, plus optimisations (early exit from meta, titans from google, distillation from everyone, etc) really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall, or move around it.
Besides what ImJamal said, as a wealthy playboy man-about-town hanging out at Studio 54 in the '70s and '80s, I guarantee Trump has known and been friends with more gays than 95% of Americans. Certainly there has been no shortage of gay people among his top-level appointees in either his first or second administrations.
- Google has a massive data center division (Google Cloud / GCP) and a massive AI product division (Deep Mind / Gemini).
- Microsoft has a massive data center division (Azure) but no significant AI product division; for the most part, they build their "Copilot" functionality atop their partner version of the OpenAI APIs.
- Amazon has a massive data center division (Amazon Web Services / AWS) but no significant AI product division; for the most part, they are hedging their bets here with an investment in Anthropic and support for running models inside AWS (e.g. Bedrock).
- Oracle has a massive data center division (Oracle Cloud / OCI) but no significant AI product division.
Now look at OpenAI by comparison. OpenAI has no data center division, as the whole company is basically the AI product division and related R&D. But, at the moment, their data centers come exclusively from their partnership with Microsoft.
This announcement is OpenAI succeeding in a multi-party negotiation with Microsoft, Oracle, and the new administration of the US Gov't. Oracle will build the new data centers, which it knows how to do. OpenAI will use the compute in these new data centers, which it knows how to do. Microsoft granted OpenAI an exception to their exclusive cloud compute licensing arrangement, due to this special circumstance. Masa helps raise the money for the joint venture, which he knows how to do. US Gov't puts its seal on it to make it a more valuable joint venture and to clear regulatory roadblocks for big parallel data center build-outs. The current administration gets to take credit as "doing something in the AI space," while also framing it in national industrial policy terms ("data centers built in the USA").
The clear winner in all of this is OpenAI, which has politically and economically navigated its way to a multi-cloud arrangement, while still outsourcing physical data center management to Microsoft and Oracle. Probably their deal with Oracle will end up looking like their deal with Microsoft, where the trade is compute capacity for API credits that Oracle can use in its higher level database products.
OpenAI probably only needs two well-capitalized hardware providers competing for their CPU+GPU business in order to have a "good enough" commodity market to carry them to the next level of scaling, and now they have it.
Google increasingly has a strategic reason not to sell OpenAI any of its cloud compute, and Amazon could be headed in that direction too. So this was more strategically (and existentially) important to OpenAI than one might have imagined.
They really got together the supervillains of tech.
Feels like the only reason Zuck is missing is Elon's veto.
Be the definitive first past the post in the budding "AI" industry.
Why? He who wins first writes the rules.
For an obvious example: the aviation industry uses feet and knots instead of metres because the US invented and commercialized aviation.
Another obvious example: Computers all speak ASCII (read: English) and even Unicode is based on ASCII because the US and UK commercialized computers.
If you want to write the rules you must win first, it is an absolute requirement. Runner-ups and below only get to obey the rules.
What would you say is the strongest evidence for this statement?
My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.
Yesterday I repeatedly used OpenAI’s API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human certainly would not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.
Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.
https://situational-awareness.ai/racing-to-the-trillion-doll...
A 500B dollar investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, something your funders must be really damn convinced is worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project scales of investment. Careers have been nuked for far less.
Once you have one working design for the environment (e.g. hot desert vs cold and humid), you can stamp the things out with minimal variation between the two.
The maintenance of all of that supporting infrastructure is the same standard blue-collar work either way.
The only new blue collar job on the maintenance side is responding to hardware issues. What this entails depends on if it’s a colo center and you’re doing “remote hands” for a customer where you’re swapping a PSU, RAM, or whatever. You also install new servers, switches, etc.
As you move up into hyperscalers the logistics vary because some designs make servicing a single server in place not worth cooling the whole hot aisle (Google runs really hot hot aisles that weren’t human friendly). So sometimes you just yank the server and throw it in a cart or wait for the whole rack to fail and pull it then.
Overall though, anything that can be done remotely is. So the data center techs do very little work at the keyboard.
It has been quite clear for a while we'll shoot past human-level intelligence since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).
There is this pesky detail about manufacturing 100k treadmills, but let's not get bothered by details now; the current must flow.
I can see your point re: running locally, but there's no reason OpenAI can't release version 0.1, and how many times are you left without an internet connection on your current phone?
Overall I hate Apple now; it's so stale compared to GPT's iPhone app. I nerd-rage at dumbass Siri.
Yes, a very interesting project; similar power output to an AP1000. Would have really changed the energy landscape to have such a deployable power station. https://econtent.unm.edu/digital/collection/nuceng/id/98/rec...
Perhaps.
For context see https://masdar.ae/en/news/newsroom/uae-president-witnesses-l... which is a bit further south than the bulk of Texas and has not yet been built; 5.2GW of panels, 19GWh of storage. I have seen suggestions on Linkedin that it will be insufficient to cover a portion of days over the winter, meaning backup power is required.
It is just an educated guess, factoring in the per-token cost of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models[2], and the observation that Plus subscriptions are probably more profitable[1] than API calls (rough sketch below the footnotes).
It would be hard for even OpenAI to know with any certainty, because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split given the nature of the MS relationship[3].
----
[1] This is from experience running LibreChat with 4o versus ChatGPT Plus for ~200 users; subscriptions should be more profitable than raw API usage by a factor of 3 to 4x. Of course there will be different types of users and adoption levels; my sample, while not small, is likely not representative of their typical user base.
[2] MS has less incentive to subsidize than, say, OpenAI themselves.
[3] Azure is quite profitable in the aggregate; while it may be subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft's financial reports.
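Rough sketch (Python) of the kind of math behind [1]. Every number here is an assumption of mine for illustration (a hypothetical "typical" Plus user and illustrative 4o-class API prices), not OpenAI's actual usage data, prices, or costs:

    # Back-of-the-envelope: Plus subscription revenue vs. raw API cost for one
    # hypothetical "typical" user. All figures are assumptions for illustration.
    SUBSCRIPTION_PRICE = 20.00        # USD per month for Plus

    MESSAGES_PER_MONTH = 300          # assumed usage of a typical Plus user
    INPUT_TOKENS_PER_MESSAGE = 1500   # assumed prompt + conversation context
    OUTPUT_TOKENS_PER_MESSAGE = 700   # assumed completion length

    INPUT_PRICE_PER_M = 2.50          # assumed USD per 1M input tokens
    OUTPUT_PRICE_PER_M = 10.00        # assumed USD per 1M output tokens

    def monthly_api_cost() -> float:
        """Cost of serving the same traffic at the assumed API list prices."""
        input_cost = MESSAGES_PER_MONTH * INPUT_TOKENS_PER_MESSAGE / 1e6 * INPUT_PRICE_PER_M
        output_cost = MESSAGES_PER_MONTH * OUTPUT_TOKENS_PER_MESSAGE / 1e6 * OUTPUT_PRICE_PER_M
        return input_cost + output_cost

    cost = monthly_api_cost()
    print(f"API-equivalent cost: ${cost:.2f}/month, subscription: ${SUBSCRIPTION_PRICE:.2f}, "
          f"ratio: {SUBSCRIPTION_PRICE / cost:.1f}x")

Under these guesses the ratio comes out north of 6x; assume heavier usage and it drops toward the 3-4x I observed. The point is only that a flat $20/month can comfortably exceed the per-token cost of serving a median user.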
The argument presented in the quote there is: “everyone in AI foundation companies are putting money into AI, therefore we must be near AGI.”
The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.
Though I see no reason whatsoever why an LLM should be blocked from answering a "how do I make a nuclear bomb?" query.
Recent advances in quantum error correction significantly increase confidence that quantum computers are practical.
We can argue about timelines. I suspect it is too early for startups to be raising funds for quantum computers at this stage.
Source: I worked in quantum computing research.
Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in a sense "it exists", but real in a sense "it is meaningful". And assigning that acronym to a project doesn't make up a goal.
I'm not claiming that AGI, ASI, AXY or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is some flat enough surface to drive on. But "superintelligence"? There's no good enough definition of "intelligence", let alone "artificial superintelligence". I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my mind. Well, it wasn't exactly "general", but neither are humans, and it's an outdated acronym anyway.
So it's fun and all when people are "just talking", because making up bullshit is a natural human activity and somebody's profession. But when we are talking about the goal of a project, it implies something specific, measurable… you know, that SMART acronym (since everybody loves acronyms so much).
maybe i am getting too old or too friendly to humans, but it's staggering to me what the priorities are for things like this.
Despite the fact that this is THE thing I'd be the happiest to see in the real world (having spent a considerable amount of my career in companies working towards this vision), we are so far from it (as anyone who actually worked on these problems will attest) that Altman's comment here isn't just overselling, it's a blatant lie about this tech's capabilities.
I guess the pitch was something like: "hey o3 can already do PhD level maths so you know in 5 years it will be able to do drugs too, and cure shit, Mr President".
Trouble is o3 can't do advanced math (or at least definitely not at the level openai claimed.. it was a lie, it turns out openai funds the dataset that measures this - ouch). And the bigger problem is, going from "ai can do maths" to "invent cures" is about a 10-100 X jump. If it wasn't, don't we think the pharma companies would have solved this by hiring lots of "really smart math guys"?
As anyone in biotech will tell you, the hard bit is not the first third of the drug discovery pipeline (where 99% of ai driven biotechs focus). It's the later parts where the rubber meets the road.. i.e. where your precious little molecule is out in the real world with real people where the incredible variability of real biological hosts makes most drugs fail spectacularly. You can't GPT your way out of this. The answers for this is not in science papers that you can just read and regurgitate a version that "solves biology and cures diseases".
To solve this you need AI but most of all you have to do science. Real science. In the lab, in vitro and in Vivo, not just in silico, doing ablation studies, overfitting famous benchmark datasets and other pseudo science shit the ML community is used to doing.
That is all to say, I'd bet we won't see a single purely AI designed novel drug in the clinic in this decade. All parts of that sentence are important. Purely AI designed. Novel. But that's for another post..
Now, back to Altman. If you watch the clip, he almost did the smart thing at first when Trump put him on the spot and said "I have no idea about healthcare, biotech (or AI beyond board room drama)" but then could not resist coming up with this outlandish insane answer.
Famously (in tech circles anyway) Paul Graham wrote more than a decade ago about Altman that he's the most strong willed individual he's ever met, who can just bend the universe to his will. That's his super skill. And clearly.. convincing SoftBank and Oracle to do this 500 billion investment for OpenAI (a non profit turned for profit) is an unbelievable achievement. I have no idea what Altman can say (or do) in board rooms that unlocks these possibilities for him.. Any ideas? Let me know!
So I do question if OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but now it will need to cover the R&D and training, plus whatever they lose on Pro.
I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.
I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way and has recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out algorithms to use it, so that bit gets sorted in a few years (as written about by Kurzweil, Wait But Why and similar).
So it's not so much brute forcing AGI so much that exponential growth makes it inevitable at some point and that point is probably quite soon. At least that seems to be what they are betting.
The annual global spend on human labour is ~$100tn so if you either replace that with AGI or just add $100tn AGI and double GDP output, it's quite a lot of money.
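To make the "brain-level compute for modest money" bit concrete, here is a rough sketch under loudly stated assumptions: the brain-equivalent FLOP/s figure, the accelerator throughput and price, and the doubling time are all guesses plugged in for illustration, not established numbers.

    # Sketch of the "compute per dollar passes brain level" argument.
    # Every constant below is an assumption for illustration only.
    BRAIN_FLOPS = 1e16       # assumed effective compute of a human brain (highly uncertain)
    GPU_FLOPS = 1e15         # assumed throughput of one modern accelerator
    GPU_PRICE = 30_000.0     # assumed price of that accelerator in USD
    DOUBLING_YEARS = 2.5     # assumed doubling time of compute per dollar

    def brain_equivalent_cost(years_from_now: float = 0.0) -> float:
        """USD of hardware for brain-equivalent raw compute, N years out."""
        cost_today = (BRAIN_FLOPS / GPU_FLOPS) * GPU_PRICE
        return cost_today / (2 ** (years_from_now / DOUBLING_YEARS))

    for years in (0, 5, 10):
        print(f"in {years:>2} years: ~${brain_equivalent_cost(years):,.0f}")
    # ~$300,000 today, ~$19,000 in ten years under these assumptions.

If anything close to those guesses holds, "hire bright PhDs to find the algorithms once the compute is cheap" stops being a moonshot and starts being a line item.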
Even when accounting for announced capacity expansion, the USA is currently on target to remain a very small player in the global market with announced capacity of 33GW/yr polysilicon, 13GW/yr ingots, 24GW/yr wafers, 49GW/yr cells and 83GW/yr modules (13GW/yr sovereign supply chain limitation).
In 2024, China completed sovereign manufacturing of ~540GW of modules[2] including all precursor polysilicon, ingots, wafers and cells. China also produced and exported polysilicon, ingots, wafers and cells that were surplus to domestic demand. Many factories in China's production chain are operating at half their maximum production capacity due to global demand being less than half of global manufacturing capacity.[3]
[1] https://seia.org/research-resources/solar-storage-supply-cha...
[2] Estimated figure extrapolated from Jan-Oct 2024 data (10 months). https://taiyangnews.info/markets/china-solar-pv-output-10m-2...
[3] https://dialogue.earth/en/business/chinese-solar-manufacture...
> there are no working analogs in the US to use as an approved guide
small reactors have been installed on ships and submarines for over 70(!) years now. Reading up on the very first one, USS Nautilus, "the conceptual design of the first nuclear submarine began in March 1950" it took a couple of years? So why is it so unthinkably hard 70 years later, honest question? "Military doesn't care about cost" is not good enough, there are currently about >100 active ones with who knows how many hundreds in the past, so they must have cracked the cost formula at some point, besides by now we have hugely better tech than the 50's, so what gives?
https://en.wikipedia.org/wiki/United_States_involvement_in_r...
This completely ignores storage and the ability to control the output depending on needs. Instead of LCOE the LFSCOE number makes much more sense in practical terms.
The two are qualitatively different.
That doesn't seem to be much of a thing these days. If you look at Russia/Ukraine or China/Taiwan there's not much scarcity. It's more bullying dictator wants to control the neighbours issues.
Like which wars in this century?
Is that the correct figure? Because that's in Japanese yen, which would be more like $2.2B USD?
Also, "Dario Amodei says what he has seen inside Anthropic in the past few months leads him to believe that in the next 2 or 3 years we will see AI systems that are better than almost all humans at almost all tasks"
Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
If you ignore Gaza and whole of Africa, maybe.
Then again, it's more of a logistics challenge, and if e.g. California were to invade Canada for its water supply, how are they going to get it all the way down there?
I can see it happening in Africa though; a long string of countries rely on the Nile, but large hydropower dams built in Sudan and Ethiopia are reducing the water flow, which Egypt is really not happy about as it's costing them water supply and irrigated land. I wouldn't be surprised if Egypt and its allies declared war on those countries and aimed to have the dams broken. Then again, that's been going on for some years now and nothing has happened yet as far as I'm aware.
(the above is armchair theorycrafting from thousands of miles away based on superficial information and a lively imagination at best)
Setting the world on fire and disrupting societies gleefully, while basically building bunkers (figuratively more than literally) and consolidating surveillance and propaganda to ride out the cataclysm, that's what I'm seeing.
And the stories to sell people on continuing to put up with that are not even good IMO. Just because the people who use the story to consolidate wealth and control are excited about that, we're somehow expected to be excited about the promise of a pair of socks made from barbed wire they gave us for Christmas. It's the narcissistic experience: "this is shit. this benefits you, not me. this hurts me."
One thing is sure, actual intelligence, regardless of how you may define it, something that is able to reason and speak freely, is NOT what people who fire engineers for correcting them want. It's not about a sort of oracle for humanity to enjoy and benefit from, that just speaks "truth".
South Sudan is some ridiculous thing where two rival generals are fighting for control. Are there any wars which are mostly about scarcity at the moment?
Even in Europe, extremists are propped up by the promise of "cheap energy" from Russia.
I guess if you don't see the link, this is not the place to explain it.
If I was an AI enthusiast, Softbank showing up would make me nervous.
That's why Israelis gladly handed back the Sinai desert to Egypt, but have kept Golan Heights, East Jerusalem, Shaba Farms, and continuously confiscate Palestinian farmlands in the West Bank.
There is nothing arbitrary or religious about which lands Zionists are occupying and which they're leaving to arabs.
But pulling out your phone to talk to it like a friend...
I don’t believe the control problem is solved, but I’m not sure it would matter if it is.
I don't even understand what the proposed mechanism for "rogue AI enslaves humanity" is. It's sci-fi (and not hard sci-fi) as far as I can see.
"if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.
The main military thing going on there (I was in Dahab, where there are endless military checkpoints) is Hamas-like guys trying to come over and overthrow the fairly moderate Egyptian government and replace it with a hardline Hamas-type Islamic dictatorship for the glorification of Allah etc. Again, it's not about reducing scarcity; it's more about increasing scarcity in return for political control. Dahab and Cairo are both a few hours' drive from Gaza.
Censorship is censorship is censorship.
To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.
On a tangential note, those who wish to frame this as the start of the great AI war with China (in which they regrettably may be right), should seriously consider the possibility of coming out on the losing end. China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
That said, this does look like dreadful policy at the first headline. There is a lot of money going in to AI, adding more money from the US taxpayer is gratuitous. Although in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.
and a bureaucratic one as well. in Germany, they want to trim bureaucratic necessities while (not) expecting multiple millions of climate refugees.
lots of undocumented STUFF (undocumented people have nowhere to go so they don't get vaccines, or proper help when sick, injured, mentally unstable, threatened, abused) incoming, which means more disease, crime, theft, money for security firms and insurance companies, which means more smuggling, more fear-mongering via media, more polarization, more hard-coding of subservience into the young, more financial fascism overall, less art, zero authenticity, and a spawn of VR worlds where the old rules apply forever.
plus more STDs and micro-pandemics due to viral mutations because people will be even more careless when partying under second-semester light-shows in metropolitan city clubs and festivals and when selling out for an "adventurous" quick potent buck and bug, which of course means more money pouring into pharma who won't be able to test their drugs thoroughly (and won't have to, not requiring platforms to fact check will transfer somewhat into the pharma industry) because the population will be more diverse in terms of their bio-chemical reactions towards ingredients in context of their "fluid" habitats chemical and psycho-social make-ups.
but it's cool, let's not solve the biggest problems before pseudo-transcending into the AGI era. it will make for a really great impression, especially on those who had the means, brains, skills, (past) careers, opportunity and peace of mind.
Regarding your question, yes. I'd prefer a healthy counterbalance to what we have currently. Ideally, I'd prefer cooperation, worldwide cooperation.
The intro paragraph at the original URL https://openai.com/index/announcing-the-stargate-project/ mentions US/America 5 times!
Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.
“Preprint out today that tests o1-preview's medical reasoning experiments against a baseline of 100s of clinicians.
In this case the title says it all:
Superhuman performance of a large language model on the reasoning tasks of a physician
Link: https://arxiv.org/abs/2412.10849”. — Adam Rodman, a co-author of the paper https://x.com/AdamRodmanMD/status/186902305691786464
—-
Have you tried using o1 with a variety of problems?
Maybe you have not seen the 2013 movie "H.E.R."? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.
Overall this is what I see happening, and I'm excited for some of it or possibly all of it to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen ... we'll see :)
So those framing it this way are correct, and we should be matching their momentum here ASAP?
Having said that, we do not need to understand the world to exploit it for ourselves. And what better way to understand and exploit the universe than science? It's an endearment.
I don't know if this will happen with any certainty, but the general idea of commoditising intelligence very much has the ability to tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.
Also, the question you're posing did happen: it was called the Manhattan Project.
Am I to conclude that we've had a comparably intelligent machine since 2012?
Given the similar performance between GPT4 and O1 on this task, I wonder if GPT3.5 is significantly better than a human, too.
Sorry if my thoughts are a bit scattered, but it feels like that benchmark shows how good statistical methods are in general, not that LLMs are better reasoners.
You've probably read and understood more than me, so I'm happy for you to clarify.
Perhaps it’s better that you ask a statistician you trust.
We already did. Look at the state of animals today vs <1 mya. Bovines grown in unprecedented mass numbers to live short lives before slaughter. Wolves bred into an all-new animal, friendly and helpful to the dominant species. Previously apex predators with claws, teeth, speed and strength, rendered extinct.
Who cares about the planet, anyway.
I agree in principle. And realistically, there is no way Altman would not be part of this consortium, much as I dislike it. But rounding out the team with Ellison, Son and Abu Dhabi oil money in particular -- that makes for a profound statement, IMHO.
If not, why is the study sufficient evidence for the LLM, but not sufficient evidence for the previous system?
Again, it feels like statistical methods are winning out in general.
> Perhaps it’s better that you ask a statistician you trust
Maybe we can shortcut this conversation by each of us simply consulting O1 :^)
To be explicitly clear, the US granting largess to tech companies for datacenters also counts as a misallocation in my view.
A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful, the courts clearly won't be able to cope unless you have AI powered courts too? None of how these monumental changes will work has been thought through at all, let's hope AI is smart enough to tell us what to do...
Did we see the same fallout from the space-race from a couple generations ago?
I don't think so — certainly not in the way you're framing it. So I guess I don't accept your proposition as a guarantee of what will happen.
So that means the models themselves aren't really IP - they are inevitable outputs from optimising using the input data for a certain task.
I think this means pretty much everyone, apart from the AI companies - will see these models as pre-competitive.
Why spend huge amounts training the same model multiple times, when you can collaborate?
Note it only takes one person/company/country to release an open source model for a particular task to nuke the business model of those companies that have a business model of hoarding them.
The current situation with Russia and China seems caused by them becoming prosperous. In the 1960s in China and 1990s in Russia they were broke. Now they have money they can afford to put it into their militaries and try to attack the neighbours.
I'm reminded of the KAL cartoon on Russia https://www.economist.com/cdn-cgi/image/width=1424,quality=8... That was from 2014. Already Russia is heading to the next panel in the cycle.
I don't think there is much point of reading the whole thing after the following:
"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”."
It’s sad to see the president of US being ass kissed so much by these guys. I always assumed there’s a little of that but this is another extreme. If this is true, I fear America has become like a third world country with a dictator like head of state where everyone just praises him and get favors in return.
The spoils of the space race would have gone to someone a lot like Musk. Or Ellison. Or Masayoshi Son. Or Sam Altman. Or the much worse old-moneyed types. The US space program was, famously, literally employing ex-Nazis. I doubt the beneficiaries of the money had particularly clean hands either
Trying to process this, but doesn't his fall from grace have more to do with him revealing his real personality to the world? Sometime around calling that guy a pedo. Not much bothers me, but at the very least his apparent lack of decision-making calls many things into question.
How many deaths did China's warmongering cause abroad?
[1] https://arstechnica.com/information-technology/2024/09/omnip...
And there could be a change in the law that allows people to discharge student debt in personal bankruptcy, and that could help make sure higher tuition doesn't happen.
You really DON’T need to centrally plan everything. The market will still find good solutions under the new parameters, but we need those parameters to change before we’re actually out of water.
From a national security PoV, surpassing other countries’ work in the field is paramount to maintaining US hegemony.
We know China performs a ton of corporate espionage, and likely research in this field is being copied, then extended, in other parts of the world. China has been more intentional in putting money towards AI over the last 4 years.
We had the CHIPS Act, which is tangentially related, but nothing as complete as this. For a couple of years, I think, the climate impact of data centers caused an active political slowdown under the previous administration.
Part of this is selling the project politically, so my belief is much of the talk of AGI and super intelligence is more marketing speak aimed at a general audience vs a niche tech community.
I’d be willing to predict that we’ll get some ancillary benefits to this level of investment. Maybe more efficient power generation? Cheaper electricity via more investment in nuclear power? Just spitballing, but this is an incredible amount of money, with $100 billion “instantly” deployed.
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
https://apnews.com/article/wind-energy-offshore-turbines-tru...
https://www.utilitydive.com/news/trump-offshore-wind-leasing...
AI-controlled cheap Chinese drones will start flying into their residences carrying some trivial-to-make high explosives. With the class wars getting hotter in the next few years, we may be saying that Luigi Mangione had the right ideas towards the PMC, but was an underachiever.
We're not spending money on AI as a field, we're spending a lot of money on one, quite possibly doomed, approach.
i guess what i'm asking is: what was the practical advantage of ascii or feet and knots that made them so important?
“AGI” has proven to be today’s hot marketing stunt for when you need to raise another round of cash and your only viable product is optimism.
Flying cars were just around the corner in the 60s, too.
AFAICT from this article and others on the same subject, the 500 billion number does not appear to be public money. It sounds like it's 100 billion of private investment (probably mostly from Son), and FTA,
> could reach five times that sum
(5x 100 billion === 500 billion, the # everyone seems to be quoting)
That's moving the goalposts and doesn't address the issue.
>They have been smart and done all their dealings via money.
You mean just like the country that issues the world reserve currency and whose intelligence agencies get involved in destabilizing regimes across the world?
It's worth keeping in mind how extremely unfriendly to tech the last admin was. At this point, it's basically proven in court that emails of the form "please deboost person x or else" were sent, and there's probably plenty more we don't know about.
Combine that with the troubles in Europe which Biden's administration was extremely unwilling to help with, the obstacles thrown in the way of major energy buildouts, which are needed for AI... one would have to be stupid to be a tech CEO and not simp for Trump.
Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
Is this how you make a constructive argument? Perhaps I was expecting too much from a joke account but this style of whataboutism is boring.
My post that you responded to set my premise, which was that China has its own form of colonialism that is quite different from America's, but it exists and it's quite strong. To classify China as a peaceful loving nation that respects other cultures is as if we were saying the US has never started a conflict. It's factually a lie. China has a long list of human rights issues; they factually do not respect other cultures, even within their own borders. I am not defending America but pointing out that China is not what the OP stated.
The difference is that Musk can do twice as much for 1/10 of what NASA thinks the program will cost, which is never what the program will actually cost, and Musk will do it in half that time to boot.
The guy is an unhinged manchild, but if what you care about is having your money well spent and getting to Mars as cheaply as possible, he's exactly who you're looking for.
Are you the kind of superficial petty person who needs to take jabs at the messenger's name and not the message itself?
And are you really in the position to throw stones from a glass house with that account name? If you had your real name and social media profiles linked in the bio I'd understand, but you're just being hypocritical, petty and childish here with this 'gotcha'.
> To classify China as a peaceful loving nation that respects other cultures
I never made such a classification. You're building your own strawmen to form a narrative you can attack, but you're not saying anything useful that contradicts my PoV and you're wasting our time. Since you're obviously arguing in bad faith, I won't converse with you further. Goodbye.
Well, on the other side it can be said that Big Tech wasn't really on the side of democracy (note: democracy, not the Democrat Party) itself, and it hasn't been for years - at the very least ever since Cambridge Analytica was discovered. The "big tech" sector has only looked at profit margins, clicks, eyeballs and other KPIs while completely neglecting its own responsibility towards its host, and it got treated as the danger it posed by the Biden administration and Europe alike.
As for the cryptocoin world that has also been campaigning for the 45th: they are an even worse cancer on the world. Nothing but a gigantic waste of resources (remember the prices of GPUs, HDDs and RAM going through the roof, coal power plants being reactivated?), rug pulls and other scams.
The current shift towards the far-right is just the final masks falling off. Tech has rather (openly) supported the 45th than to learn from the chaos it has brought upon the world and make at least a paper effort to be held accountable.
https://arstechnica.com/information-technology/2024/09/omnip...
The man has the moral system of a private prison and the money to build one.
I think reinforcement learning with little to no human feedback, O-1 / R-1 style, might be that revolution.
Ultimately, the breakthrough in AI is going to either come from eliminating bottlenecks in computing such that we can simulate many more neurons much more cheaply (in other words, 2025-level technology scaled up is not going to really be necessary or sufficient), or some fundamental research discovery such as a new transformer paradigm. In any case, it feels like these are theoretical discoveries that, whoever makes them first, the other "side" can trivially steal or absorb the information.
Let's be honest. He isn't wrong. I'd rather live in a society with zero crime than what we have now.
It won't just be on behalf of the powerful.
If lawyers are able to file 10x as many lawsuits per hour, the cost of filing a lawsuit is going to go down dramatically, and that's assuming a maximally-unfriendly regulatory environment where you still officially need a human lawyer in the loop.
This will enable people to e.g. use letters signed by an attorney at law, or even small claims court, as their customer support hotline, because that actually produces results today.
Nobody is prepared for that. Not the companies, not the powerful, not the courts, nobody.
What happens inside China is of no interest to me; it's their business. They have existed for millennia, they probably know how to manage themselves. They are not trying to expand outside of maybe Taiwan, they don't put their military bases in my country, they don't fund so-called "opposition", and that's good enough for me.
Wow! It is genuinely frightening that these people should be in control of our future!
There are a number of countries that might give you a panopticon state if you want one.
but also myriad of hardcore private repositories of many high-tech US enterprises hacking amazing shit (mine included) :)
All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.
Nice euphemism for giving people autonomy in their data and privacy.
Most of these companies are so large that they cannot really fail anymore. At this point it has very little to do with protecting themselves, and more with making themselves more powerful than governments. JD Vance has said that the US could drop support for NATO if Europe tries to regulate X [1]. Oligarchs have fully infiltrated the US government and are trying to do the same to other countries.
I disagree with the grandparent. They don't support Trump because they do not want to be on his bad side (well, at least not only that), they support Trump because they see the opportunity to suppress regulation worldwide and become more powerful than governments.
We just keep making excuses (fiduciary duties, he just doesn't know how to wave his arm because he's an autist [2]). Why not just call it what it is?
[1] https://www.independent.co.uk/news/world/americas/us-politic...
[2] Which is pretty offensive to people on the spectrum.
https://www.scientificamerican.com/article/climate-change-an...
Amazing to see how DeepSeek R1 is doing better than OpenAI models with far fewer resources.
Re H.E.R phone - I see people already trying to build this type of product, one example: https://www.aphoneafriend.com
... has anyone ever written a book about this? If not, I think I'm gonna call dibs.
An almost absolute incumbency advantage.
>what was the practical advantage of ascii or feet and knots
Familiarity. Americans and Britons speak English, and they wrote the rules in English. Everyone else after the fact needs to read English or GTFO.
Alternatively, think of it like this: Nvidia was the first to commercialize "AI" with CUDA. Now everyone in "AI" must speak CUDA or be irrelevant.
He who wins first writes the rules, runner-ups and below obey the rules.
This is why America and China are fiercely competing to be the first past the post so one of them will write the rules. This is why Japan and Europe insist they will write the rules, nevermind the fact they aren't even in the race (read: they won't write the rules).
""" SoftBank’s CEO Masayoshi Son has previously made large-scale investment commitments in the US off the back of Trump winning a presidential election. In 2016, Son announced a $50 billion SoftBank investment in the US, alongside a similar pledge to create 50,000 jobs in the country.
...
However, as reported by Reuters, it’s unclear if the new jobs pledged back in 2016 ever came to fruition and questions have been raised about how SoftBank, which had $29 billion in cash on its balance sheet according to its September earnings report, might fund the investment. """
- https://www.datacenterdynamics.com/en/news/softbank-pledges-...
The US will fare no better if it walks down this path, and honestly will likely fare worse for its cultural obsession with individualism over community.
Oligarchs want less regulation, but they also want these beefy government contracts. They want weaker government to regulate them and stronger government to protect them and bully other countries. Way I see it, what they actually want is control of the government, and with Trump they have it (more than before).
Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
This makes preventing the crime and protecting people from effects of these crimes extremely difficult.
MSFT or even Google (AWS is not as mature in that space imho) made perfect sense, Oracle doesn't.
Elon and Larry are good friends, I would guess that has something to do with this development.
Example: Cities are being presented a false choice between accepting deadly high speed chases vs zero criminal accountability [1], which in the world of drones seems silly [2]
I don't want the police to have unfettered access to surveil any and all citizens but putting camera access behind a court warrant issued by a civilian elected judge doesn't feel that dystopian to me.
Is that what Ellison was alluding to? I have no idea, but we are no longer in a world where we should disregard this prima facie.
[1]: https://www.ktvu.com/news/controversial-oakland-police-pursu...
[2]: https://www.cbsnews.com/sanfrancisco/news/san-francisco-poli...
And the market leader is what, 30%? about 1 order of magnitude. That's not such a huge difference, and I suspect that Oracle's size is disproportionate in the enterprise space (which is where a lot of AI services are targeted) whereas AWS has a _ton_ of non-enterprise things hosted.
In any case, 2-3% is big enough where this kind of investment is 1) financially possible, 2) desirable to grow to be #2 or #3
So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
2) As mentioned in the chart label, earlier systems require manual symptom extraction.
3) An important point well articulated by a cancer genomics faculty member at Harvard:
“….Now, back to today: The newest generation of generative deep learning models (genAI) is different.
For cancer data, the reason these models hold so much potential is exactly the reason why they were not preferred in the first place: they make almost no explicit data assumptions.
These models are excellent at learning whatever implicit distribution from the data they are trained on
Such distributions don’t need to be explainable. Nor do they even need to be specified
When presented with tons of data, these models can just learn, internalize & understand…..”
More here: https://x.com/simocristea/status/1881927022852870372?s=61&t=...
Corruption is as old as mankind; don't know why it's pointed out so prominently. Just look at that Xi Jinping/Biden photo from the National Archives.
'you should have spent all this time and money fighting climate change'
If you don't believe the US has elections, then straighten your tinfoil hat :)
Maybe you'll say next the earth is flat, if you think people have nothing better to do but to find ways to lie to you.
It's also not clear to me what happens to all of the derivatives based on student debt, though there may very well be an answer there that I just haven't understood yet.
Welcome to... choose among many of the technodystopies in literature.
If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.
I don't think that holds for a policy of non-intervention. People usually don't like that solution, especially when considering welfare programs, but it is fair to give no one assistance in the sense that everyone was treated equally/fairly.
Now its a totally different question whether its fair that some people are in this position today. The answer is almost certainly no, but that doesn't have a direct impact on whether an intervention today is fair or not.
When a few people get really rich it kind of slips through the gaps, the broader system isn't impacted too much. When most people get a little rich they spend that money and prices go up. Said differently, wealth is all relative so when most people get a little more rich their comparative wealth didn't really change.
The problem you're proposing could be solved via a high quality cellular network.
> How many countries has China invaded and bombed in the last 30 years?
> How many deaths did China's warmongering cause abroad?
You didn't answer those, just started hand-waving some stuff about China's "own form of colonialism" -- without even explaining what that is and how it works (which personally I'd be curious to hear about, and I do believe China *is* likely guilty of violence).
So you very clearly are the one guilty of shifting the goalposts, going on tangents, and bringing up usernames instead of real arguments.
Did I?
> Corruption is as old as mankind
Yeah, but seldom celebrated or boasted about.
What is far more important is to ignore all that nonsense and focus on who makes money. It will be Ellison and his buddies making tens of billions of dollars a year selling 'solutions' to local governments, all paid for by your property taxes. This also enables an ecosystem of theft, where others benefit a lot more: the nexus of private prisons, kids-for-cash judges (or judges investing in prison stocks), DEA/police unions, and small rural towns increasing their prison populations (because inmates get added to the total pop, and funds get allocated).
More importantly this is extremely attractive to police who can steal billions every day from civil forfeiture, they have access to anyone who makes a bank withdrawal or transacts in cash, all displayed in real time feeds, ready for grabbing!
What are you talking about via Europe? Holding tech companies accountable to meddling in domestic politics? Not allowing carte blanche to user data?
I understand (though do not like) large corps tiptoeing around Trump in order to manipulate him, it is due to fear. Not due to Trump having respectable values.
> Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.
There's also monetary policy, which is when the Federal Reserve does this on purpose. The general principle is the same, but instead it spends its money buying bonds and gets its money back selling those bonds, and it creates a bunch of rules about where banks keep their money so it always has some money on hand.
Too many greedy mouths. Too many corporations. Too little oversight. Too broad an objective. Technology is moving too quickly for them to even guess at what to aim for.
As long as you let universities act like for-profit businesses, their profits will be the only thing they optimize for.
A vessel traveling at 1 knot along a meridian travels one minute of geographic latitude per hour.
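If anyone wants to see why that definition is so convenient for navigation, a tiny sketch (using the standard 1852 m nautical mile):

    # 1 knot = 1 nautical mile per hour; 1 nautical mile = 1 minute of latitude (1852 m).
    NAUTICAL_MILE_M = 1852.0
    MINUTES_PER_DEGREE = 60

    def latitude_degrees_covered(speed_knots: float, hours: float) -> float:
        """Degrees of latitude covered sailing due north/south."""
        return speed_knots * hours / MINUTES_PER_DEGREE

    def metres_per_hour(speed_knots: float) -> float:
        return speed_knots * NAUTICAL_MILE_M

    print(latitude_degrees_covered(10.0, 6.0))  # 1.0 degree of latitude in 6 h at 10 kn
    print(metres_per_hour(10.0))                # 18520.0 m/h

No unit conversions needed at the chart table: speed in knots maps directly onto minutes of latitude per hour.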
"In a stunning display of fiscal restraint, Sam Altman only asks for $500 billion instead of his previous $7 trillion moonshot. Hackernews rejoices that the money will be spent in Texas, where the power grid is as stable as a cryptocurrency exchange. Oracle's involvement prompts lengthy discussions about whether Larry Ellison's surveillance dystopia will run on Java or if they'll need to purchase an enterprise license for consciousness itself. Meanwhile, SoftBank's Masayoshi Son continues his streak of funding increasingly expensive ways to turn electricity into promises, this time with added patriotism. The comments section devolves into a heated debate about whether this is technically fascism or just regular old corporatocracy, with several users helpfully pointing out that actually, the real problem is systemd."
100s of 1000s of jobs seems a bit exaggerated.
It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.
I agree with you that there is significantly more there there with AI, but I agree with the parent that the hype cycles are essentially indistinguishable.
Elsewhere, you worried that getting millions of people out of crippling debt due to a broken education finance system might tick up inflation.
Here, you worry that making society more educated via university training might decrease the economic value of a degree.
Where is the humanity? Of course some extreme of inflation is bad, and of course we want people to be employable. But artificial scarcity seems like a bad way to go about it.
(And I don't think we have a surplus of engineers in the country, judging by what I perceive to be the gap in talent between China and the US, and the moaning by tech about the need for H1Bs).
Didn't go well for South America in the 60s and 70s but perhaps, as economists are prone to saying, "this time will be different".
free college is just a giveaway to the wealthier third of our society and irresponsible with our current fiscal situation.
Crime is at historical lows.
Hard to predict!
If we've already hit it, this has already been a very short period of time during which we've seen incredibly valuable new technology commercialized, and that's nothing to sneeze at, and fortunes have and will be rightly made from it.
If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.
But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.
https://www.themarshallproject.org/2023/11/03/violent-crime-...
A Toshiba 4S reactor in the 50 MW version can cost about $3,000,000,000.
Two of those produce 100 MW.
They don't require refueling for around 30 years. $6,000,000,000 to power a 100 MW datacenter, when we're talking about $500,000,000,000, is not too dramatic, especially considering the amortized yearly cost.
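Spelling out that amortization with the figures above (capital cost only; financing, fuel handling, and O&M are ignored, and the $3B-per-unit number is taken at face value):

    # Amortized capital cost for two assumed 50 MW units over 30 years.
    UNIT_COST = 3e9            # assumed USD per 50 MW unit (figure from the comment)
    UNIT_POWER_MW = 50
    UNITS = 2
    LIFETIME_YEARS = 30
    HOURS_PER_YEAR = 8760

    total_cost = UNIT_COST * UNITS                       # $6e9 for 100 MW
    total_power_mw = UNIT_POWER_MW * UNITS
    yearly_cost = total_cost / LIFETIME_YEARS            # ~$200M per year
    lifetime_mwh = total_power_mw * HOURS_PER_YEAR * LIFETIME_YEARS
    print(f"yearly capital cost: ${yearly_cost / 1e6:.0f}M")
    print(f"capital cost per MWh at full output: ${total_cost / lifetime_mwh:.0f}")

Roughly $200M a year in capital, or about $230/MWh at full output, before fuel and operations.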
Personally, I do expect a big correction at some point, even if it never reaches the point of bubble bursting. But I have no idea when I expect it to happen, so this isn't, like, an investable thesis.
Not this specifically, but this kinda thing. If I am getting billions like this, I wanna keep this gravy going. And it comes from shareholders ultimately.
but regardless of the net balance of actions, it is clearly more interventionist than China has been up to this point
112 reactors.
A gigawatt each.
Over 10 years ago.
Many people dislike all billionaires, but some have escaped criticism more than others by successfully appearing to have some humanity left in them, like Gates and Cuban.
It's absolutely the second one; this is a commonality across many orgs I've talked to who cannot get their CPU requests met because of GPU spend.
If SoftBank's investment is limited to their available assets, or the exposure of each loan is limited to a portion of their real reserves, I think such an event will not happen (beyond money burned on a bad investment).
I think it would be quite similar to what has already happened on a global scale with the public money of each country in 2008 (due to the banking pyramid scam), or since 2011 with public loans to TEPCO, an event that could have been prevented if the plant had been built where originally planned.
The announcement was funny because they weren't quite sure what they are going to do in the health space. Sam Altman was asked, and he immediately deferred to Ellison and Masayoshi. Ellison was vague... it seems they know they want to do something with Ellison's massive stash of health data... but they don't quite know what they are building yet.
The fact that a handful of individuals have half a trillion dollars to throw at something that may or may not work while working people can pay the price of a decent used car each year, every year to their health insurance company only to have claims denied is insane.
Also if you are being $ focused then offer it where there is ROI: STEM, medicine (allow more doctors too).
Education doesn't lose its value if it is free. Does food and water? Shelter?
Unless people are just tuning out of their degree and it is just a social thing. In which case, deal with that specific problem.
I still have a pretty hard time getting it to tell me how many sisters Alice has. I think this might be a bit optimistic.
1) "We are serious, this is going to happen."
2) "AI is big right now so if we hype it we might get some money!"
This money is managed by small amounts of people but it is aggregated from millions of investors, most of these are public companies. The US spends over 10x that amount on healthcare each year.
The data centres were already being built. All of these companies have been dumping tonnes of money into AI and will continue to dump tonnes of money into AI. It's just more of the same, but they had to do a big announcement with Trump to pander to his ego and somehow make it about him. Like he engineered this Stargate thing. The whole embarrassing spectacle was likely arranged by Ellison.
I was bullish on OpenAI, but honestly I don't see any path forward where they have any differentiating value that justifies even a tenth of the valuation. Their video AI is simply terrible. Dall-E 2 is matched by many competitors. 4o and o1 are good, but have already been eclipsed by a number of competitors, including an open source Chinese option.
My work has almost entirely transitioned to competitors, and Google's latest updates have quietly absolutely trounced OpenAI's offering. Like, Gemini has quietly become the best AI platform in the game.
That's all neither here nor there, but I just don't care what Altman and crew have to say any more. They are not leaders in the space. They are, in many ways, has beens.
Technically you are correct. A ponzi is a single entity paying returns from new marks. It is a straight con.
But some systems can be ponzi-like in that they require more and more investment and people get rich by selling into that. Bitcoin is an example.
Yeah, that's why I mentioned the fed.
> It would mean that countries with low taxes have very high inflation and this is not the case.
It's about the total balance of government spending and taxes. The point being made is that tax breaks have the same effect as government spending. Recall that I was replying to
> Tax breaks, i.e. my money not being in your pocket means that they are stolen?
The government writing someone a million dollar check and the government giving someone a million dollar tax break (assuming they pay at least a million in taxes) both contribute to inflation by leaving the money supply a million dollars larger than it would be otherwise. Yes, the Federal Reserve is by far a larger driver of inflation, but the government giving this tax break still degrades the value of your money, same as if it wrote a check.
Of course, it is easy to view a tax break as a non-action, but that's exactly why the government gives so many tax breaks. Once you're taxing everyone, you can hand out tax breaks that's the same as handing out money only you can pretend that it's doing nothing.
Think of it as 3 Scenarios:
1) The island government writes a check to everyone except you, increasing their wealth by 50%.
2) The island government taxes just you for 50% of your wealth.
3) The island government taxes everyone 75% of their wealth, grants everyone but you a total tax break, and gives you a 25-percentage-point tax break.
Basically the same result, only in one they say "It was fair, and we handed out a few tax-breaks, what's wrong with letting people keep their money?"
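A rough numeric sketch of the three scenarios above, in Python. The numbers (10 islanders, 100 units of wealth each, tax revenue removed from circulation) are my own illustrative assumptions; the scenarios aren't identical to the decimal, but in every case your claim on the island's goods shrinks relative to everyone else's.

```python
# Illustrative comparison of the three island scenarios above.
# Assumptions (mine): 10 islanders, 100 units of wealth each,
# and tax revenue is removed from circulation rather than respent.

PEOPLE, WEALTH = 10, 100.0

def your_share(your_wealth: float, others_wealth: float) -> float:
    """Your wealth as a share of all money on the island."""
    return your_wealth / (your_wealth + (PEOPLE - 1) * others_wealth)

baseline = your_share(WEALTH, WEALTH)                # everyone equal: 10.0%
s1 = your_share(WEALTH, WEALTH * 1.5)                # 1) everyone else gets a 50% check
s2 = your_share(WEALTH * 0.5, WEALTH)                # 2) only you are taxed 50%
s3 = your_share(WEALTH * (1 - 0.75 + 0.25), WEALTH)  # 3) 75% tax, full break for others,
                                                     #    25-point break for you (net 50% on you)

print(f"baseline: {baseline:.1%}, 1) {s1:.1%}, 2) {s2:.1%}, 3) {s3:.1%}")
# baseline: 10.0%, 1) 6.9%, 2) 5.3%, 3) 5.3%
```

Scenarios 2 and 3 are arithmetically identical, and scenario 1 moves your relative purchasing power in the same direction, which is the point being made.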
Disclaimer: I work in a highly regulated industry and we are fine running our "enterprise" workloads in Azure (and even AWS, for a spinoff company in the same sector). Oracle has no specific moat in that area IMHO, unless you're already locked into one of their software offerings.
The thing about investments, specifically in the world of tech startups and VC money, is that speculation is not something you merely capitalize on as an investor, it's also something you capitalize on as a business. Investors desperately want to speculate (gamble) on AI to scratch that itch, to the tune of $500 billion, apparently.
So this says less about, 'Are we close to AGI?' or, 'Is it worth it?' and more about, 'Are people really willing to gamble this much?'. Collectively, yes, they are.
Instead of starting a new better world, we'll just stick with the old one that sucks because we don't want to be unfair. What an awful, awful way to look at the world.
All technological advances that get adopted are ones that made life easier, and for some people cooler, than what they were using before (the jump from cell phone to iPhone put the web in our pocket; using your iPhone while driving is dangerous, but talking to your human-like friend isn't). Check out the movie "Her": what I'm describing is mostly that.
Time will tell if any of what I'm saying comes to fruition, but Silicon Valley has been all abuzz about AI agents in the last month or two and going forward.
Musk cannot ban Chinese autos from the US market, but the government can. Same goes for TikTok: Zuck cannot force Americans not to use it. AI is the next battlefield, and further bans will be coming down the line to make sure the investment is protected.
I agree that our protectionist policies are bad and autarkic in nature.
The "free movement of capital" only ever seems to move the capital one direction: up to the people who needed the labor of others to reach such wealth.
Keeping BYD out won't help if all the other car makers also catch up. If BYD pulls far enough ahead they could just create a US division and make cars here like Toyota and Nissan do. Nissan is on the ropes, so maybe BYD could just buy them and make that their US brand and get all their factories and supply chains.
If Tesla kept iterating on the Model 3, released the Model E, and Musk stayed out of divisive politics that alienate customers, Tesla had a chance to own the mainstream of the US auto market for the next 50 years. I'd say that's gone now.
I am sorry that you feel you are downwardly mobile, but you should not assume your experience generalizes.
Because everyone's buying everything online and getting it delivered to their homes.
This is, in fact, a generalized experience: [0]
[0]https://www.pewresearch.org/social-trends/2019/02/14/millenn...
Maybe at some point they are going to AI themselves out of climate change. Well.. except for the part where they don’t believe in man-escalated climate change.
This is disputed [1]. In reality, a handful of individuals have the capital to seed a half-a-trillion-dollar megaproject, which then requires the project to raise capital from more people.
[1] https://www.wsj.com/tech/musk-pours-cold-water-on-trump-back...
There are lots of problems with our current approach to healthcare, but insurers aren't charging you way more than what it costs them to be the counterparty on that contract.
Your article is from 2019. We're now "wealthier than previous generations were at [our] age" [1].
[1] https://www.wsj.com/personal-finance/millennials-personal-fi...
I think if you gave people a legitimate choice to go back to 1980 (and let them take their friends, say), we would see the revealed preference. Certainly if you did it for a year and then gave them the option to come back.
au·tar·ky /ˈôˌtärkē/ noun: economic independence or self-sufficiency ("rural community autarchy is a Utopian dream"); a country, state, or society which is economically independent. Plural: autarkies (also autarchies).
There is nothing saying a socialist country can't produce goods and services, and sell them.
That's certainly not a thing that's ever happened before. /s
Does no one on HN believe in this anymore? Isn't this tech startup community meant to be the tip of the spear? We'll find out by 2030 either way.
It's divided by whether you own real estate or equities.
Immigrant homeownership is starkly lower than native-born Americans' [1].
We're probably going to see a surge in that disparity, now, given the immigrant workforce that builds and renovates houses is in the process of being gutted. That increases the value of existing stock.
[1] https://www.jchs.harvard.edu/sites/default/files/research/fi...
That being said, it seems to reference property owners. Hell, if I'd had the money to buy a house prior to the pandemic, I would have. I didn't because of constant reorgs at my employer at the time, which resulted in hiring freezes and reduced raises. The goal behind these was to make the company attractive to buyers. Eventually, they did find one: Oracle. They've since gutted what was a major employer for my region.
Since the pandemic, housing has skyrocketed and pay hasn't kept up. Pay has been stagnant for 40 years while economic output has risen, along with COL [0].
Where'd all of the value go?
(that's a rhetorical question)
[0]https://www.consumeraffairs.com/finance/comparing-the-costs-...
Providing a turnkey HIPAA-compliant but modern health dataverse would be huge.
Yes. Millennials own property at the highest rate, age-adjusted, in generations. (Anecdote: am a Millennial. Own a home. Most of my friends do, too. Yes, it's a bubble, but it's a big one.)
> Where'd all of the value go?...(that's a rhetorical question)
No, it's not. It went to the people who bought houses. Including between 2019 and 2024.
Which generation's mode reached home-buying age in that interval, an interval also generously sprinkled with massive stimulus, a stock-market boom and forced consumption-reduction through stay-at-home orders? (That is a rhetorical question.)
To your point, yeah, the models still suck in some surprising ways, but again it's that thing where they're the worst they're ever going to be, and I think on the reasoning issue in particular a lot of people are quite excited that RL over CoT is looking really, really promising for this.
I agree with your broader point though that I'm not sure how close we are and there's an awful lot of noise right now
But if you think about it, an unstated yet necessary prerequisite is that the definition of "crime" must be morally aligned with what is right. If it's not, well then you're living in a dystopia. Imagine a world where slavery is still legal and being a runaway slave is a crime. How do people like Frederick Douglass escape and survive long enough to make a difference?
And that's before we get into the prerequisite that such a state must apply the laws completely evenly, with no special tiers based on class, wealth, political connection, celebrity status, etc., which AFAIK has never been done. Given the leadership, it doesn't look like it's going to happen anytime soon. IMHO it's heavily contrary to human nature and just won't be achievable short of altering human nature.
Yeah, obviously the whole thing makes no fucking sense.
Success is dangerous.
Have we not seen enough of these people to know their character? They're predators who, from all accounts, sacrifice every potentially meaningful personal relationship for money, long after they have more than most people could ever dream of. If we legalized gladiatorial blood sport and it became a billion-dollar business, they'd be doing that. If monkey torture porn was a billion dollar business they'd be doing that.
Whatever the promise of actual AI (and not just performative LLM garbage), if created they will lock the IP down so hard that most of the population will not be able to afford it. Rich people get Ozempic, poor people get body positivity.
Which is - no doubt - an astonishing achievement, but absolutely not like the "AI" hype train people try to paint it.
The "rapidly approaching" part is true in terms of the velocity, but all of this are just baby steps while walking upright properly is way beyond the horizon.
I wouldn't mind being wrong about this, of course.
South America didn't have a mix of domestic and foreign investors deploying massive quantities of private money into capital assets in the 60s and 70s. They had governments borrowing to fund their citizens' consumption. Massive difference on multiple levels.
Income, not wealth. Particularly not after inheritances transfer.
I'm not saying "this time will be different". I'm saying this is business as usual.
It's not only Trump. Before leaving Biden already ordered the DoE and the DoD to lease sites for data centers and energy generation. The only reason we don't see a "Department of AI" or a "National AI Agency" is due to how the military industrial complex works, and a lot of lobbying I'm sure.
[1] https://www.scmp.com/tech/policy/article/3295662/beijing-mee...
[2] https://www.insideglobaltech.com/2025/01/20/biden-administra...
[3] https://www.utilitydive.com/news/biden-doe-dod-lease-sites-a...
[4] https://www.technologyreview.com/2025/01/21/1110269/there-ca...
I think OpenAI was originally founded against that kind of force. Autocratic governments becoming masters of AI.
One of the broken parts of the American system is permitting. Trump can sidestep that by letting this be built on federal land. That, in turn, unlocks investment.
Beyond that, DoD and DoE are massive buyers of compute. Seeding the venture with purchase agreements from them de-risks the project further.
Finally, by Trump putting his name to it he assigns it his bully pulpit's prestige. (Though that doesn't appear to have carried over to Musk, who's already taking pot shots at it.)
ASI is basically a god. This is the ultimate solution (or problem). It will push us to the singularity and create a utopia or drive humanity to extinction. Imagine someone so smart they would win every single Nobel Prize available and make multiple discoveries in a single year. Now multiply that person's intelligence by 100 (most likely more, but 100 is already hard enough to grasp). There's no point in investing in anything else. An investment in ASI is an investment in everything (it could be a bad one, though, depending on the outcome).
The government is banking on being able to control it, which is also pretty funny. It's like a pet hamster thinking they can dictate what a human does.
A graph of the stocks for UnitedHealth, Elevance (formerly Anthem) and Cigna shows that they're all on the growth track for the last five years.
If a subscriber pays them what they do, and they don't have money to pay a claim declared medically necessary by a medical doctor, but do have the money to forward to a retirement fund, they are charging too much.
Most of the rest of the industrialized world seems to grasp this concept, and their people live longer.
This is first-mover industrial development being funded by private actors looking out for a return on their investments. South America saw nothing similar--it was duplicating others' industrialisation with state capital (often borrowed from overseas) while spending massively on handouts.
Age-adjusted?
So if you take out the fact that it took up more of the one resource that matters more than anything else to become property owners, then, yes, Millennials have more of it.
Which is kind of proving my point.
I'd give the money to folks starting in the trades before bailing out the college-educated class.
Also, wiping out numbers on a spreadsheet doesn't erect new homes. If we wiped out student debt, the portion of that value that went into new homeownership would principally flow to existing homeowners.
Finally, you're comparing a government hand-out to private investment into a capital asset. That's like comparing eating out at a nice restaurant to buying a bond. Different prerogatives.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
Zuckerberg lost $30bn or more trying to create a VR amusement park. Scale that up to $500bn and see how much waste and dead-weight losses are created.
Ask before assuming.
Age-adjusted means asking how wealthy each generation was when they were the same age. A Boomer today is wealthier than a Millennial because they've had more time to accumulate. But when a Boomer was Millennial-aged, she had on average less wealth than a Millennial today.
I do find it impressive that SpaceX engineers figured out reusable rockets and that we can now send things to orbit more cheaply. But in all seriousness, should we care about getting to Mars cheaply? Or do people care because Musk came along to convince them (and the US government) to invest in this venture of his?
"Up to $500bn" is business as usual for Silicon Valley post-2021.
I'm one of the people who paid off a large portion of debt and probably don't need this assistance. However, this argument is so offensive. People were encouraged to take out debt for a number of reasons, and by a number of institutions, without first being educated about the implications of that. This argument states that we shouldn't help people because other people didn't have help. Following this logic, we shouldn't seek to help anyone ever, unless everyone else has also received the exact same help.
- slaves shouldn't be freed because other slaves weren't freed
- we shouldn't give food to the starving, because those not starving aren't getting free food
- we shouldn't care about others because they don't care about me
These arguments are all the greedy option in game theory, and all contribute to the worst outcomes across the board, except for those who can scam others in this system.
The right way to think about programs that help others is to consider cooperating - some people don't get the maximum possible, but they do get some! And when the game is played over and over, all parties get the maximum benefit possible.
In the case of student debt, paying it off and fixing the broken system, by allowing bankruptcy or some other fix, would benefit far more people than it would hurt; it would also benefit some people who paid their loans off completely: parents of children who can't pay off their loans now.
In the end the argument that some already paid off their debts is inherently a selfish argument in the style of "I don't want them to get help because I didn't get help." Society would be better if we didn't think in such greedy terms.
All that said - there are real concerns about debt repayment. The point about emboldening universities to ask for higher tuition highlights the underlying issue with the student loan system. Why bring up the most selfish possible argument when there are valid, useful arguments for your position?
2. There is no commitment to spend in a single year
3. There is no actual contractual commit here, this is a press release (i.e. Marketing)
4. There is not actually a $500B pile of gold being spent. This is more of a "this is how big we think this industry will be and how much we may spend to get exposure to that industry"
[1] https://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile...
My dad was basically expected to work the farms his entire life, and school ended at the 3rd grade where he grew up. He moved to the US, became a chess master, and went to one of the best colleges in the country. Impossible where he was from, and it really shows how stupid and zero-sum-minded old-world elites are compared to US/Anglo culture.
How is free college a giveaway to the wealthier third of society? For starters, I can assure you the wealthy care a lot about the name of the institution issuing the diploma, and they can afford it. They'll happily front extra cash so their kids can network with people of similar economic status.
The venture was announced at the White House, by the President, who has committed to help it by using executive orders to speed things up.
It might not have been voted on by Congress or whatever, but just those things make it pretty clear the government provides more than just "moral support".
As Elon said, they don't have the money.
Talk is cheap.
$500B is not.
Stock price != profitability, but you're still correct. UnitedHealth's operations have churned out cash in each of the last four years [1], as have Cigna's [2] and Elevance's [3]. Underwriting gains across the industry have been strong for years [4]. The only story I can think of where American health insurers lost money was Aetna with its underpriced ACA plans [5].
That said, whimsicalism is also partly right in that insurers aren't the cause of the unaffordability of American healthcare. They by and large pay out most of their premiums. (With some variance.)
[1] https://finance.yahoo.com/quote/UNH/cash-flow/
[2] https://finance.yahoo.com/quote/CI/cash-flow/
[3] https://finance.yahoo.com/quote/ELV/cash-flow/
[4] https://content.naic.org/sites/default/files/2021-Annual-Hea...
[5] https://spia.princeton.edu/news/why-private-health-insurers-...
Let them be discharged in bankruptcy. The system will fix itself around that.
Also the guy disputing it is trying to regain control of an entity that he was too distracted to hold to its original mission, is on record as agreeing with the statement that Jewish people are the enemies of white people, takes copious amounts of mind-altering substances daily, has lost billions of dollars on purchasing a company that had a path to (modest) profitability, and did what could easily be seen as a Roman salute at an inauguration speech. Maybe he's not a great source of statements on objective reality, even within the AI industry.
With regard to the monetary amount, understand, once you reach a certain point, the amount of capital held by the quantity of individuals we're talking about is immaterial. Any capital they raise is usually derived from the labor of others and they operate a racket to prevent any real competition for how that capital is distributed by the labor or the customers who are the source of their actual wealth. The average Oracle employee (I know a few), for example, probably has a few more immediate things they want the surplus value of their labor to be spent on than Larry's moonshot. However, he ultimately controls the direction of that value through a shareholder system that he can manipulate more-or-less at-will through splits, buybacks, and other practices.
His customers would probably also like to pay less for what are usually barely Web 2.0 database applications. Of course, he has the capital to corner markets and shove competition out of the space.
All of this is to say when you reach this amount of money in the hands of one individual, they're more likely to regularly harm people than beat the odds on their next bet in a way that actually uplifts society, at least in a way that could beat the way just disbursing that capital among those who created it could.
> If a subscriber pays them what they do, and they don't have money to pay a claim declared medically necessary by a medical doctor, but do have the money to forward to a retirement fund, they are charging too much.
If it is only legal to lose money on providing insurance, nobody would do it.
> Most of the rest of the industrialized world seems to grasp this concept, and their people live longer.
I agree that there are problems with cost/performance in our healthcare market. I think it is largely due to overutilization & misallocation, combined with some poor genetic/cultural luck around opioids and obesity.
0: https://content.naic.org/sites/default/files/industry-analys...
The rest is fees from the Panama (EDIT: Suez) Canal and tourism. Getting into a war, particularly with a country on the Red Sea, is suicide. (Also, the main flash point between Egypt and Ethiopia has receded since the GERD finished filling.)
You'd think the healthy working population wouldn't be that much of a burden to care for as well, but they have to go out of pocket and get insurance to provide for themselves after providing for everyone else.
There is a lot of graft going on for this to be the case. It may not be the fault of insurance companies but someone is stealing a great deal of money from the American people.
Now here's the million-dollar question: are you aware of this obvious fact? Have you ever heard someone frame the socialized-medicine debate this way: "If we could be as efficient as the UK, we could give you free healthcare AND cut your taxes!"? If not, why not?
[0]https://www.statista.com/statistics/283221/per-capita-health...
The last few months, between TikTok ban, RedNote, elections, United Healthcare CEO, etc I’ve seen so many people compare the US to China, and favor China. Which is of course crazy because China has things like forced labor and concentration camps of religious minorities, and far worse oppression than the US. But many people just view everything coming out of the US Gov’s mouth as bad.
Is the Chinese government worse than the US government? Probably. Do people universally think that still? Not really. The US Gov will have to contend with the reality that people -even citizens- are starting to view them and not their “enemy” as the “Bad Guys”.
This is a valid conflict of interest. That means we should closely scrutinize his claims. From what I can tell, he's added up correctly in respect of the named backers' wealth and liquidity.
> a paper that is literally named after a place where the vast majority of people have never had to do any real labor in their lives
Yes, we should ignore bankers when it comes to questions about money...
Do you have an actual claim? Or is it all ad hominem?
> capital they raise is usually derived from the labor of others and they operate a racket to prevent any real competition for how that capital is distributed by the labor or the customers who are the source of their actual wealth
They're capitalists, ergo they can raise unlimited wealth?
If you have more wealth, you can theoretically purchase more goods and services than if you had less.
The exception to this, of course, is if the goods and services cost more, and for things that you need to exist in American society (healthcare, education, transportation, housing, food), those things generally cost several times more for younger people than they did, "age-adjusted", when their parents were the same age, often with a difference that is more than that in wealth. That's why wages have been flat.
There's also the question of how that wealth is distributed among the generations and how it's stored. If the property-owning Millennial owns a few rental properties that their peers have to pay to live in, the "average" properties owned by the group can be the same (or even higher) but the number of people those properties are spread among is lower.
There's also the fact that lots of wealth is held in the casin... er... stock markets as people need to participate in those markets with their 401(k)s to be able to retire some day. You can't sleep in a stock certificate, but if you want to have any savings, it's easier to enter the equities market than it is to get into real estate from a startup cost perspective. People are having to compromise the "stability" of their fundamental needs (like housing) in order to grow more abstract definitions of wealth.
[0]https://www.worldometers.info/demographics/life-expectancy/
The problem here being that it was money spent that was never earned back, and money that eventually had to be paid back, right?
This can also happen with private capital. 2008 was a bust caused by private banks, for example. AI hasn't proven to be profitable yet [1], and I'm not sure it makes a difference to the success of projects like this whether the money is coming from government or not.
In fact, considering the 2008 bank bailout, the auto-industry bailout, the Silicon Valley Bank prop-up, and other such actions by the US government [2], if this turns out to be a bubble it will be taxpayers who end up footing the bill.
[1] https://www.cbc.ca/news/business/ai-generative-business-mone...
[2] https://www.investopedia.com/articles/economics/08/governmen...
Which is why these figures have been inflation adjusted.
> lots of wealth is held in the casin... er... stock markets
Pretty sure Boomers hold more stocks than Millennials. This is an argument for Millennials being even better off than the statistics show.
> People are having to compromise the "stability" of their fundamental needs (like housing) in order to grow more abstract definitions of wealth
Yes. But that doesn't broadly describe Millennials, and it describes more people in older generations when they were at present-day Millennials' age.
You're trying to argue against facts with philosophy.
On the previous examples, I can see how language gave native speakers an advantage in becoming familiar with the technology, but with AI I'm not seeing anything that would give Americans an advantage over everyone else, besides controlling access to the tech.
The reason I'm insisting on this is that I feel the argument has merit, but I have yet to grasp how it applies to these technologies.
As far as I am aware the only information from within OpenAI one way or another is from their financial documents circulated to investors:
> The fund-raising material also signaled that OpenAI would need to continue raising money over the next year because its expenses grew in tandem with the number of people using its products.
Subscriptions are the lion's share of their revenue (73%). It's possible they are making money on the average Plus or Enterprise subscription, but given the above claim they definitely aren't making enough to cover the cost of inference for free users.
https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...
Well, not every other nation, but I know what you mean.
Other nations are much better at managing overutilization by denying care where it is not needed. The US insurance system shields people from cost and encourages overutilization due to a number of stupid policy choices (i.e., refusal to have 'death panels' like Canada/UK, but also refusal to do away with massive public subsidy of health expenditure).
For a personal story, my parents basically get free MRIs from the state for little reason, whereas people I know have to pay an arm and a leg for MRIs because their insurance is worse. At minimum, we could make my parents also pay an arm and a leg for useless MRIs, and doctors would stop encouraging them or lose patients.
In part. It was money borrowed by the state. That means when it can't be paid back, it's automatically a systemic issue. And it was money borrowed to fund consumption. There was no good reason to ever expect it to be paid back because it wasn't funding productive activity.
> if this turns out to be a bubble it will be taxpayers who end up fronting the bill
Very possibly, particularly if part of the package are e.g. federally-subsidised loans. Before that, however, private parties will almost certainly lose tens if not hundreds of billions of dollars. That cushion, together with those parties being spread between domestic and foreign sources, is what makes this less risky to the United States than similar relative-magnitude projects in South America. (Plus the fact that this is a capital asset versus consumption.)
Again, consider my example about YouTube - it's not illegal for Google to put pornography on YouTube. They still moderate it out though, not because they want to "censor" their users but because amateur porn is a liability nightmare to moderate. Similarly, I don't think ChatGPT's limitations qualify as censorship.
[0]https://www.vox.com/2014/9/4/6104533/the-125-percent-solutio...
[1]https://www.opensecrets.org/federal-lobbying/industries/summ...
[2]https://www.fiercepharma.com/marketing/hey-big-spenders-phar...
"Censorship is censorship is censorship" is the sort of defense you'd rely on if you were caught selling guns and kiddie porn on the internet. It's not the sort of defense OpenAI needs to use though, because they have a semblance of self-preservation instinct and would rather not let ChatGPT say something capable of pissing off the IMF or ADL. Call that "censorship" all you want - it's like canvassing for your right to yell 'fire!' in a movie theater.
The Snowflake-for-health is more about opening EHR data for operational use by providers and facilities.
Versus being locked into respective EHR platforms.
If Oracle provided a compelling data suite (a la MS) within their own cloud ecosystem, they'd have less reason to restrict it at the EHR level (as they'd have lock-in at the platform level), which would help them compete against Epic (who can't pivot to openness in the same way, without risking their primary product).
But even the smartest people still get it wrong on occasion.
edit: Bezos doesn't own the WSJ. I'm wrong.
Haven’t all three examples you note (2008 crash, auto bailout, and SV prop up) resulted in a net return/gain for the taxpayer?
It's like when people claim that other countries have worse medical systems because they have to wait, as if my friend didn't just wait 2 months for a simple injection recently, and my mom isn't waiting 2 weeks for an MRI after a stroke.
The vast majority of people who insist we have the greatest healthcare don't even go to the doctor's regularly. Because they were raised in a system where going to the doctor is something you have to weigh the cost of! We have worse medical outcomes simply because people wait until a cheap situation turns into a shitty and painful and expensive situation.
2) If the government gives everyone a tax break but not me, it means only that the government taxes only me.
3) If everyone has 50% more money, there is a very high probability that my business will go up A LOT.
Seriously, dude, it's not worth it anymore to try to explain very basic stuff to you. Inflation is not a balance between taxation and spending. All Middle Eastern countries have huge spending and almost zero taxes. I asked you a very simple question and you couldn't answer.
What bothers me most is why people write about things they have no clue about and clearly haven't even put a decent thought into.
Basically what you believe is that the thieves are controlling inflation because they get some of the citizens' wealth.
The people telling you that there is an immense wave of shoplifting are outright lying.
Economies of scale should make them cheaper. An MRI machine and technician that sits there unused half the day has to charge more per visit than one used all day long. Have too many customers? Get more machines and techs, now the MRI manufacturer is making more units, offering volume discounts...
Rationing of care doesn't explain why the individual units of care are themselves much more expensive. Compare inhaler prices in Canada vs the US, $10 in Canada $100 here[0], that isn't because too many of them are given out. It's theft.
Addendum: Further, the young and healthy ration their care quite a bit under the current system, they are taxed too heavily (to pay for the care of the elderly) to afford it for themselves so they go without.
[0]https://www.usnews.com/news/healthiest-communities/articles/...
If you look at the original video [1], starting at 1:09:00, he's talking specifically about police body/dashcams recording interactions with citizens during callouts and stops, not everyone all the time as that article strongly implies. The USA already decided to record what police see all the time during these events, so there's no new privacy issue posed by anything he's suggesting. The question is only how those videos are used. In particular, he points out that police are allowed to turn off bodycams for privacy reasons (e.g. bathroom breaks), which is a legitimate need but it can also be abused, and AI can fix this loophole.
In the same segment he also proposes using AI to watch CCTV at schools in real time to trigger instant response if someone pulls out a gun, and using AI to spot wildfires using drones. For some reason the media didn't condemn those ideas, just the part about supervising cop stops. How curious.
[1] https://www.oracle.com/events/financial-analyst-meeting-2024...
Since when does "private capital" speak in such honeyed tones to state powers?
https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-t...
Groupthink capital, directed by mostly 2 "thought leaders".
Economies don't like Groupthink Capital, regardless of it being private, public or a combination of the 2
Of course the US economy as a whole is huge so even billions can be absorbed, once you start talking about half a trillion though...
The demand for more AI compute is already here and is less risky of an investment.
"Centralized planning" was effective under Bell Labs
I used to wonder how the hundreds of thousands of employees who work in Big Oil or Big Pharma could tolerate all the terrible things their companies do, e.g. the opioid epidemic. The naive optimist in me never thought that the tech industry would ever be that bad.
Now, as someone that's been in the industry for 10+ years and works adjacent to LLMs, I find this all so depressing. The hype has gotten out of control. We are spending hundreds of billions of dollars on things that simply are not making life better for the majority of people.
You need someone willing to shop and pick the cheaper option for competition to bring down prices. You also need someone willing to say "that's too expensive, I won't buy it" and walk away. The same is true for the inhalers. If someone will pay $100 before switching to the generic, that is what they get charged. In Canada, the state is only willing to pay $10, so that is the price. This is the demand side of the problem.
There is also a supply problem, where the state provides medical company monopolies through "certification of need". It is basically illegal to open an MRI clinic that would compete with an existing one in many jurisdictions.
https://radiologybusiness.com/topics/medical-imaging/magneti...
- there is a shortage of housing
- predatory loans for higher education
- chronic health crisis due to terrible government health policy and guidelines
- globalization has led to an international labor market
The last point may be bad for many Americans but an unequivocal good for the world. Global poverty has seen an incredible drop in the past 70 years. https://en.wikipedia.org/wiki/Extreme_poverty#/media/File:Wo...
I mean, I can see how numbers wise this decision makes sense.
It's private money. CEOs will say whatever they need to say to achieve goals (here, favorable conditions for AI work), look at what the actual money flows say.
The one I really don't get is that they funded Adam Neumann's new company after the collapse of WeWork. How stupid do you have to be to give that guy any more money?
$500B is a lot - no matter how you slice it.
It's a lot easier to talk pie in the sky than to actually get $500B to spend.
Mine is. It's about incentives. Now you can take it from there, and at least in my interpretation the rest of your rebuttal falls apart.
There is absolutely no equivalency to slavery. That is simply dishonest. Slaves didn't choose to be slaves. Do students who take on debt have no agency whatsoever to you? Did the people who paid such debts have no agency when paying?
It is a fact that wages have remained stagnant for four decades.
It's also a fact that the wealth gap is growing between rich and poor, and that's what's distorting the figures you're citing. That's the only way, mathematically, you see wages remain flat while seeing wealth rise.
Look deeper at your facts, instead of letting them be tainted by your philosophy.
There are more low-income people in private universities (with private or private/public loans) than in public universities.
[0]: https://www.yahoo.com/news/elon-musks-drug-becoming-problem-...
There's actually lots of taxes that aren't income or sales tax
> if everyone has 50% more money, there is very high probability that my business will go up A LOT
No, you'll be getting 50% more money, but each unit of money will be worth correspondingly less.
> Inflation is not a balance between taxation and spending.
It is for the US federal government.
> All Middle Eastern countries are having huge spending and almost zero taxes.
Those countries peg their currency to the dollar. Their money doesn't come from taxes, but instead from state oil companies. These countries aren't as free to hand out money like the US. If enough people tried to exchange their Saudi Riyals for dollars quick enough, and the Saudi government couldn't gather US dollars quick enough, their currency would very quickly collapse.
The first is sort of correct for a very specific slice of America, those just above the welfare cutoff (for whom real wages have been flat to negative, assuming we scale up housing preferences and add in costs that didn't make sense before, e.g. internet and cell-phone bills). The second, about rising inequality, is true throughout.
Neither advances your argument, however: one can be better off while others are much better off, and most of a population can be better off while some are worse off. (Observe the median Millennial and the statistics stand. Millennials are rich, in part because we're going to stick Gen Alpha with the bill.)
If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc). It encourages people to mistake the capabilities of the models and ascribe to them all of the traits of AI they've seen in TV and movies.
Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.
[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...
[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...
We know that the idea of a rational agent in economics is a myth, and as you mentioned, it is about incentives, as well as motives.
Students who take on debt that limits them later in life don't have all the information they need at the time they make the decision. Saying the information is available is not reasonable. These students are told they _must_ go to college to make a living.
They are not told they need to get an engineering, medical, or finance degree to make going to college worth it, economically.
They are shown all the loans they can get without an equivalent amount of effort put into educating them about the consequences those loans represent. For example, how much the loans will cost in the long run, along with estimated pay for various fields of study.
Furthermore, the loans are given for any degree program without restriction.
All the comments I made about game theory still stand, and we don't need to get into the myriad problems with our education and student loan systems. I agree they aren't perfect; I just think the argument 'I didn't get my loans paid off neither should you' is an extremely selfish one. Just because someone suffers doesn't mean everyone should. Also - in my experience people who are ready to make that selfish argument are very offended when it gets flipped on them. So they can understand intuitively the issue with the selfish position.
Getting 150 apples once is better than nothing but still doesn’t fix the problem.
It's no different than sports stadiums selling the same idea to local governments:
"We'll create jobs."
"Mostly low quality jobs with poor benefits and minimum wages."
This is no different.
Whenever you hear about job claims you should be asking what quality of jobs?
Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?
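On the single-key question, a rough back-of-envelope suggests the answer is no for well-chosen symmetric keys; the numbers below (500,000 GPUs, a very generous 10^10 key trials per second per GPU) are my own assumptions, not figures from the thread, and weak or poorly generated keys are a different story.

```python
# Back-of-envelope: can ~500k GPUs brute-force a single 128-bit key?
# Assumptions (mine): 500,000 GPUs, each testing a generous 1e10
# candidate keys per second against something like AES-128.

keyspace = 2 ** 128            # possible 128-bit keys (~3.4e38)
gpus = 500_000
keys_per_gpu_per_sec = 1e10    # very optimistic per-GPU trial rate

expected_trials = keyspace / 2                     # on average, half the keyspace
seconds = expected_trials / (gpus * keys_per_gpu_per_sec)
years = seconds / 3.154e7

print(f"expected time: {years:.2e} years")                  # ~1e15 years
print(f"vs. age of the universe: {years / 1.38e10:,.0f}x")  # tens of thousands of times
```

So raw exhaustive key search isn't where that compute would pay off; any cryptanalytic value would have to come from attacks on weak keys, flawed implementations, or shorter/legacy key sizes.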
For example, it isn't about what you can do tinkering in your home or garage anymore, or what algorithm you can crack with your own intrinsic worth to create more use cases and possibilities; it's about capital, relationships, hardware, and politics. A recent article that went around, and many others, argue that capital and wealth will matter more and make "talent" obsolete in the world of AI; the large figure in this article just adds weight to that hypothesis.
All this means the big get bigger. It isn't about startups, grinding hard, working hard, being smarter, etc., which means it isn't really meritocratic. This creates an uneven playing field that is quite different from previous software technology phases, where the gains (and access to the gains) were more distributed and democratized, and mostly accessible to the talented and hard-working (e.g. the risk-taking startup entrepreneur with coding skills and a love of tech).
In some ways it is kind of the opposite of the indy hacker stereotype who ironically is probably one of the biggest losers in the new AI world. In the new world what matters is wealth/ownership of capital, relationships, politics, land, resources and other physical/social assets. In the new AI world scammers, PR people, salespeople, politicians, ultra wealthy with power etc thrive and nepotism/connections are the main advantage. You don't just see this in AI btw (e.g. recent meme coins seen as better path to wealth than working due to weak link to power figure), but AI like any tech amplifies the capability of people with power especially if by definition the powerful don't need to be smart/need other smart people to yield it unlike other tech in the past.
They needed smart people in the past; we may be approaching a world where the smart people make themselves as a whole redundant. I can understand why a place like this doesn't want that to succeed, even if the world's resources are being channeled to that end. Time will tell.
And frankly, some of the most effective altruism may be just to directly give cash to people, yet I don't know how many people in the EA community would trust people so much with unconditional cash.
I would put that more on a failure of culture to value healthy living and activity. I wouldn't call that the responsibility of the government. Perhaps lack of clarity on ownership is related to the crisis itself.
1) Leibniz wasn't superhuman.
2) Leibniz couldn't work 24/7.
3) He could not self-increase the speed of his own hardware (body).
4) He could not spawn 1 trillion copies of himself to work 24/7.
Like how much time did you think before writing this
Redistribution of wealth is tricky and almost certainly runs into the same wall I mentioned in my last comment. When everyone competing with each other (financially) sees a similar bump in income, nothing really changes. Redistribution is more helpful when targeting the wealth gap and not very useful when considering how wealthy the majority of people "feel".
That said, I 100% agree people shouldn't be working their entire life on a rich person's boat. That's a much bigger, and more fundamental, problem though. That gets to the core of a debt-based society and the need for self reliance. The most effective way to get out from under someone else's boot (financially) is to work towards a spot where you aren't dependent on them or the job's income.
If this is a bubble and it bursts in a few years, a lot of investors in specific companies, and in the market broadly, will lose a lot of money, but Sam Altman and Jensen Huang will remain very wealthy.
I'm a capitalist, and I think there are good reasons for wealth to accrue to those who take risks and drive technological progress. But it is also the case that they are incentivized to hype their companies, even if that risks getting out over their skis and leads to a bubble which eventually bursts. There are just lots of ways to extract wealth before a bubble bursts, so the downsides of unwarranted hype are not as acute as they might otherwise be.
I don't know the ins and outs of the UK education system, but I have to assume the facilities and employees are still paid for.
> Does food and water? Shelter?
If everyone had access to it for free? Absolutely! I wouldn't work as a farmer or build houses if no one had to pay for those products. Value, or price in this context, is only really feasible for scarce assets. If something is seemingly unlimited and freely available it will have no (financial) value.
100% this, but the entire system is set up to make sure it doesn't happen at scale. Even here on HN, if you post something along these lines in real terms you will get downvoted like crazy and get even crazier comments.
The system is set up to make sure there are workers, W-2 workers. This is why there are student loans, and this is why schools do not teach you to be an entrepreneur, to be a salesman, to hustle for yourself and not for someone else. I see so many people here talking about leetcode and FAANG, and I think to myself that is just modern-day slavery. If you are an LXXX at, say, Meta making, say, $750k/year, I think the same: you are a modern-day slave. If Meta is paying you $750k/year, that really means you are worth twice that, if not more. No company is going to pay you more than you are worth to them; they won't even break even with you, so to speak, so you can bank on this fact whoever you work for and whatever you bank. Though there is a big difference between working on someone's yacht and making $750k, the principles are the same, and the system is working hard, and succeeding, at making sure it stays as it is.
Problem is, the parent comment is right. Even if you think student loan mitigation has washy economics behind it, the outcome is predictable and even desirable if you're playing the long-game in politics. If not that, spend $500,000,000,000 towards onshoring Apple and Microsoft's manufacturing jobs. Spend it re-invigorating America's mothballed motor industry and let Elon spec it out like a kid in a candy shop. Relative to AI, even student loan forgiveness and dumping money into trades looks attractive (not that Trump would consider either).
Nobody on HN should be confused by this. We know Sam Altman is a scammer (Worldcoin, anyone?) and we know OpenAI is a terrible business. This money is being deliberately wasted to keep OpenAI's lights on and preserve the Weekend At Bernie's-esque corpse that is America's "lead" in software technology. That's it. It's blatantly simple.
The average person's utility from AI is marginal. But to a psychopath like Elon Musk who is interested in deceiving the internet about Twitter engagement or juicing his crypto scam, it's a necessary tool to create seas of fake personas.
Well, this one is really simple - nothing of note will happen in the US in the next four years without giving Trump credit for it, because if you don't he'll turn the full power of the state against you. And with no checks on his power, there's nothing to stop him. So yes, this has nothing to do with Trump, but if you don't want to get arrested and harassed, you better give him credit for it. Same playbook Elena Ceausescu used, except she did it just for scientific papers, Trump will do it for everything.
Please spend my tax dollars on curing disease, fixing homelessness, free addiction treatment, better mental health care, improving our justice system, or even cold fusion. All of these have better outcomes than does paying off student debt.
> These arguments are all the greedy option
You left out the best argument against: there are much better things to spend money on.
I could get behind fixing Bush's biggest mistake - his bankruptcy change that moved the pendulum to lifetime debt. I'd love to see people be able to discharge student loans that are impossible to pay off or where the debtor was put in debt by a fraudulent or failed education institution.
https://thedocs.worldbank.org/en/doc/d5f32ef28464d01f195827b...
Furthermore, they became #4 in GDP (PPP) last year and were reclassified as a high-income country.
https://www.intellinews.com/russia-s-economy-is-booming-3289...
The poorer regions are actually benefiting from high contract salaries. How sustainable that is, guess we'll see.
Cash transfers are seen as the "default" baseline: the bar for a charity is that it must be better than cash transfers. They do find some such charities that they claim are even better than cash transfers, but they are totally comfortable with giving people unconditional cash.
I joined in 2012, and been reading since 2010 or so. The community definitely has changed since then, but the way I look at it is that it actually became more reasoned as the wide-eyed and naive teenagers/twenty-somethings of that era gained experience in life and work, learned how the world actually works, and perhaps even got burned a few times. As a result, today they approach these types of news with far more skepticism than their younger selves would. You might argue that the pendulum has swung too far towards the cynical end of the spectrum, but I think that's subjective.
Well yes, I can speak to two different points when the context is different. A good conversation isn't just people shouting their personal opinions; it's people playing off of the discussion at hand and considering different angles.
> Here, you worry that making society more educated via university training might decrease the economic value of a degree.
That's actually not what I was saying; I may have phrased it poorly. I did not mean that I worry about anyone getting educated. I was simply trying to point out that a degree has much less value when anyone can get it, for example because it is free, as is the topic here.
In the other thread I wasn't actually concerned about inflation personally, only pointing out that inflation will go up if a large amount of student debt is made to just disappear. I was raising that as a prediction with high likelihood; personally I have opinions on the underlying approach, but I don't really have a dog in the fight either.
1) Build fully autonomous cars so there are zero deaths from car accidents. This is ~45K deaths/year (just US!) and millions of injuries. Annual economic cost of crashes is $340 billion. Worldwide the toll is 10 - 100x?
2) Put solar on top of all highways.
3) Give money to all farmers to put solar.
4) Build transmission.
And many more ...
The Manhattan Project employed nearly 130,000 people at its peak and cost nearly US$2 billion (equivalent to about $27 billion in 2023): https://en.wikipedia.org/wiki/Manhattan_Project
> But artificial scarcity seems like a bad way to go about it.
What artificial scarcity are you talking about here?
I'm not trying to say we need artificial scarcity, university should be a market like any other product or service.
Personally I tend to go even further away from most when it comes to scarcity in the job market too - I'd rather have open borders than immigration systems that limit how many people can come here and compete for jobs.
ok... ??? doesn't mean a thing, frankly.
2. What are they doing? AGI/ASI is a neat trick, but then what? I'm not asking because I don't think there is an answer; I'm asking because I want the REAL answer. Larry Ellison was talking about RNA cancer vaccines. Well, I was the one who made the neural network model for the company with the US patent on this technique, and that pitch makes little sense. As the problem is understood today, the computational problems are 99% solved with laptop-class hardware. There are some remaining problems that are not solved by neural networks but by molecular dynamics, which is done in FP64. Even if FP8 neural structure approximation speeds it up 100x, FP64 will be 99% of the computation. So what we today call "AI infrastructure" is not appropriate for the task they talk about.
What is it appropriate for? Well, I know that Sam is a bit uncreative, so I assume he's just going to keep following the "Her" timeline and make a massive playground for LLMs to talk to each other and leave humanity behind. I don't think that is necessarily unworthy of our Apollo-scale commitment, but there are serious questions about the honesty of the project, and about what we should demand for transparency.
We're obviously headed toward a symbiotic merger where LLMs and GenAI are completely in control of our understanding of the world. There is a difference between watching a high-production movie for two hours and then going back to reality, versus a never-ending stream of false sensory information engineered individually to specifically control your behavior. The only question is whether we will be able to see behind the curtain of the great Oz. That's what I mean by transparency: not financial or organizational, but actual code, data, model, and prompt transparency. Is this a fundamental right worth fighting for?
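A minimal Amdahl's-law sketch of the FP8-vs-FP64 point above. The 50/50 initial split between the NN part and the FP64 molecular-dynamics part is my assumption (the comment doesn't give one); the point is only that accelerating the NN part leaves the FP64 work dominating.

```python
# Amdahl's-law sketch: speed up only the NN fraction of a workload and
# see what share of the remaining runtime the FP64 MD part occupies.
# Assumption (not from the comment): the workload starts roughly 50/50.

def remaining_share(nn_fraction: float, nn_speedup: float) -> tuple[float, float]:
    """Return (overall speedup, FP64 share of the remaining runtime)."""
    new_total = (1 - nn_fraction) + nn_fraction / nn_speedup
    overall_speedup = 1 / new_total
    fp64_share = (1 - nn_fraction) / new_total
    return overall_speedup, fp64_share

speedup, fp64_share = remaining_share(nn_fraction=0.5, nn_speedup=100)
print(f"overall speedup: {speedup:.2f}x")           # ~1.98x
print(f"FP64 share of runtime: {fp64_share:.1%}")   # ~99.0%
```

Under that assumption, a 100x FP8 speedup on the NN half buys only about a 2x end-to-end gain, with the FP64 molecular dynamics then accounting for roughly 99% of the remaining compute.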
This is a very well-known fact. When I was in high school in the 2000s, it was a well-known joke that arts/English majors won't land you a job. And even if you had never heard that, the data on average salaries for a college's graduates, its dropout rates, and salaries by major are highly publicized. This isn't advanced research, and in the age of the internet it's something anyone considering college should be able to do. I think the problem is that no one believes they are the average case; everyone thinks they're the exception who'll make it work.
The relative complexity of projects only ever increases, because if they were simpler we would already have done them. The modern LHC is far more complicated than the Manhattan Project. So is ITER. Hell, the US military's logistics chain is more complicated than the Manhattan Project.
The fundamental attribution error here is going "look the power to destroy a city was so much cheaper!"
One, you’re not getting MGX and SoftBank to pay off student debt.
Two, if they do what they say they want to, they'll be building new power generation, transmission infrastructure and data centres. Even if AI is hype, that's far from useless capital.
> money is being deliberately wasted to keep OpenAI's lights on
OpenAI is spending their own money on this.
You think consumers wouldn't do that if they were able to do so? You call the facility and everyone says the price is "it depends". They decide what they are going to charge you after you have left. Is any other industry allowed to do that? Hire someone to paint your house and he comes up with the price after he is done?
> There is also a supply problem, where the state provides medical company monopolies through "certification of need"
I'm well aware of this. Isn't it interesting that the people who give some of the largest campaign contributions have these sorts of laws carved out for them? Charge whatever you want, decide the price in an opaque manner after the fact, competitors aren't allowed to establish themselves without their permission, importing drugs from other countries is forbidden. The list goes on and on.
Then you would think, if there is this much rampant and obvious corruption the fourth estate would step in right? Oh, they receive billions a year to advertise prescription drugs. Advertisement that can't be that effective, sometimes for pretty rare conditions, things your doctor should be made aware of but really odd to tell people about in a massive ad campaign.
The mainstream media and both parties are paid handsomely to allow this to continue. The problem isn't people are fat, or death panels or any of the distractions. The debate isn't about socialized medicine vs private. It's not about "keeping your doctor". There is just massive corruption to the tune of trillions of dollars in the past decade. There needs to be criminal investigations.
Does health insurance also lose its value when anyone can get it for free?
Whatever LeCun says (and even he has said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of other people are thinking), Meta has poured and is still pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to focus their money on tells you a whole lot.
This has been getting less and less true since the Industrial Revolution. We’re not quite at the point where we don’t need menial labour. But we can sure see the through line to it. The alternate future to the despairingly unemployed is every person being something of an owner.
> if Meta is paying you $750k/year that really means that you are worth twice that, if not more
The whole is greater than the sum of its parts. Also, if you're being paid $750k/year, you'd better be worth more than $1.5mm to your employer, because taxes and regulatory costs are typically estimated at around 100% of base up to the low millions.
When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught in the hype is just blatantly false.
No, it doesn't have to be literally everybody to make the point.
how so? what do you think is the breakdown between say working people in the USA (excluding gig-jobs cause you know…) who are W2 vs. 1099 and/or business owners? 99.78% to 0.22% roughly?
Look at who is president, or who is in charge of the biggest companies today. It is extremely clear that intelligence is not a part of the reason why they are there. And with all their power and money, these people have essentially zero concern for any of the topics you listed.
There is absolutely no reason to believe that if artificial superintelligence is ever created, all of a sudden the capitalist structure of society will get thrown away. The AIs will be put to work enriching the megalomaniacs, just like many of the most intelligent humans are.
Automation. Consider the number of jobs today that one person can do alone that didn't even exist back then.
> W2 vs. 1099 and/or business owners? 99.78% to 0.22% roughly?
There are about 165 million workers in the American labour force [1]. There are 33 million small businesses [2]. Given 14% have no employees [3], we have a lower bound of 5 million business owners in America, or 3% of the labour force.
Add to that America's 65 million freelancers and you have 2 out of 5 Americans not working for a boss; the arithmetic is sketched below the links. (Keep in mind, we're ignoring every builder, plumber or design shop that has even a single employee in these figures.)
[1] https://en.wikipedia.org/wiki/Labor_force_in_the_United_Stat...
[2] https://www.uschamber.com/small-business/state-of-small-busi...
[3] https://www.pewresearch.org/short-reads/2024/04/22/a-look-at...
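A quick sanity check of the arithmetic above, taking the cited figures at face value and ignoring any overlap between freelancers and owner-operators:

    # Figures as cited in the comment above (see links [1]-[3]).
    labor_force = 165e6        # US labour force
    small_businesses = 33e6    # small businesses
    no_employee_share = 0.14   # share taken as owner-operators, per the comment's reading
    freelancers = 65e6         # freelancers

    owners_lower_bound = small_businesses * no_employee_share          # ~4.6 million
    print(f"{owners_lower_bound / labor_force:.1%}")                   # ~2.8%, i.e. roughly 3%
    print(f"{(owners_lower_bound + freelancers) / labor_force:.1%}")   # ~42%, roughly 2 in 5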
Even now that we have fully capable conversational models, we don't really have any great immediate applications. Our efforts at making them "think" are yielding marginal returns.
People get free insurance but hospitals get a fixed amount of cash, allowing them to admit a fixed number of patients.
In this scenario the answer is yes, it loses some value. It's still a much better system than private care in the US.
Friend, neither of those is a body that can say the constitution in the US is null and void. Nor do they get to pick and choose which speech is kosher. It is not up to those orgs to decide.
<< They're accepting your definition of censorship to highlight how fucking stupid it is.
They are accepting it because there is no way not to accept it. Now... just because there is some cognitive dissonance over what should logically follow, that is a separate issue entirely.
Best I can do is spread some seeds..
18 year olds don't understand what a loan is? Zero accountability?
Don't throw more money at schools. They will happily take the money and jack up tuition even more. There is no reason why tuition is going up at the pace it does.
US also has forced labour, huge prison population, bombing civilians and journalists to oblivion, literally nuking other countries and religious fanatics — do I still think china would be less pleasant as our new overlord? Yes — Do I think the world is better off with US-American hegemony? I’m not so sure.
Maybe it's a net good for the world if no one power is dominating; maybe it's the start of a hellish WW3. I choose to believe the former.
edit: typos
A) This is a pledge by companies that may or may not even have the cash required to back it up. Certainly they aren't spending it all at once, but to be completely honest it's nothing more than a PR stunt right now, one that seems to be an exercise in courting favor.
B) that so called private capital is going to get incentives from subsidies, like tax breaks, grants etc. It’s inevitable if this proceeds to an actual investment stage. What’s that about it being pure private capital again?
C) Due to the aforementioned circumstances in A, it seems whatever government support systems are stood up to support this - and if this isn't ending in hot air, there will be some - still mean it's not pure private capital. Worse yet, they'll likely end up bilking taxpayers, and the initiative will fall apart with companies spending far less than the pledge but keeping all the upside.
I'll bet a year's salary it plays out like this.
If this ends up being 100% private capital with no government subsidies of any kind, I’ll be shocked and elated. Look at anything like this in the last 40 years and you’ll find scant few examples that actually hold up under scrutiny that they didn’t play out this way.
Which brings me to my second part. So we are going to - in some form - end up handing out subsidies to these companies, either at the local, state or federal level. But by the logic of not paying off student debt, why are we going to do this? It's only propping up an unhealthy economic policy, no?
Why is it so bad for us to cancel student debt but it’s fine to have the same cost equivalent as subsidies for businesses? Is it under the “creates jobs” smoke screen? Despite the fact the overwhelming majority of money made will not go to the workers but back to the wealthy and ultra wealthy.
There's no sense of equity here. If the government is truly, unequivocally hands off - no subsidies, no incentives, etc. - then fine, the profits go where they go, and that's the end of it.
However, it won't be, and that opens up a perfectly legitimate ask about how this money is going to get used and who it benefits
At that point, it's not technology, that's religion (or even bordering on cult-like thinking)
Of course if you're an ignorant right wing anti-intellectual climate change and evolution denying religious fanatic, the idea of everyone having a Harvard degree is existentially terrifying for other reasons than it losing a little bit of value.
There is and it's explained by Baumol's cost disease. Basically you can't sustain paying professors the same wage while productivity increases in other parts of the economy. Even if the actual labor of "professing" hasn't gotten more productive. You have to retain them by keeping up with the broader wage increases. And that cost increase gets passed down to students.
I use arch, btw.
I don't think that was the actual literal expectation, rather that the cost to the tax payer - and there will be a cost to the tax payer - should be best spent elsewhere.
>Two, if they do what they say they want to, they'll be building new power generation, transmission infrastructure and data centres. Even if AI is all hype, that's far from useless capital.
Nothing has proven this to be true yet
>OpenAI is spending their own money on this.
Not a single entity has spent any real money on this. So far, it's a PR stunt. The general lack of rollout corresponding with the announcement is telling. When real money is spent, then I'll believe they might go through with it all the way.
What's more likely to happen is that these companies will spend at most a token amount of money, then lobby Congress and the executive branch for subsidies in order to proceed more 'earnestly'. And since this is a pledge, there's nothing in writing that binds a contractual commitment of these funds and their purpose, so they could just as well pocket what they can to offset the costs, use what infrastructure gets built as a result, but shut down the initiative. Bad press won't matter, if it's reported on at all.
This has played out like this for decades. Big announcements, so-called private money commitments, then come the asks from the government to offset the costs they supposedly pledged to pay anyway, and eventually, if you're lucky, one widget gets built in some economic development area and the companies pocket what they can manage to bilk before it's all shut down.
We have the benefit of hindsight now and we understand that technological revolutions improve living standards for everyone and drag whole populations out of grinding labour and poverty.
And it would be foolish to allow China and Russia to out-invest the West in AI and make us mere clients (or worse, victims) of their superior technology.
Industrialists understand that the way to fix the world’s problems is to advance society, as opposed to resting on the laurels of past advancement, and dividing the diminishing spoils of those achievements.
But yeah, let's make sure we squeeze every drop out of those college students, they should have understood their loan terms.
[1] https://www.forbes.com/sites/nicholasreimann/2020/10/27/repo...
What's a truly competitive marketplace, where all competitors, broadly speaking, are playing on the same playing field and the best business wins?
There's been nothing but waves of consolidation across nearly all industries for the last 40 years. Competition is scarce, it seems.
I'd rather fix the law than try to decide who to hand out tax surpluses to.
One time I bought a can of what I clearly thought was human food. Turns out it was just well dressed cat food.
> to unlimited energy, curing disease, farming innovations to feed billions,
Aw, they missed their favorite hobby horse: "The children." Then again, you might have to ask why even bother educating children if there are going to be "superintelligent" computers.
Anyways... all this stuff will then be free, right? Is someone going to "own" the superintelligent computer? That's an interesting proposition that gets entirely left out of our futurism fantasy.
I agree that no matter if we go to a more private or socialized system, a whole system of broken regulation needs to be removed, and this will be the main point of resistance from those who benefit from the status quo.
OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint... it's not model capability in a vacuum).
Yann indeed does believe that AGI will arrive within a decade, but the important thing is that he is honest that this is an uncertain estimate based on extrapolation.
Better redistribution and Georgism/UBI-type measures, while keeping the need-based programs (Medicaid, Social Security disability, etc.), would I think be more fair and would not punish people who paid off their debt or worked a job during school. I'd also expand free public education to K-16 and maybe tax elite universities more heavily: they get most of their value from the prestige of their own high-ranking students, who then have to pay more for it. The same goes for prestigious journals, startup funds like YC, top law firms, etc., which work largely as prestige-money redirectors, where the value comes from those capturing the prestige but is redirected almost entirely to whoever kicked off the prestige flywheel early.
Why take that at face value? It's generally used for wage suppression[0][1] by big companies (not only in tech), and due to how it's structured, it creates an unhealthy power balance between employers and H-1B employees.
[0]: https://link.springer.com/article/10.1007/s10551-024-05823-8
[1]: https://www.paularnesen.com/blog/the-h-1b-visa-corporate-ame...
i don't agree that debt is the problem
Human-intensive things like medical care are characterized by diseconomies of scale when viewed from a whole-industry perspective, plus Baumol's cost disease. Overutilization makes the problem much worse.
Totally fair, by no means is that a settled issue. Debt is just my opinion of a likely root cause.
> there are lots of redistributions that are net beneficial even when you account for the incentive hit. marginal increases in the estate tax, for instance, almost certainly fall under this umbrella
That requires a lot more context to answer. The costs and benefits considered are important to lay out. Without that context I really can't say if it's a net benefit or not; I would assume that two average people would have different lists of factors they'd consider when saying whether it's a net benefit or not.
Personally I don't see estate taxes as net beneficial. I don't agree with the principle that death is a taxable event, and I don't want the government to have an incentive to see people die (i.e. when someone with an estate dies, the government makes money). Financially, to stick with just the numbers, I don't consider $66B in annual revenue worth the bureaucracy or legal complexity required to manage the estate tax program.
Also part of this is making education better bang for buck.
You can say who's gonna pay for it for everything. Defense and meddling in world affairs is a big cost too.
I'm not even saying that's a bad thing, if most people want it that way I don't see the problem. But it isn't free.
No.
I mean, I had some faith in these things 15 years ago, when I was young and naive, and my heroes were too. But I've seen nearly all those heroes turn to the dark side. There's only so much faith you can have.
Though yes, financially health insurance also has no monetary value when anyone can get it for free. You can't assign a price to it and anyone in the health insurance business is entirely at the whims of what the government is willing to pay them to provide a service deemed essential enough to subsidize the entire cost of the product.
Is it your opinion that Harvard could provide the same quality of education to an unlimited number of students?
This isn't a right/left scenario, it's logistics and market dynamics. Expanding access to a scarce resource means the value of that resource goes down. A supply glut doesn't mean the product is any less useful, just that there's more of it, so people will pay less for it.
I was a software consultant for many years. I'd put that on the list of truly competitive marketplaces. People were either willing to pay me to do a job or they weren't, and I would have to adjust my prices and terms to try to increase or decrease my workload.
And the disincentive effects are much smaller than taxing the equivalent in directly earned income.
Everyone understands that public services are free to use because they are funded by taxes. It's not the gotcha you think it is. People say that roads, K-12 education, etc are "free" when they mean there is not a direct fee to use them because they are paid for by the government using tax dollars. You don't have to pretend to not understand this
Given that all of the capital and implementation is private anyways, I am not even sure why this was announced with Trump on stage. To me it seemed like a spectacle to help Trump in return for maybe favorable regulation on things like antitrust or copyright or AI regulation or whatever.
I'm not pretending to not understand here. Someone said it would be free and I'm asking how. The fact that "free" doesn't mean free is the problem, not an issue of me misunderstanding.
> You can say who's gonna pay for it for everything. Defense and meddling in world affairs is a big cost too.
For sure, no disagreement here. My personal opinion is that defense is only necessary in times of war and meddling in world affairs is never necessary.
What is "fair" requires context. I could argue that nonintervention is fair or that a top-down, Marxist approach is fair depending on how "success" is defined.
When my parents die, assuming they go before me, I don't see why the government should be involved. To be clear, my parents are well below estate tax thresholds, but the underlying premise is the same. Someone's relative dying and leaving them an estate shouldn't be a taxable event as far as I'm concerned.
$66B should be a lot of money, but our federal government doesn't know what it means to balance a budget. We could easily cut $66B in current spending if we cared.
Extropic update on building the ultimate substrate for generative AI https://twitter.com/Extropic_AI/status/1820577538529525977
However, the US system seems to create a lot of inefficiency. There is no free lunch. But a lunch where you don't throw out as much bread as you eat is more efficient.
If everyone has sanitised water it loses value.
Value is an overloaded word. We don't need to make things scarce just so a dollar number goes up for some elite group.
A good test is to forget money and think of human collaboration. People doing things. Does it make sense from that perspective?
The best way to scale Harvard is easy: make all the other places better (or, if they already are, make people realise that).
Are you trying to estimate only those without employees?
> it would embolden universities to ask even higher tuition.
Then cap the amount of loans you give out. Many of them are backed by one level of government or another.
> A second problem is that not all students get the benefit, some already paid off their debts or a large part of it. It would be unfair to them.
This is a very flimsy argument. Shall we get rid of the polio vaccine since it's unfair to those who already contracted it that our efforts with the vaccine don't benefit them?
The microprocessors concerned are very high value goods, manufacturing and R&D for them can't be easily and quickly spun up on a whim. The country and companies first to start them up and win will secure the supply chains, and once secured it will take monumental money and effort to reconfigure them. A lot of money is at stake, in other words.
Geopolitically, it also means that the country who secures the supply chain also gets to quite literally write the rules regarding who and where the microprocessors can be sold to and exported. Either the US or China gets to decide who can buy the microprocessors depending on who wins the supply chain.
Just like Nvidia was the first past the post and now enjoys absolute incumbency advantage, whichever country (namely US or China) is first past the post in the "AI" industry will enjoy absolute incumbency advantage.
However, I think it is worth clarifying your following point.
>if Meta is paying you $750k/year that really means that you are worth twice that, if not more.
This is far from slavery. You are worth that to Meta. You might be worth significantly less without Meta. If you can make $1.5 mil/year alone and quit, Meta won't send the slave patrol to bring you back in shackles. Instead, it is the golden shackles of greed that keep people making $750,000 instead of opting out.
But these are small niches that don't make up a whole sector, and arguably they're on the fringes compared to everything else.
Broadly speaking, the so-called free market is free in name only.
I have to say the ASCII, feet, and knots examples were a bit confusing. These don't seem to be the same kinds of "wins" as what we're expecting to see with this race: UTF-8 is mostly the default around the world, and Airbus is a serious competitor in international markets.
However: 1. That has no bearing on how much they actually spend, which is what was being discussed and 2. Neom is much more than just The Line. As you can see from the YouTube link I posted, Sindalah seems to be on track to open this year, which is part of Neom.
So while Neom overall might be behind schedule (and The Line or other components may never open), it is clearly not an "imaginary" project given that parts of it will open soon.
Was using this as a proxy for business owners who probably don’t have a filing cabinet of SBA and Census small businesses.
America to this very day gets to dictate how computing and aviation work. Knots, feet, ASCII and so on are just the obvious signs of that.
>utf8
Case in point: UTF-8, the dominant encoding of Unicode, has ASCII as its starting point. ASCII can be converted to UTF-8 easily, perfectly, and without data loss, because the first 128 code points of Unicode are literally the ASCII mappings. This is the virtue of winning first and getting to write the rules.
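A tiny illustration of that backward compatibility; nothing here is specific to any library, it's just how the encoding was designed:

    # Any pure-ASCII byte string is already valid UTF-8, byte for byte,
    # because Unicode's first 128 code points are the ASCII characters.
    ascii_bytes = "Stargate".encode("ascii")
    utf8_bytes = "Stargate".encode("utf-8")

    assert ascii_bytes == utf8_bytes      # identical bytes
    print(ascii_bytes.decode("utf-8"))    # round-trips losslessly: "Stargate"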
>airbus is a serious competitor in international markets.
And yet everyone outside of China and Russia still fly using knots and feet, that includes Airbus.
We’re well into the pendulum swinging back. The vanguard were the dropout wunderkids. Now it’s salt-of-the-earth tradesmen and the like.
OpenAI is the Jeb Bush of D.C. They're spending a lot of money, but it ain't going far. Last time Altman asked for regulation he had to walk it back with a "jk" when Europe and California proposed actual rules.
I want to interact with real people, not bots, I'm already spending most of my time wasting my life in front of a fucking screen for work
This WAS a thing without the quotas, though.
I mean, AI the tech can be spectacular and the hype can be overblown, right? I'm not even sure that the hype is overblown, but it sure feels like the kind of hype that we'll say, a few years from now, was overblown.
The lesson of everything that has happened in tech over the past 20 years is that what tech can do and what tech will do are miles apart. Yes, AGI could give everyone a free therapist to maximize their human well-being and guide us to the stars. Just like social media could have brought humanity closer together and been an unprecedented tool for communication, understanding, and democracy. How'd that work out?
At some point, optimism becomes willfully blinding yourself to the terrible danger humanity is in right now. Of course founders paint the rosy version of their product's future. That's how PR works. They're lying - maybe to themselves, and definitely to you.
I just want an objective opinion from someone who has a deep understanding of the cutting edge.
It’s maddening to try to plan for a future which everyone is incentivised and permitted to fabricate.
[1] https://www.lazard.com/media/gjyffoqd/lazards-lcoeplus-june-...
If you're talking about all the different ways you could spend that money, then you're saying non-intervention for the wealthier people who made a bad financial choice, and yes-intervention for the other ways in which the money could be spent, which again is a decision on where to give limited resources.
I'm not saying I agree or don't agree with whether it would be more helpful to give it to those who have college debt or those in the US who are without a home or frankly those here in Kenya (where I am now) who if don't have money, might starve to death.
Moreover, each decision can be judged.
> Now its a totally different question whether its fair that some people are in this position today. The answer is almost certainly no, but that doesn't have a direct impact on whether an intervention today is fair or not.
If we approach it from this side, I agree. Non-intervention, or not giving any limited resources to anyone, is the most fair approach and then we can evaluate whether it's fair the position in which those people are. Yet I don't know how realistic this is, to withhold all resources from everyone.
> Insane thing: we are currently losing money on OpenAI pro subscriptions! people use it much more than we expected.
Ref: https://techstartups.com/2025/01/06/openai-is-losing-money-o...
Your family doesn't have money? No food. No service at the emergency room. Heck, even no water.
I think there's a balance and that people who want more apathy and inaction may not realize what it's like when that's actually the case.
Do you act or do you not act?
Both have unintended, often unpredictable consequences.
But as I said, I'm glad to hear that unconditional cash is gaining traction with those folks, as I think it not only gives someone financial resources but also trust.
> And this trust — another resource it’s difficult to measure — is the aspect of gifts that many have said they value most.
The above is an excerpt from MacKenzie Scott's essay, "No Dollar Signs This Time." [0] I really appreciate the approach she is taking, which seems to be especially embracing the uncertainty of it all and trusting people to do what they believe is best.
[0]: https://yieldgiving.com/essays/no-dollar-signs-this-time
It actually reminds me of an essay I wrote years ago called "The Subjective Adjective" [0] (wow, I wrote it 10 years ago!) The premise is that we take how we subjectively feel and then transform it into an objective statement on reality, overlooking how subjective it really is.
Anyways, I agree some of these conversations seem to devolve into definitional debates that may not get at the real point.
I think I also replied to a different comment thinking it was you—identity and conversational continuation, an aspect of context so often hidden/lacking on HN.
In general, I agree with you that a policy could be equal/fair as in giving everyone an equal amount of X, and that the unfair part is where people are in life. I actually liked the idea of charging a flat tax across the US and then having people voluntarily pay the tax for those who couldn't pay it, because I agree, I would see the tax as fair but the wealth inequality as unfair and one way to rectify that is for people to voluntarily rebalance the wealth. But yeah, I'm sure tons of people would see that as unfair.
I really don't know lol.
Putin growing concerned by Russia’s economy, as Trump pushes for Ukraine deal https://www.reuters.com/world/europe/putin-growing-concerned...
Also basically the whole western world are progressively sanctioning them eg. https://www.gov.uk/government/news/uk-imposes-new-wave-of-sa... and https://kyivindependent.com/us-likely-to-sanction-russia-if-...
Plus the war is expensive. Plus Ukraine's main strategy at the moment seems to be to take out their oil and related industries using drones https://www.newsweek.com/russia-map-shows-critical-infrastru...
I'm not sure it's going to change unless there is some sort of deal or Putin goes.
This is because it's also a dystopia in disguise. It's a social criticism and a cautionary tale about the way fetishizing technology is emotionally crippling us as individuals in a society. It kind of amazes me that this aspect seems to go over some people's heads.
It's obviously true what Booker said: What one person considers an ideal dream might to another person seem a nightmare.
Hype is extremely normal. Everyone with a business gets the chance to hype for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour billions.
Satya just said he has his $80 billion ready. Is Microsoft an "AI foundation company"? Is Google? Is Meta?
The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.
At any rate, I'm not saying this means that all this investment is guaranteed to pay off.
[0] With 300 million weekly active users/1 billion messages per day and #8 in visits worldwide the last few months just 2 years after release, ChatGPT is the software product with the fastest adoption ever.
For taxes, the government provides estimates or recommendations on what a household would owe, but it's voluntary. You throw your money into the programs that you want to see funded.
It could go horribly wrong, but so can centralized planning. At least this way the people are responsible for it either way.
I don't think that's an accurate comparison. None of the politicians in Washington are my neighbors; the closest one lives about 300 miles away, and he is never actually home. Those politicians have an extreme amount of control over my life, well beyond what seems reasonable given how disconnected we are.
> Do you act or do you not act?
That varies a lot, but context is everything. If I see someone bleeding out, yes I would help. I generally have basic first aid on me including a tourniquet and chest seals. If I have open cuts on my hands and no gloves I'd have to consider the risk of infection, but if someone is likely to die I think I would take the risk (you never know until you're in the situation though).
If someone is attacked on the street, again, yes, I'd likely act. Context still matters: if I'm 30 feet away and the attacker has a gun, I'd be of no use unless I'm also armed, and even then I'd have to draw before they saw me. If someone is getting beat up, mugged, even stabbed, sure, I'd jump in. I think I'd have a really hard time living with the knowledge that I watched someone get attacked or murdered and did nothing.
Not to sound snarky, but seldom do I read something more wrong here... If they did understand, they would NEVER take on the kind of debt they are taking on in droves to get that paper. "Debt" is one thing; you probably understand "debt" as a kid. Understanding loans, however, is an entirely different thing from the general concept of "debt".
Also, who is the person that "screwed up big"? I'm guessing you mean SBF, but my view is that MacAskill is an outright huckster.
Those who are struggling to afford it are the ones who will appreciate the help
[1] https://www.stern.nyu.edu/sites/default/files/assets/documen...
>Certainly there has been no shortage of gay people among his top-level appointees in either his first or second administrations.
The original quote I was referring to:
Fascism should more appropriately be called Corporatism because it is a merger of state and corporate power. - Benito Mussolini
If that were the case then Stargate is already a thing because OpenAI must have a data center somewhere already.
And yes, the plan was for it to be 170km, since it was announced.
That doesn't mean much on its own. Their per capita GDP is still low.
Also, arguably their GDP figures are worth even less than Ireland's. A huge proportion of Russia's economy is tied up in military production (and a huge proportion of that is funded through debt).
If you make a rocket worth $1 million and then blow it up the next month that cost is obviously included in GDP but it's literally the equivalent of burning money/productivity.
Of course the market being extremely concentrated and effectively an oligopoly even in the best case does shine a somewhat different light on it. Until/unless open models catch up both quality and accessibility wise.
i.e. denying someone who is running an online platform/community, or training an LLM, or whatever, the right to remove or not provide specific content is clearly limiting their right to freedom of expression.
What I describe is much like the movie "Her". Sam Altman, OpenAI's CEO, asked Scarlett Johansson (the voice of the AI in that movie) to be the voice of GPT. GPT is now a little like "Her": you can have a full conversation with it, unlike Siri. At the moment you just don't see how GPT looks; it doesn't look like a FaceTime call with a human AI friend/assistant (your AI assistant/friend could look and sound like a deceased loved one; that's my own crazy idea, not from the movie). Yet maybe in the future it will. I'm betting it will, but it's only a guess and time will tell.
I'm awaiting your downvote :) but will revisit this thread in a few years or more. Well, if I'm right ;)
...on their roofs? Over all their crops? What's the play here?
- They are run by the mob
- They export a lot of natural resources but don’t have a strong manufacturing base. They don’t have high tech, they don’t export many manufactured items to developed countries. It’s a mob run country, resources are easy to extract.
A rat done bit my sister Nell, with whitey on the moon.
Her face and arms began to swell, and whitey's on the moon.
I can't pay no doctor bills, but whitey's on the moon.
Ten years from now I'll be payin' still, while whitey's on the moon.
The man just upped my rent last night, cause whitey's on the moon.
No hot water, no toilets, no lights, but whitey's on the moon.
I wonder why he's upping me? Cause whitey's on the moon?
Well I was already giving him fifty a week with whitey on the moon.
Taxes taking my whole damn check, junkies making me a nervous wreck,
the price of food is going up, and as if all that shit wasn't enough:
A rat done bit my sister Nell, with whitey on the moon.
Her face and arm began to swell, and whitey's on the moon.
Was all that money I made last year for whitey on the moon?
How come I ain't got no money here? Hmm! Whitey's on the moon.
Y'know I just 'bout had my fill of whitey on the moon.
I think I'll send these doctor bills
airmail special
to whitey on the moon.
—Gil Scott-Heron
I'm just jumping ahead, using what was seen in "Her" to envision where we are (possibly) headed, as well as adding my own crazy idea ... your AI assistant/friend, seen on your lock screen via a FaceTime-style UI/UX call, looks and sounds like a deceased loved one. Mom still guiding you through life.
And they are out-manufacturing the combined West on pretty complex stuff like missiles, air defense systems, drones, artillery. Plus, due to sanctions, their civilian industrial sector has grown so much that there's a shortage of facilities (not to mention labor).
This shows the average total owed by graduate students is much higher than by undergraduates, about 3x. https://www.usatoday.com/money/blueprint/student-loans/avera... So just spitballing here: if there are more than 3x as many undergraduates as graduate students, and the same share have loans, the total undergraduate debt is higher overall.
But then there's this showing the median being closer to only 2x different https://www.pewresearch.org/short-reads/2024/09/18/facts-abo...
The long rise of for-profit undergraduate institutions until quite recently says it was extremely profitable to get students into debt for questionable educational value. It's almost like payday loan shops, just preying on a different segment of the population.
https://www.highereducationinquirer.org/2022/01/how-universi...
I don't think traditional public or private four-year universities are blameless, either, raising tuition to match these endlessly rising loans guaranteed by the federal government, with spiraling administrative costs.
Even though the cost is high I thought in the USA the number of med school students is restricted to a very small number.
“The worst they’re going to be” line is a bit odd. I hear it a lot, but surely it’s true of all tech? So why are we hearing it more now? Perhaps that is a sign of hype?
Edit: aaaand right after posting I stumble across a documentary running on TV in this very moment, in which a dying guy trained an AI on himself to accompany his widow after his death. Seems you're not the only one to find that desirable...
I continue to be amazed at how desperate some of us are to live in Disney's Tomorrowland that we worship non-technical guys with lots of money who simply tell us that's what they're building, despite all actions to the contrary, sometimes baldfaced statements to the contrary (although always dressed up with faux-optimistic tones), and the negative anecdotes of pretty much anyone who gets close to them.
A lot of us became engineers because we were inspired by media, NASA, and the pretty pictures in Popular Science. And it sucks to realize that most if not all of that stuff isn't going to happen in our lifetimes, if at all. But you know what guarantees it won't happen? Guys like Sam Altman and Larry Ellison at the helm, and blind faith that just because they have money and speak passionately, they somehow share your interests.
Or are you that guy who asks the car salesman for advice on which car he should buy? I could forgive that a little more, because the car salesman hasn't personally gone on the record about how he plans to use his business to fuck you.
From there, The Line =/= Neom. That said, 170km is still the plan for The Line – the first segment was supposed to be 5km by 2030, now it is 2.5km by 2030. And again, them not building as much as they said does not mean it is cheaper (cost is what we're discussing, after all). If anything it means they are already over budget.
Stargate is a new company, so OpenAI (a different company) having data centers does not seem relevant. Also, I do not think OpenAI actually does own any data centers – for the most part they've been using Azure infra AFAIK. But I did not claim that Stargate was imaginary, so I am unsure how this is a relevant point for you to make.
I was explaining why it is more harmful and thought you were arguing it is not harmful?
If these guys really have $500 billion, they're going to find a way to get electricity.
Compared with who exactly? Certainly not EMEA. Korea is mainly hardware and China has run out of IP to copy.
'Well known' password notwithstanding, let's use the following as a password:
correct-horse-battery-staple
This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password was protected by SHA2-256 (assuming no cryptographic weaknesses exist; I haven't checked, purely for academic purposes), and that at least 50% of hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).
I looked up a random benchmark for hashcat, and found an average of 20 gigahashes per second (GH/s) for a single RTX 4090.
If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s (combined firepower of 2,000 GH/s, or 2 × 10^12 hashes per second) and absolutely perfect running conditions, it would take around eleven sextillion (1.1 × 10^22) years to crack. Earth will be long gone by the time that rolls around.
Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): around one quintillion (1.1 × 10^18) years.
Using some recommended password specifications from NIST - 15 characters comprised of upper and lower-case letters, numbers, and special characters - let's try:
dXIl5p*Vn6Gt#BH
Despite the higher complexity, this password only just ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around three hundred and twenty-six (326) years to have a 50% chance of success. My calculator didn't even turn my answer into a scientific number!
More alarming still is when 1,000,000 RTX 4090s get sicced on the shorter hashed password: around twelve days to reach that same 50% chance of success.
I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.
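For anyone who wants to poke at the arithmetic, here is a rough sketch under the same assumptions as above (20 GH/s per RTX 4090, 50% of the keyspace searched, a 32-symbol alphabet behind the ~1.39 × 10^42 figure, and the ~4.11 × 10^22 keyspace quoted for the 15-character password):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7

    def crack_years(keyspace, gpus, ghs_per_gpu=20, fraction=0.5):
        """Years to test `fraction` of `keyspace` at `gpus` x `ghs_per_gpu` GH/s."""
        hashes_per_second = gpus * ghs_per_gpu * 1e9
        return keyspace * fraction / hashes_per_second / SECONDS_PER_YEAR

    # 28-character passphrase, ~1.39e42 combinations (32-symbol alphabet: 32**28)
    print(f"{crack_years(32**28, 100):.1e} years")          # ~1.1e22 years
    print(f"{crack_years(32**28, 1_000_000):.1e} years")    # ~1.1e18 years

    # 15-character mixed password, ~4.11e22 combinations as quoted
    print(f"{crack_years(4.11e22, 100):.0f} years")                  # ~326 years
    print(f"{crack_years(4.11e22, 1_000_000) * 365.25:.0f} days")    # ~12 days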
All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.
A trillion hyperintelligent demons might be cogitating right now on the head of a pin. You can't prove they aren't thinking up all sorts of genius evil schemes. My point is that "intelligence" has never been a sufficient - or even necessary - component of imposing ones will on humans.
I feel like HN/EA/"Grey Tribe" people fail to see this because they so worship intellect. I'm much more likely to fall victim to a big dumb man than smart computers.
The Manhattan Project would be a cute example if the Los Alamos scientists had gone rogue and declared themselves emperors of mankind, but no, in fact the people in charge remained the people in charge - mostly not supergeniuses.
But again, I don't see them out-manufacturing the West on military equipment when some of that equipment is getting overrun easily by 40-year-old Western equipment. It's still a poor nation run by the mob, with some shining spots.
Censorship in the West and in China is, in both cases, done by unelected people.
How so ?
You don't! If the government is backing the loan, as is true in almost all cases, you eagerly take the write-off on the public purse in the knowledge that you've just gained an enthusiastic taxpayer who can open a new business or take a flyer on a new career, instead of a debt slave that is terrified to do anything but brownnose their way up the ladder at their dead-end callcenter job that will disappear the minute someone figures out it can be done in Bangladesh or by AI for cents on the dollar.
Sure, it would be better and more efficient to do this directly by nationalizing the state schools and offering free tuition or something, but we have to work with what we have.
Secondly, I think there's a tendency in AI for some people to look at failures of models and attribute them to some fundamental limitation of the approach, rather than something that future models will solve. So I think the line also gets used as shorthand for "Don't assume this limitation is inherent to the approach". I think in other areas of tech there's less of a tendency to write off entire areas because of present-day limitations, hence the line coming up more often.
So you're right that the line is kind of universally applicable in tech, I guess I just think the kinds of bad arguments that warrant it as a rejoinder are more common around AI?
My speculation is based on not seeing any constraints that will block progress of machine intelligence from reaching those capabilities within 10 years.
Also, Kurzweil's predictions from the early 2000s have been eerily prescient, and this is the time frame he predicted for the Singularity.
By "solve problems" you mean "temporarily mitigate problems by throwing money at them", right? Or do you actually have specific examples of problems that can be permanently solved and aren't already being tackled?
Not to mention efforts for mitigating climate change and ecological collapse. But of course those issues reside outside our economic system, so why would anyone invest in them.
It sounds counterintuitive, but more taxes is more fair and better as a whole. To see this, it takes no more than looking up the correlation between tax levels and rates of homelessness (and other such indicators) across Western countries.
This reminded me of https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
If anyone has experience on getting this right, I would like to know how you do it.
This sounds ridiculous to anybody who pays attention to what Trump/MAGA and the modern Republican party say.
AGI can go wrong in innumerable ways, most of which we cannot even imagine now, because we are limited by our 1 times human intelligence.
The liftoff conditions literally have to be near perfect.
So the question is, can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety? Looking at how it is going so far, I would say absolutely not.
If you own a home you can exclude it from the bankruptcy. You can also include it and continue to pay the payments and the contract must be honored!
I don't consider models suddenly lifting off and acquiring 1000 times human intelligence to be a realistic outcome. To my understanding, that belief is usually based around the idea that if you have a model that can refine its own architecture, say by 20%, then the next iteration can use that increased capacity to refine even further, say an additional 20%, leading to exponential growth. But that ignores diminishing returns; after obvious inefficiencies and low-hanging fruit are taken care of, squeezing out even an extra 10% is likely beyond what the slightly-better model is capable of.
I do think it's possible to fight against diminishing returns and chip away towards/past human-level intelligence, but it'll be through concerted effort (longer training runs of improved architectures with more data on larger clusters of better GPUs) and not an overnight explosion just from one researcher somewhere letting an LLM modify its own code.
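To make the diminishing-returns point concrete, here is a toy comparison with entirely made-up numbers: naive compounding, where every generation squeezes out the same 20% improvement, versus a version where the achievable gain halves each round as the low-hanging fruit runs out.

    # Toy model with made-up numbers: capability after N rounds of self-improvement.
    def naive(rounds, gain=0.20):
        capability = 1.0
        for _ in range(rounds):
            capability *= 1 + gain        # every round yields the full 20%
        return capability

    def diminishing(rounds, gain=0.20, decay=0.5):
        capability = 1.0
        for _ in range(rounds):
            capability *= 1 + gain
            gain *= decay                 # each round there is less low-hanging fruit
        return capability

    print(f"{naive(20):.1f}x")        # ~38.3x: the "liftoff" intuition
    print(f"{diminishing(20):.2f}x")  # ~1.46x: levels off instead of exploding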
> can humanity trust the power hungry billionaire CEOs to understand the danger and choose a path for maximum safety
Those power-hungry billionaire CEOs who shall remain nameless, such as Altman and Musk, are fear-mongering about such a doomsday. The goal seems to be regulatory capture and diverting attention away from more realistic issues like use for employee surveillance[0].
It’s really only a problem if you (1) choose a private college and don’t stay in-state, (2) get a degree which doesn’t have a lot of practical value, and (3) then want to pursue a low-paying field or get a not-useful graduate degree. For example, a friend of mine did her undergrad in art history, master’s in museum studies, and works for a non-profit. She’s not rich but she’s able to survive reasonably comfortably. She’s not dumb or financially illiterate, and she knew what she was getting in for.
Allegedly almost half of their defense budget since the war began was funded through forced private loans issued by banks directly to (effectively state-owned?) military contractors. So that's not reflected in their military budget.
Also I doubt Russia could borrow a lot on the international markets even if they wanted to. Certainly not cheaply (like the US or especially Eurozone countries)
> And PPP is the number that matters - its one of the big factors which governs quality of life
Again... Russia's GDP per capita is still quite low (even if significantly higher than nominal).
Also if your nominal GDP is inflated by defence spending and energy exports and you multiply it with PPP (i.e. consumer price index) what exactly do you get?
> PPP is the number
Metrics adjusted by PPP might. What do you mean by PPP as such? The multiplier itself? https://data.worldbank.org/indicator/PA.NUS.PPPC.RF?most_rec...
Generally low prices indicate that the country is poor overall.
> its one of the big factors which governs quality of life,
PPP adjusted GDP per capita? Really? That's certainly not the best indicator of those things (even if there is strong correlation overall).
Do you think a median person in Ireland is 30% better off than the average Swiss or 2x better off than the average German?
Anyway, going back to Russia. If e.g. $100 comes in into the country through energy exports and is spent making bombs and other equipment what fraction do you think trickles down to the local economy?
The genie is out of the bottle and America must keep its momentum in AI up, ahead of all other countries, for its continued prosperity and security!
If the expensive schooling works, then the person should be able to get a nice, well-paying job and pay off the loan over the next 20 years. Many of the deferral programs and policies (i.e. "putting the loan on hold") would prevent this. It's when the clock runs out and the student can't get the high-paying job that we have issues...
Sources: https://en.wikipedia.org/wiki/2008_Uyghur_unrest
https://en.wikipedia.org/wiki/April_2013_Bachu_unrest
https://en.wikipedia.org/wiki/Xinjiang_conflict#1990s_to_200...