zlacker

[return to "Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers"]
1. wujerr+DI[view] [source] 2025-01-22 03:35:08
>>tedsan+(OP)
For fun, I calculated how this stacks up against other humanity-scale mega projects.

Mega Project Rankings (USD Inflation Adjusted)

The New Deal: $1T,

Interstate Highway System: $618B,

OpenAI Stargate: $500B,

The Apollo Project: $278B,

International Space Station: $180B,

South-North Water Transfer: $106B,

The Channel Tunnel: $31B,

Manhattan Project: $30B

Insane Stuff.
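For what it's worth, the ranking above is easy to sanity-check in a few lines. A minimal sketch (my own, not from the thread) using the inflation-adjusted figures quoted in this comment, in billions of USD, treated as ballpark numbers:

```python
# Inflation-adjusted mega-project costs quoted above, in billions of USD.
costs_billion_usd = {
    "The New Deal": 1000,
    "Interstate Highway System": 618,
    "OpenAI Stargate": 500,
    "The Apollo Project": 278,
    "International Space Station": 180,
    "South-North Water Transfer": 106,
    "The Channel Tunnel": 31,
    "Manhattan Project": 30,
}

stargate = costs_billion_usd["OpenAI Stargate"]

# Rank from most to least expensive and show how many of each project
# the Stargate budget would notionally fund.
for name, cost in sorted(costs_billion_usd.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} ${cost:>5}B  ({stargate / cost:5.1f}x fundable by Stargate)")
```

By these figures, Stargate's $500B would cover roughly 16.7 Manhattan Projects or 1.8 Apollo programs, which is the comparison the ranking is gesturing at.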

◧◩
2. krick+tY[view] [source] 2025-01-22 06:17:23
>>wujerr+DI
It's unfair, because we're talking with hindsight about everything except Project Stargate, and it's also just your list (and I don't know what others could add to it), but it got me thinking. The Manhattan Project's goal was to make a powerful bomb. Apollo's was to get to the Moon before the Soviets did (so, out of hubris, but still a concrete goal). The South-North Water Transfer is pretty much terraforming, and the others are mostly roads. I mean, it's all kind of understandable.

And Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more or less concrete goal to achieve?

Also, a few items for comparison that I googled while thinking about it:

- Yucca Mountain Nuclear Waste Repository: $96B

- ITER: $65B

- Hubble Space Telescope: $16B

- JWST: $11B

- LHC: $10B

Sources:

https://jameswebbtracker.com/jwst/budget

https://blogfusion.tech/worlds-most-expensive-experiments/

https://science.nasa.gov/mission/hubble/overview/faqs/

◧◩◪
3. nopins+861[view] [source] 2025-01-22 07:34:14
>>krick+tY
The goal is Artificial Superintelligence (ASI), based on short clips of the press conference.

It has been quite clear for a while we'll shoot past human-level intelligence since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).

◧◩◪◨
4. krick+N91[view] [source] 2025-01-22 08:14:05
>>nopins+861
Here we go again... Ok, I'll bite. One last time.

Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in the sense that it exists, but real in the sense that it is meaningful. And assigning that acronym to a project doesn't make it a goal.

I'm not claiming that AGI, ASI, AXY or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is a surface flat enough to drive on. But "superintelligence"? There's no good enough definition of "intelligence", let alone "artificial superintelligence". I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my head. Well, it wasn't exactly "general", but neither are humans, and it's an outdated acronym anyway.

So it's fun and all when people are "just talking", because making up bullshit is a natural human activity and somebody's profession. But when we're talking about the goal of a project, that implies something specific, measurable… you know, that SMART acronym (since everybody loves acronyms so much).

◧◩◪◨⬒
5. nopins+6k1[view] [source] 2025-01-22 09:44:34
>>krick+N91
Superintelligence (along with some definitions): https://en.wikipedia.org/wiki/Superintelligence

Also, "Dario Amodei says what he has seen inside Anthropic in the past few months leads him to believe that in the next 2 or 3 years we will see AI systems that are better than almost all humans at almost all tasks"

https://x.com/tsarnick/status/1881794265648615886

◧◩◪◨⬒⬓
6. hatefu+Sm1[view] [source] 2025-01-22 10:11:39
>>nopins+6k1
Not saying you're necessarily wrong, but "Anthropic CEO says that the work going on in Anthropic is super good and will produce fantastic results in 2 or 3 years" is not necessarily telling of anything.
◧◩◪◨⬒⬓⬔
7. nopins+Gn1[view] [source] 2025-01-22 10:20:58
>>hatefu+Sm1
Dario said in mid-2023 that his timeline for achieving "generally well-educated humans" was 2-3 years. o1 and Sonnet 3.5 (new) have already fulfilled that requirement in terms of Q&A, ahead of his earlier timeline.
◧◩◪◨⬒⬓⬔⧯
8. hatefu+Yr1[view] [source] 2025-01-22 11:08:17
>>nopins+Gn1
I'm curious about that. Those models are definitely more knowledgeable than a well educated human, but so is Google search, and has been for a long time. But are they as intelligent as a well educated human? I feel like there's a huge qualitative difference. I trust the intelligence of those models much less than an educated human.
◧◩◪◨⬒⬓⬔⧯▣
9. nopins+Ys1[view] [source] 2025-01-22 11:18:43
>>hatefu+Yr1
If we talk about a median well-educated human, o1 likely passes the bar. Quite a few tests of reasoning suggest that's the case. An example:

“Preprint out today that tests o1-preview's medical reasoning experiments against a baseline of 100s of clinicians.

In this case the title says it all:

Superhuman performance of a large language model on the reasoning tasks of a physician

Link: https://arxiv.org/abs/2412.10849”. — Adam Rodman, a co-author of the paper https://x.com/AdamRodmanMD/status/186902305691786464

---

Have you tried using o1 with a variety of problems?

[go to top]