zlacker

[parent] [thread] 108 comments
1. TheAce+(OP)[view] [source] 2025-01-22 00:03:02
I'm confused and a bit disturbed; honestly having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.

replies(16): >>catman+i >>layer8+21 >>ilaksh+72 >>jazzyj+j2 >>HarHar+05 >>insane+d5 >>Davidz+Y5 >>dauhak+Lb >>famous+Hf >>petese+Ri >>lmm+Ls >>tim333+XW >>smartm+nZ >>AdamN+3p1 >>sesm+YG1 >>MetaWh+IZ1
2. catman+i[view] [source] 2025-01-22 00:04:49
>>TheAce+(OP)
This has nothing to do with technology; it is a purely financial and political exercise...
replies(1): >>philom+kq
3. layer8+21[view] [source] 2025-01-22 00:09:59
>>TheAce+(OP)
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?

It rather means that they see their only chance for substantial progress in Moar Power!

4. ilaksh+72[view] [source] 2025-01-22 00:19:10
>>TheAce+(OP)
I think the only way you get to that kind of budget is by assuming that the models are like 5 or 10 times larger than most LLMs, and that you want to be able to do a lot of training runs simultaneously and quickly, AND build the power stations into the facilities at the same time. Maybe they are video or multimodal models that have text and image generation grounded in a ton of video data which eats a lot of VRAM.
5. jazzyj+j2[view] [source] 2025-01-22 00:20:13
>>TheAce+(OP)
This announcement is from the same office as the guy that xeeted:

“My NEW Official Trump Meme is HERE! It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW.”

Your calibration is probably fine. Stargate is not a means to achieve AGI; it's a means to start construction on a few million square feet of datacenters, thereby "reindustrializing America".

replies(1): >>iandan+R2
◧◩
6. iandan+R2[view] [source] [discussion] 2025-01-22 00:22:32
>>jazzyj+j2
FWIW Altman sees it as a way to deploy AGI. He's increasingly comfortable with the idea they have achieved AGI and are moving toward Artificial Super Intelligence (ASI).
replies(2): >>davegu+U5 >>aithro+Yt
7. HarHar+05[view] [source] 2025-01-22 00:37:32
>>TheAce+(OP)
The largest GPU cluster at the moment is xAI's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) is $25B, and add $10B for a 1GW nuclear reactor.

This sort of $100-500B budget doesn't sound like training cluster money, more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
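
In very rough numbers (a sketch reusing the ballpark figures above, i.e. ~$25K per H100 from the $2.5B/100K figure and ~$10B per GW of nuclear; not actual pricing):

  # back-of-the-envelope cluster cost, using the assumed unit prices above
  h100_unit_cost = 2.5e9 / 100_000      # ~$25K per GPU, from the ~$2.5B / 100K figure
  nuclear_cost_per_gw = 10e9            # ~$10B per GW, as assumed above

  gpus = 1_000_000
  total = gpus * h100_unit_cost + 1 * nuclear_cost_per_gw
  print(f"~${total / 1e9:.0f}B")        # ~$35B, well short of a $100-500B budget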

replies(2): >>intern+Rx >>anonzz+lS
8. insane+d5[view] [source] 2025-01-22 00:39:20
>>TheAce+(OP)
It's a typical Trump-style announcement -- IT'S GONNA BE HUUUGE!! -- without any real substance or solid commitments.

Remember Trump's BIG WIN of Foxconn investing $10B to build a factory in Wisconsin, creating 13,000 jobs?

That was in 2017. Seven years later, it's employing about 1,000 people, if that. It's not really clear what, if anything, is being made at the partially-built factory. [0]

And everyone's forgotten about it by now.

I expect this to be something along those lines.

[0] https://www.jsonline.com/story/money/business/2023/03/23/wha...

◧◩◪
9. davegu+U5[view] [source] [discussion] 2025-01-22 00:45:19
>>iandan+R2
Do you think Sam Altman ever sits in front of a terminal trying to figure out just the right prompt incantation to get an answer that, unless you already know the answer, has to be verified? Serious question. I personally doubt he is using OpenAI products day to day. Seems like all of this is very premature. But if there are gains to be made from a 7T parameter model, or if there is huge adoption, maybe it will be worth it. I'm sure there will be use for increased compute in general, but that's a lot of capex to recover.
10. Davidz+Y5[view] [source] 2025-01-22 00:45:34
>>TheAce+(OP)
Let me avoid the use of the word AGI here because the term is a little too loaded for me these days.

1) Reasoning capabilities in the latest models are rapidly approaching superhuman levels and continue to scale with compute.

2) Intelligence at a certain level is easier to achieve algorithmically when the hardware improves. There are also more paths to intelligence, often via simpler mechanisms.

3) Most current-generation reasoning models leverage test-time compute and RL in training--both of which can readily make use of more compute. For example, RL on coding against compilers, or on proofs against verifiers.

All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding. On the rest, no comment (I don't know what superhuman means in a domain with no objective evals).

replies(5): >>philip+X8 >>lossol+ui >>rhubar+sM >>viccis+QV1 >>sgt101+vC2
◧◩
11. philip+X8[view] [source] [discussion] 2025-01-22 01:13:41
>>Davidz+Y5
You can't block AGI on a whim and then deploy 'superhuman' without justification.

A calculator is superhuman if you're prepared to put up with its foibles.

replies(1): >>Davidz+Fc
12. dauhak+Lb[view] [source] 2025-01-22 01:30:26
>>TheAce+(OP)
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?

My sense anecdotally from within the space is yes, people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years, but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.

What recent outputs like o1, o3, and DeepSeek-R1 are showing is that that's fine: we now have a new paradigm around test-time compute. For various reasons people think this is going to be more scalable and not run into the kind of data issues you'd get with a pre-training paradigm.

You can definitely debate whether that's true or not, but this is the first time I've really seen people think we've cracked "it", and the rest is scaling, better training, etc.

replies(3): >>Nitpic+oI >>rhubar+ZM >>lm2846+sg1
◧◩◪
13. Davidz+Fc[view] [source] [discussion] 2025-01-22 01:35:28
>>philip+X8
It is superhuman in a very specific domain. I didn't use AGI because its definitions come in one of two flavors.

One: capable of replacing some large proportion of global GDP (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...

Two: it is difficult to find problems that an average human can solve but the model cannot. The problem with this definition is that the distinct nature of AI intelligence and the breadth of tasks mean this bar is probably only cleared after AI is already, in reality, massively superhuman in aggregate. Compare this with Go AIs, which were massively superhuman and often still failed to count ladders correctly--which was also fixed by more scaling.

All in all, I avoid the term AGI because for me AGI means comparing average intelligence on broad tasks relative to humans, and I'm already not sure whether current models achieve that, whereas superhuman research math is clearly not achieved, because humans are still making essentially all of the progress on new results.

14. famous+Hf[view] [source] 2025-01-22 01:54:51
>>TheAce+(OP)
"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.

Maybe your calibration isn't poor. Maybe they really are all wrong, but there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people really genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.

replies(9): >>paul79+vC >>root_a+NI >>rhubar+rS >>skrebb+SV >>nejsjs+Ii1 >>DebtDe+lo1 >>sander+ER1 >>bcrosb+za2 >>ca_tec+bF2
◧◩
15. lossol+ui[view] [source] [discussion] 2025-01-22 02:13:47
>>Davidz+Y5
> All of this points to compute now being basically the only bottleneck to massively superhuman AIs

This is true for brute force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.

1. https://www.bloomberg.com/news/articles/2024-12-19/anthropic...

2. https://www.bloomberg.com/news/articles/2024-11-13/openai-go...

16. petese+Ri[view] [source] 2025-01-22 02:15:57
>>TheAce+(OP)
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

Can't answer that question, but even if the only thing that changed in the next four years was generation getting cheaper and cheaper, we haven't even begun to understand the transformative power of what we have available today. I think we've felt maybe 5-10% of the effects that integrating today's technology can bring, especially if generation costs come down to maybe 1% of what they currently are and the latency of the big models becomes close to instantaneous.

◧◩
17. philom+kq[view] [source] [discussion] 2025-01-22 03:15:08
>>catman+i
But why drop $500B (or even $100B short term) if there is not something there? The numbers are too big.
replies(2): >>camel_+qx >>rf15+VX
18. lmm+Ls[view] [source] 2025-01-22 03:37:07
>>TheAce+(OP)
> current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

Or they think the odds are high enough that the gamble makes sense. Even if they think it's a 20% chance, their competitors are investing at this scale, so their only real options are to keep up or drop out.

◧◩◪
19. aithro+Yt[view] [source] [discussion] 2025-01-22 03:47:04
>>iandan+R2
https://xcancel.com/sama/status/1881258443669172470

  twitter hype is out of control again. 

  we are not gonna deploy AGI next month, nor have we built it.

  we have some very cool stuff for you but pls chill and cut your expectations 100x!
I realize he wrote a fairly goofy blog a few weeks ago, but this tweet is unambiguous: they have not achieved AGI.
replies(1): >>madspi+941
◧◩◪
20. camel_+qx[view] [source] [discussion] 2025-01-22 04:20:41
>>philom+kq
This is an announcement, not a cut check. Who knows how much they'll actually spend; plenty of projects never get started, let alone massive inter-company endeavors.
replies(1): >>dark_g+VE
◧◩
21. intern+Rx[view] [source] [discussion] 2025-01-22 04:25:07
>>HarHar+05
Shouldn't there be a fear of obsolescence?
replies(1): >>HarHar+Vz
◧◩◪
22. HarHar+Vz[view] [source] [discussion] 2025-01-22 04:46:54
>>intern+Rx
It seems you'd need to figure periodic updates into the operating cost of a large cluster, as well as replacing failed GPUs - they only last a few years if run continuously.

I've read that some datacenters run mixed generation GPUs - just updating some at a time, but not sure if they all do that.

It'd be interesting to read something about how updates are typically managed/scheduled.

◧◩
23. paul79+vC[view] [source] [discussion] 2025-01-22 05:15:52
>>famous+Hf
My prediction is Apple loses to OpenAI, who release a Her-like phone (like the movie). She is seen on your lock screen a la a FaceTime-call UI/UX, and she can be skinned to look like whoever, e.g. a deceased loved one.

She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you automagically (or to learn from: what's my friend's bday? his agent tells yours), and she is like a friend. Always there for you at your beck and call, like in the movie Her.

Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.

That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).

replies(4): >>liamwi+RO >>nhinck+O71 >>lm2846+4g1 >>varske+FB1
◧◩◪◨
24. dark_g+VE[view] [source] [discussion] 2025-01-22 05:42:50
>>camel_+qx
The $100B check is already cut, and they are currently building 10 new data centers in Texas.
replies(1): >>__loam+QV
◧◩
25. Nitpic+oI[view] [source] [discussion] 2025-01-22 06:18:04
>>dauhak+Lb
I agree with your take, and actually go a bit further. I think the idea of "diminishing returns" is a bit of a red herring; it's instead a combination of saturated benchmarks (and testing in general) and expectations of "one LLM to rule them all". This might not be the case.

We've seen with OpenAI and Anthropic, and rumoured with Google, that holding back your "best" model and using it to generate datasets for smaller but almost as capable models is one way to go forward. I would say that this shows the "big models" are more capable than it would seem, and that they also open up new avenues.

We know that Meta used Llama 2 to filter and improve its training sets for Llama 3. We are also seeing how "long form" content + filtering + RL leads to amazing things (what people call "reasoning" models). Semantics might be a bit ambitious, but this really opens up the path towards documentation + virtual environments + many rollouts + filtering by SotA models => new datasets for next-gen models.

That, plus optimisations (early exit from Meta, Titans from Google, distillation from everyone, etc.), really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall or move around it.

◧◩
26. root_a+NI[view] [source] [discussion] 2025-01-22 06:21:10
>>famous+Hf
Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"
replies(1): >>famous+qO
◧◩
27. rhubar+sM[view] [source] [discussion] 2025-01-22 07:01:25
>>Davidz+Y5
> 1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.

What would you say is the strongest evidence for this statement?

replies(1): >>__loam+zV
◧◩
28. rhubar+ZM[view] [source] [discussion] 2025-01-22 07:07:01
>>dauhak+Lb
> My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now

My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.

Yesterday I repeatedly used OpenAI's API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human certainly would not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.

Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.

replies(2): >>dauhak+e62 >>srouss+5k8
◧◩◪
29. famous+qO[view] [source] [discussion] 2025-01-22 07:21:08
>>root_a+NI
Yeah people don't rush to say "don't waste money on this dead end" but think about it for a moment.

A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, and your funders must be really damn convinced it's worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project scales of investment. Careers have been nuked for far less.

replies(2): >>__loam+lV >>cibyr+z41
◧◩◪
30. liamwi+RO[view] [source] [discussion] 2025-01-22 07:24:41
>>paul79+vC
My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to remain available at all times your device is. If I can’t rely on it when my signal is weak, or the network/service is down/saturated, its way of working itself into people’s core routines is minimal. So either the model runs locally, in which case I’d argue OpenAI have no moat, or they uncover some secret sauce they’re able to keep contained to their research labs and data centres that’s simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they’ve come to be so dependent on isn’t available to them. Let’s also not discount the fact that Apple are one of the largest manufacturers globally of smartphones, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.
replies(1): >>paul79+QQ
◧◩◪◨
31. paul79+QQ[view] [source] [discussion] 2025-01-22 07:42:54
>>liamwi+RO
Sure, but Microsoft has the expertise, and they own 49 percent of OpenAI if I'm not mistaken. OpenAI uses their expertise and access to hardware to create a GPT-branded AI phone.

I can see your point re: running locally, but there's no reason OpenAI can't release version 0.1, and how many times are you left without an internet connection on your current phone?

Overall I hate Apple now; it's so stale compared to GPT's iPhone app. I nerd rage at dumbass Siri.

◧◩
32. anonzz+lS[view] [source] [discussion] 2025-01-22 08:00:35
>>HarHar+05
Don't they say in the article that it is also for scaling up power and datacenters? That's the big cost here.
replies(1): >>HarHar+Yn1
◧◩
33. rhubar+rS[view] [source] [discussion] 2025-01-22 08:01:44
>>famous+Hf
The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.

The argument presented in the quote there is: “everyone in AI foundation companies are putting money into AI, therefore we must be near AGI.”

The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.

replies(3): >>LeftHa+MB2 >>sandsp+SI3 >>famous+vv4
◧◩◪◨
34. __loam+lV[view] [source] [discussion] 2025-01-22 08:30:56
>>famous+qO
We're talking about Masayoshi Son here lol.
◧◩◪
35. __loam+zV[view] [source] [discussion] 2025-01-22 08:33:23
>>rhubar+sM
Well, the contrived benchmarks made up by the industry selling the models do seem to be improving.
replies(1): >>drdaem+382
◧◩◪◨⬒
36. __loam+QV[view] [source] [discussion] 2025-01-22 08:36:23
>>dark_g+VE
A state with famously stable power infrastructure.
replies(1): >>nejsjs+mj1
◧◩
37. skrebb+SV[view] [source] [discussion] 2025-01-22 08:36:24
>>famous+Hf
> there's a tendency here to these these people behind the scenes are all charlatans, fueling hype without equal substance hoping to make a quick buck before it all comes crashing down, but i don't think that's true at all. I think these people really genuinely believe they're going to get there.

I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.

replies(3): >>Heatra+801 >>rglove+hO1 >>famous+Iy4
38. tim333+XW[view] [source] 2025-01-22 08:46:07
>>TheAce+(OP)
>AI development has figured out enough to brute force a path towards AGI?

I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way and has recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out the algorithms to use it, so that bit gets sorted in a few years (as written about by Kurzweil, Wait But Why, and similar).

So it's not so much brute-forcing AGI as that exponential growth makes it inevitable at some point, and that point is probably quite soon. At least that seems to be what they are betting.

The annual global spend on human labour is ~$100tn, so if you either replace that with AGI, or just add $100tn of AGI output and double GDP, it's quite a lot of money.

◧◩◪
39. rf15+VX[view] [source] [discussion] 2025-01-22 08:53:18
>>philom+kq
because you put your own people on the receiving end too AND invite others to join your spending spree.
40. smartm+nZ[view] [source] 2025-01-22 09:07:42
>>TheAce+(OP)
I see it somewhat differently. It is not that technology has reached a level where we are close to AGI and we just need to throw in a few more coins to close the final gap. It is probably the other way around. We can see and feel that human intelligence is being eroded by the widespread use of LLMs for tasks that used to be solved by brain work. Thus, General Human Intelligence is declining and is approaching the level of current Artificial Intelligence. If this process can be accelerated by a bit of funding, the point where Big Tech can overtake public opinion making will be reached earlier, which in turn will make many companies and individuals richer faster, and the return on investment will come sooner.
◧◩◪
41. Heatra+801[view] [source] [discussion] 2025-01-22 09:14:38
>>skrebb+SV
NFTs couldn't pass the Turing test, something I didn't expect to witness in my lifetime.

The two are qualitatively different.

replies(4): >>sander+1S1 >>aylmao+hf2 >>root_a+XN2 >>skrebb+QS3
◧◩◪◨
42. madspi+941[view] [source] [discussion] 2025-01-22 09:46:06
>>aithro+Yt
Isn't this because AGI is defined as something like $100 billion of (yearly?) profits in their contract with Microsoft?
◧◩◪◨
43. cibyr+z41[view] [source] [discussion] 2025-01-22 09:51:39
>>famous+qO
The Manhattan project cost only $2 billion (about $30 billion adjusting for inflation to today).
replies(1): >>zeroon+Gd3
◧◩◪
44. nhinck+O71[view] [source] [discussion] 2025-01-22 10:23:18
>>paul79+vC
Sorry, you live in a different world. Google Glass was aggressively lame, the Ray-Bans only slightly less so.

But pulling out your phone to talk to it like a friend...

replies(1): >>paul79+pd1
◧◩◪◨
45. paul79+pd1[view] [source] [discussion] 2025-01-22 11:23:28
>>nhinck+O71
Well, I use GPT daily to get things done and use it as a knowledge base. I text and talk to it throughout the day. I also think it's called "chat"GPT for a reason: it will evolve to the point where you feel like you are talking to a human. Though this human is your assistant and does everything for you, and interfaces with other AI agents to book travel, learn your friends'/family's schedules, and handle anything you now do on the web, with an AI agent on the other side for your AI agent to interface with.

Maybe you have not seen the 2013 movie "Her"? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.

Overall this is what I see happening, and I'm excited for some of it or possibly all of it to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen ... we'll see :)

replies(1): >>shmeee+Kb4
◧◩◪
46. lm2846+4g1[view] [source] [discussion] 2025-01-22 11:49:36
>>paul79+vC
I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get there.
replies(1): >>paul79+r02
◧◩
47. lm2846+sg1[view] [source] [discussion] 2025-01-22 11:53:05
>>dauhak+Lb
Yeah that's called wishful thinking when it's not straight up pipe dreams. All these people have horses in the race
◧◩
48. nejsjs+Ii1[view] [source] [discussion] 2025-01-22 12:10:34
>>famous+Hf
I am hoping it is just the usual ponzi thing.
replies(1): >>ajmurm+rQ1
◧◩◪◨⬒⬓
49. nejsjs+mj1[view] [source] [discussion] 2025-01-22 12:16:20
>>__loam+QV
$50B is to pay miners not to mine.
◧◩◪
50. HarHar+Yn1[view] [source] [discussion] 2025-01-22 12:47:17
>>anonzz+lS
There's the servers and data center infrastructure (cooling, electricity) as well as the GPUs of course, but if we're talking $10B+ of GPUs in a single datacenter, it seems that would dominate. Electricity generation is also a big expense, and it seems nuclear is the most viable option although multi-GW solar plants are possible too in some locations. The 1GW ~ $10B number I suggested is in the right ballpark.
◧◩
51. DebtDe+lo1[view] [source] [discussion] 2025-01-22 12:51:06
>>famous+Hf
>Maybe they really are all wrong

All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.

replies(4): >>skepti+XH1 >>anthon+RR1 >>whipla+9C2 >>famous+KY2
52. AdamN+3p1[view] [source] 2025-01-22 12:56:13
>>TheAce+(OP)
Yes, that is exactly what the big Aha! moment was. It has now been shown that doing these $100MM+ model builds is what it takes to have a top-tier model. The big moat is not just the software, the math, or even the training data; it's the budget to do the giant runs. Of course, having a team that iterates on all four regularly is where the magic is.
◧◩◪
53. varske+FB1[view] [source] [discussion] 2025-01-22 14:16:39
>>paul79+vC
Very insightful take on agents interacting with agents; thanks for sharing.

Re: the Her phone - I see people already trying to build this type of product, one example: https://www.aphoneafriend.com

54. sesm+YG1[view] [source] 2025-01-22 14:50:33
>>TheAce+(OP)
To me it looks like a strategic investment in data center capacity, which should drive domestic hardware production, improvements in electrical grid, etc. Putting it all under AI label just makes it look more exciting.
◧◩◪
55. skepti+XH1[view] [source] [discussion] 2025-01-22 14:55:26
>>DebtDe+lo1
You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.

And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.

replies(2): >>mrguyo+6x2 >>famous+nZ2
◧◩◪
56. rglove+hO1[view] [source] [discussion] 2025-01-22 15:32:12
>>skrebb+SV
It's identical energy. A significant number of people are attaching their hopes and dreams to a piece of technology while deluding themselves about the technical limitations of that technology. It's all rooted in greed. Relatively few are in it to push humanity forward, most are just trying to "get theirs."
◧◩◪
57. ajmurm+rQ1[view] [source] [discussion] 2025-01-22 15:45:49
>>nejsjs+Ii1
How would this be a Ponzi scheme? Who are the leaf nodes ending up holding the bag?
replies(2): >>sander+2U1 >>nejsjs+8U1
◧◩
58. sander+ER1[view] [source] [discussion] 2025-01-22 15:50:51
>>famous+Hf
I think it will be in between, like most things end up being. I don't think they are charlatans at all, but I think they're probably a bit high on their own supply. I think it's true that "everything is about to change", but I think that change will look more like the status quo than the current hype cycle suggests. There are a lot of periods in history when "everything changed", and I believe we're already a number of years into one of those periods now, but in all those cases, despite "everything" changing, a perhaps surprising number of things remained the same. I think this will be no different than that. But it's hard, impossible really, to accurately predict where the chips will land.
◧◩◪
59. anthon+RR1[view] [source] [discussion] 2025-01-22 15:52:04
>>DebtDe+lo1
I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.

It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.

replies(1): >>sander+eT1
◧◩◪◨
60. sander+1S1[view] [source] [discussion] 2025-01-22 15:53:06
>>Heatra+801
You're missing the point. The hype is the same, because the incentives are the same.

I agree with you that there is significantly more there there with AI, but I agree with the parent that the hype cycles are essentially indistinguishable.

◧◩◪◨
61. sander+eT1[view] [source] [discussion] 2025-01-22 16:01:05
>>anthon+RR1
Yeah, in my mind, the distinction worth making is where the inflection point from exponential growth to plateau in the s-curve of usefulness is. Have we already hit it? Are we going to hit it soon? Is it far in the future? Or is it exponential from here straight to "the singularity"?

Hard to predict!

If we've already hit it, this has already been a very short period of time during which we've seen incredibly valuable new technology commercialized, and that's nothing to sneeze at, and fortunes have and will be rightly made from it.

If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.

But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.

◧◩◪◨
62. sander+2U1[view] [source] [discussion] 2025-01-22 16:06:44
>>ajmurm+rQ1
Investors, mostly private - eg. SoftBank and all the other deep pockets funneling money into this - but also public, because lots of people are invested in Nvidia, Microsoft, and Google, who will be directly affected if the bubble bursts, and just everyone invested in the markets generally, as this bubble bursting would already probably be more broadly damaging than even the dot com bust was.

Personally, I do expect a big correction at some point, even if it never reaches the point of bubble bursting. But I have no idea when I expect it to happen, so this isn't, like, an investable thesis.

replies(1): >>ajmurm+8W1
◧◩◪◨
63. nejsjs+8U1[view] [source] [discussion] 2025-01-22 16:07:25
>>ajmurm+rQ1
https://www.aboutamazon.com/news/aws/amazon-invests-addition...

Not this specifically, but this kinda thing. If I am getting billions like this, I wanna keep this gravy going. And it comes from shareholders ultimately.

replies(1): >>ajmurm+AW1
◧◩
64. viccis+QV1[view] [source] [discussion] 2025-01-22 16:15:58
>>Davidz+Y5
>reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute

I still have a pretty hard time getting it to tell me how many sisters Alice has. I think this might be a bit optimistic.

replies(1): >>Sketch+yG2
◧◩◪◨⬒
65. ajmurm+8W1[view] [source] [discussion] 2025-01-22 16:16:44
>>sander+2U1
So unlike with a regular Ponzi scheme, most of the money is just wasted?
replies(2): >>nejsjs+SX1 >>sander+O62
◧◩◪◨⬒
66. ajmurm+AW1[view] [source] [discussion] 2025-01-22 16:18:09
>>nejsjs+8U1
It's just being spent though, no? Sounds more like a potential waste of money than a Ponzi scheme.
◧◩◪◨⬒⬓
67. nejsjs+SX1[view] [source] [discussion] 2025-01-22 16:23:14
>>ajmurm+8W1
Well Madoff funnelled it into lifestyle.

Technically you are correct. A ponzi is a single entity paying returns from new marks. It is a straight con.

But some systems can be ponzi-like in that they require more and more investment and people get rich by selling into that. Bitcoin is an example.

68. MetaWh+IZ1[view] [source] 2025-01-22 16:32:24
>>TheAce+(OP)
> I don't know how to make sense of this level of investment.

The thing about investments, specifically in the world of tech startups and VC money, is that speculation is not something you merely capitalize on as an investor, it's also something you capitalize on as a business. Investors desperately want to speculate (gamble) on AI to scratch that itch, to the tune of $500 billion, apparently.

So this says less about, 'Are we close to AGI?' or, 'Is it worth it?' and more about, 'Are people really willing to gamble this much?'. Collectively, yes, they are.

replies(1): >>mistri+pG5
◧◩◪◨
69. paul79+r02[view] [source] [discussion] 2025-01-22 16:35:59
>>lm2846+4g1
Do you use chatGPT many times throughout your day (i do)? If so did you ever want it to find the best hotel and book it for you? With chatGPT you can not do this now as all the travel websites do not have their own AI Agents for GPT to communicate with. Once they do you can type to GPT back and forth or talk to it to get anything and everything you now use the web to do. Yet your human like friend (AI friend/agent) will do for you.. U dont have to talk to it but if it's just like a human why not talk to it to do everything for you & use it as a knowledgebase. If you aren't aware you can now have a full back n forth conversation with chatGPT(it's not dumbass Siri).

All technological advances that are adopted are ones that made life easier and for some cooler then what they were once using (cell phone to iPhone put the web in our pocket but using your iPhone while driving is dangerous but talking to your human like friend isnt). Check out the movie H.E.R. as what Im describing is mostly what i describe above.

Time will tell if any of what im saying comes to fruition, but Silicon Valley is all a buzz about AI Agents in the last month or two and going forward.

replies(1): >>lm2846+pO3
◧◩◪
70. dauhak+e62[view] [source] [discussion] 2025-01-22 17:07:05
>>rhubar+ZM
Yeah obviously motivations are murky and all over the place, no one's free of bias. I'm not taking a strong stance on whether they're right or not or how much of it is motivated reasoning, I just think at least quite a bit is genuine (I'm mainly basing this off researchers I know who have a track record of being very sober and "boring" rather than the flashy Altman types)

To your point, yeah the models still suck in some surprising ways, but again it's that thing of they're the worst they're ever going to be, and I think in particular on the reasoning issue a lot of people are quite excited that RL over CoT is looking really really promising for this.

I agree with your broader point though that I'm not sure how close we are and there's an awful lot of noise right now

replies(1): >>rhubar+wD5
◧◩◪◨⬒⬓
71. sander+O62[view] [source] [discussion] 2025-01-22 17:09:07
>>ajmurm+8W1
Not sure what you mean by "wasted"? Like a regular Ponzi scheme, there are many opportunities for the people at the top to extract value out into cash, while people who "got in" on the scheme later are left holding the bag when the bubble bursts.
replies(1): >>ajmurm+Lh2
◧◩◪◨
72. drdaem+382[view] [source] [discussion] 2025-01-22 17:15:14
>>__loam+zV
Well, it's a huge jump, but it's still a jump from "it generates utter illogical nonsense when it tries to simulate reason" to "it makes some correct guesses that start to resemble reasoning if we squint at it really hard."

Which is - no doubt - an astonishing achievement, but absolutely not like the "AI" hype train people try to paint it.

The "rapidly approaching" part is true in terms of the velocity, but all of this are just baby steps while walking upright properly is way beyond the horizon.

I wouldn't mind being wrong about this, of course.

◧◩
73. bcrosb+za2[view] [source] [discussion] 2025-01-22 17:27:07
>>famous+Hf
So they're either wrong or building Skynet.
◧◩◪◨
74. aylmao+hf2[view] [source] [discussion] 2025-01-22 17:52:57
>>Heatra+801
Worth pointing out: the Turing test is pretty much just a thought experiment. Turing never considered it a test of "intelligence", or any other human quality. Many people have criticized its use as a measure of such.

[1] https://en.wikipedia.org/wiki/Turing_test#Weaknesses

replies(1): >>tallda+f23
◧◩◪◨⬒⬓⬔
75. ajmurm+Lh2[view] [source] [discussion] 2025-01-22 18:04:06
>>sander+O62
Yes, usually people at the top extract the cash. Like Bernie Madoff just spent the money for his enjoyment. In this case the money goes to people building the data centers, providing resources and the engineers at OpenAI who are actually working for the money.
replies(1): >>sander+yI2
◧◩◪◨
76. mrguyo+6x2[view] [source] [discussion] 2025-01-22 19:36:39
>>skepti+XH1
Doesn't OpenAI explicitly have a "definition" of AGI that's just "it makes some money"?
◧◩◪
77. LeftHa+MB2[view] [source] [discussion] 2025-01-22 20:05:26
>>rhubar+rS
Absolutely. Look at how Sam Altman speaks.

If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc). It encourages people to mistake the capabilities of the models and ascribe to them all of the traits of AI they've seen in TV and movies.

Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.

[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...

[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...

replies(1): >>dinkum+Mb6
◧◩◪
78. whipla+9C2[view] [source] [discussion] 2025-01-22 20:07:54
>>DebtDe+lo1
Who says they will stick to autoregressive LLMs?
◧◩
79. sgt101+vC2[view] [source] [discussion] 2025-01-22 20:11:15
>>Davidz+Y5
What is the evidence for 1) ? I thought that the latest models were getting "somewhere" with fairly trivial reasoning tests like ARC-1
replies(1): >>sealec+1ve
◧◩
80. ca_tec+bF2[view] [source] [discussion] 2025-01-22 20:29:13
>>famous+Hf
I am not qualified to make any assumptions but I do wonder if a massive investment into computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.

Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?

replies(1): >>misswa+tt6
◧◩◪
81. Sketch+yG2[view] [source] [discussion] 2025-01-22 20:37:39
>>viccis+QV1
They plugged the hole for "how many 'r's in 'strawberry'", but I just asked it how many "l"s are in "lemolade" (spelling intentional) and it told me 1. If you make it close to, but not exactly, a word it would be expecting, it falls over.
replies(1): >>tawm+FQ2
◧◩◪◨⬒⬓⬔⧯
82. sander+yI2[view] [source] [discussion] 2025-01-22 20:50:30
>>ajmurm+Lh2
A lot of it does - and that's great! - but a lot of it accrues to the owners of the businesses involved.

If this is a bubble and it bursts in a few years, a lot of investors in specific companies, and in the market broadly, will lose a lot of money, but Sam Altman and Jensen Huang will remain very wealthy.

I'm a capitalist and I think there are good reasons for wealth to accrue to those who take risks and drive toward technological progress. But it just also is the case that they are incentivized to hype their companies, even if it risks getting out over their skis and leads to a bubble which eventually bursts. There are just lots of ways to extract wealth prior to a bubble bursting, so the downsides of unwarranted hype are not as acute as they might otherwise be.

◧◩◪◨
83. root_a+XN2[view] [source] [discussion] 2025-01-22 21:23:19
>>Heatra+801
I'm not so sure it passes the Turing test, since you can trivially determine that the conversation partner is a machine by asking it a trick question or offering it a "jailbreak"-style prompt.
◧◩◪◨
84. tawm+FQ2[view] [source] [discussion] 2025-01-22 21:43:37
>>Sketch+yG2
I wonder if those special cases are handled by a bunch of if/else statements wrapped around the model :)
replies(1): >>Sketch+nR2
◧◩◪◨⬒
85. Sketch+nR2[view] [source] [discussion] 2025-01-22 21:49:50
>>tawm+FQ2
That's my wonder too.
◧◩◪
86. famous+KY2[view] [source] [discussion] 2025-01-22 22:48:11
>>DebtDe+lo1
It's obviously not taken to mean literally everybody.

Whatever LeCun says (and really, even he has said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and is pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to focus their money on tells you a whole lot.

◧◩◪◨
87. famous+nZ2[view] [source] [discussion] 2025-01-22 22:52:57
>>skepti+XH1
>You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught in the hype is just blatantly false.

No, it doesn't have to be literally everybody to make the point.

replies(1): >>skepti+193
◧◩◪◨⬒
88. tallda+f23[view] [source] [discussion] 2025-01-22 23:18:42
>>aylmao+hf2
Yep - the "value" of a Turing-capable system has been questioned for a while now. We watched Markov chains and IRC bots clear the Turing test on a regular basis in the mid-2000s, and all we got out of that was better automated scamming.

Even now, as we have fully capable conversational models, we don't really have any great immediate applications. Our efforts at making them "think" are yielding marginal returns.

◧◩◪◨⬒
89. skepti+193[view] [source] [discussion] 2025-01-23 00:10:38
>>famous+nZ2
Here's why I know that OpenAI is stuck in a hype cycle. For all of 2024, the cry from employees was "PhD level models are coming this year; just imagine what you can do when everyone has PhD level intelligence at their beck and call". And, indeed, PhD level models did arrive...if you consider GPQA to be a benchmark that is particularly meaningful in the real world. Why should I take this year's pronouncements seriously, given this?

OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint... it's not model capability in a vacuum).

Yann indeed does believe that AGI will arrive in a decade, but the important thing is that he is honest that this is an uncertain estimate and is based off of extrapolation.

◧◩◪◨⬒
90. zeroon+Gd3[view] [source] [discussion] 2025-01-23 00:51:25
>>cibyr+z41
It would probably be more reasonable to adjust for US GDP. That would put $2 billion back then at around the same as $250 billion today. So only about 2x off.
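
Roughly (a sketch; the GDP figures below are my ballpark assumptions, on the order of $230B nominal US GDP in 1945 vs ~$29T today):

  # scale the 1945 cost by GDP share rather than CPI (ballpark GDP figures, assumed)
  manhattan_1945 = 2e9
  us_gdp_1945 = 230e9
  us_gdp_today = 29e12
  print(manhattan_1945 * us_gdp_today / us_gdp_1945 / 1e9)  # ~252, i.e. roughly $250B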
◧◩◪
91. sandsp+SI3[view] [source] [discussion] 2025-01-23 06:17:00
>>rhubar+rS
The newest US president announced this within 48 hours of assuming office. Hype alone couldn't set such a big wheel in motion.
replies(1): >>rhubar+hV3
◧◩◪◨⬒
92. lm2846+pO3[view] [source] [discussion] 2025-01-23 07:19:39
>>paul79+r02
Every single point you listed seems like a worse version of what I already have.

I want to interact with real people, not bots, I'm already spending most of my time wasting my life in front of a fucking screen for work

replies(1): >>paul79+se5
◧◩◪◨
93. skrebb+QS3[view] [source] [discussion] 2025-01-23 08:07:00
>>Heatra+801
I agree, AI is fucking spectacular and NFTs have no substance. But at the same time, neither AI nor NFTs have substantially affected my life so far, so I experience a very weird cognitive dissonance when the AI hype crowd gasps on twitter. This is exactly the same feeling I had when I felt like I was the only one in my twitter bubble who didn't think NFTs were the shizzle.

I mean, AI the tech can be spectacular and the hype can be overblown, right? I'm not even sure that the hype is overblown, but it sure feels like the kind of hype that we'll say, a few years from now, was overblown.

◧◩◪◨
94. rhubar+hV3[view] [source] [discussion] 2025-01-23 08:32:07
>>sandsp+SI3
Again, that’s a discussion about enthusiasm not technology.

I just want an objective opinion from someone who has a deep understanding of the cutting edge.

It’s maddening to try to plan for a future which everyone is incentivised and permitted to fabricate.

◧◩◪◨⬒
95. shmeee+Kb4[view] [source] [discussion] 2025-01-23 11:32:08
>>paul79+pd1
I suppose most anybody talking about this topic has seen Her by now (and if they haven't, they should, it's both a good movie and very relevant). The problem is rather that not everybody shares your enthusiasm about the utopia it depicts.

This is because it's also a dystopia in disguise. It's a social criticism and a cautionary tale about the way fetishizing technology is emotionally crippling us as individuals in a society. It kind of amazes me that this aspect seems to go over some people's heads.

It's obviously true what Booker said: What one person considers an ideal dream might to another person seem a nightmare.

replies(1): >>paul79+3m5
◧◩◪
96. famous+vv4[view] [source] [discussion] 2025-01-23 14:15:37
>>rhubar+rS
>The problem is, they are hugely incentivised to hype to raise funding.

Hype is extremely normal. Everyone with a business gets the chance to hype for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour billions.

Satya just said he has his $80 billion ready. Is Microsoft an "AI foundation company"? Is Google? Is Meta?

The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.

And I'm not saying this means the investment is guaranteed to be worth it.

◧◩◪
97. famous+Iy4[view] [source] [discussion] 2025-01-23 14:36:57
>>skrebb+SV
Well Crypto had nowhere near the uptake [0] and investment (even leaving this announcement aside, several of the biggest tech giants are pouring billions into this).

At any rate, I'm not saying this means that all this investment is guaranteed to pay off.

[0] With 300 million weekly active users/1 billion messages per day and #8 in visits worldwide the last few months just 2 years after release, ChatGPT is the software product with the fastest adoption ever.

◧◩◪◨⬒⬓
98. paul79+se5[view] [source] [discussion] 2025-01-23 19:12:43
>>lm2846+pO3
Sure, but the AI genie is out of the bottle, and the tech billionaires are barreling us towards AI friends/agents/assistants. If we stop the momentum, China picks it up, and American prosperity and security are at risk.

What I describe is much like the movie Her; Sam Altman, OpenAI's CEO, asked Scarlett Johansson (the voice of the AI in that movie) to be the voice of GPT. GPT is now a little like Her the movie, as you can have a full conversation with it, unlike Siri. Just atm you don't see how GPT looks .. it doesn't look like a FaceTime call with a human AI friend/assistant (how your AI assistant/friend looks .. it could look and sound like a deceased loved one .. that's my own crazy idea, not from the movie Her). Yet maybe in the future it will.. I'm betting it will, but it's only a guess and time will tell.

I'm awaiting your downvote :) but will revisit this thread in a few years or more. Well, if I'm right ;)

replies(1): >>lm2846+lU6
◧◩◪◨⬒⬓
99. paul79+3m5[view] [source] [discussion] 2025-01-23 20:00:16
>>shmeee+Kb4
Indeed, yet ChatGPT is already like Her, though there's no human-like face to it ATM.

I'm just jumping ahead, using what was seen in Her to envision where we are (possibly) headed, as well as adding my own crazy idea ... your AI assistant friend, seen on your lock screen via a FaceTime-call UI/UX, looks and sounds like a deceased loved one. Mom still guiding you through life.

replies(1): >>shmeee+ZO5
◧◩◪◨
100. rhubar+wD5[view] [source] [discussion] 2025-01-23 22:08:32
>>dauhak+e62
Thanks, that’s helpful.

“The worst they’re going to be” line is a bit odd. I hear it a lot, but surely it’s true of all tech? So why are we hearing it more now? Perhaps that is a sign of hype?

replies(1): >>dauhak+B97
◧◩
101. mistri+pG5[view] [source] [discussion] 2025-01-23 22:29:50
>>MetaWh+IZ1
Note that the $500 billion number is bravado/aspirational. MSFT's Nadella said he has $80B to contribute? It's a live-auction horse race in some ways, it seems.
◧◩◪◨⬒⬓⬔
102. shmeee+ZO5[view] [source] [discussion] 2025-01-23 23:55:54
>>paul79+3m5
That sounds creepy as hell to me. Are you serious, or is that an idea for a horror movie?

Edit: aaaand right after posting I stumble across a documentary running on TV in this very moment, in which a dying guy trained an AI on himself to accompany his widow after his death. Seems you're not the only one to find that desirable...

◧◩◪◨
103. dinkum+Mb6[view] [source] [discussion] 2025-01-24 05:03:06
>>LeftHa+MB2
Oh, surely Larry Ellison is a trustworthy sort of fellow, right? :)
◧◩◪
104. misswa+tt6[view] [source] [discussion] 2025-01-24 09:51:01
>>ca_tec+bF2
I'm not a cryptographer, nor am I good with math (actually I suck badly; consider yourself warned...), but I am curious about how threatened password hashes should feel if the 'AI juggernauts' suddenly fancy themselves playing on the red team, so I quickly did some (likely poor) back-of-the-napkin calculations.

'Well known' password notwithstanding, let's use the following as a password:

correct-horse-battery-staple

This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password was protected by SHA2-256 (assuming no cryptographic weaknesses exist (I haven't checked, purely for academic purposes)), and that at least 50% of hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).

I looked up a random benchmark for hashcat, and found an average of 20 gigahashes per second (GH/s) for a single RTX 4090.

If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s (combined firepower of 2,000 GH/s) and absolutely perfect running conditions, it would take on the order of eleven sextillion (1.1 × 10^22) years to crack. Earth will be long gone by the time that rolls around.

Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): about one quintillion (1.1 × 10^18) years.

Using some recommended password specifications from NIST - 15 characters comprising upper- and lower-case letters, numbers, and special characters - let's try:

dXIl5p*Vn6Gt#BH

Despite the higher complexity, this password only just ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around three hundred and thirty (330) years to have a 50% chance of success. My calculator didn't even turn my answer into a scientific number!

More alarming still is when 1,000,000 RTX 4090s get sicced on the shorter hashed password: around twelve (12) days to cover half of this hashed password's keyspace.

I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.

All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.
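
If anyone wants to redo the napkin math with different assumptions, it boils down to this (a rough sketch reusing the combination counts and the 20 GH/s-per-card figure from above; the helper name is just for the sketch):

  # time to cover 50% of a keyspace at a given hash rate (assumptions as above)
  SECONDS_PER_YEAR = 31_557_600
  GH = 1e9  # hashes per second in one GH/s

  def years_to_half_keyspace(combinations, gpus, ghps_per_gpu=20):
      hashes_needed = combinations / 2               # 50% of the keyspace
      rate = gpus * ghps_per_gpu * GH                # total hashes per second
      return hashes_needed / rate / SECONDS_PER_YEAR

  print(years_to_half_keyspace(1.39e42, 100))        # ~1.1e22 years (28-char passphrase)
  print(years_to_half_keyspace(1.39e42, 1_000_000))  # ~1.1e18 years
  print(years_to_half_keyspace(4.11e22, 100))        # ~330 years (15-char password)
  print(years_to_half_keyspace(4.11e22, 1_000_000))  # ~0.03 years, i.e. ~12 days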

◧◩◪◨⬒⬓⬔
105. lm2846+lU6[view] [source] [discussion] 2025-01-24 14:42:41
>>paul79+se5
> If we stop the momentum China picks it up and American prosperity and security is at risk.

How so ?

replies(1): >>paul79+Vec
◧◩◪◨⬒
106. dauhak+B97[view] [source] [discussion] 2025-01-24 16:01:00
>>rhubar+wD5
Yeah that's a fair point! It's def a more general tech thing, but I think there are a couple of specific reasons why it comes up more here though. Firstly, I think most tech does not improve at the insane rate that AI historically has, so people's perception of capabilities becomes out of date just incredibly rapidly here (think about how long people were banging on about "AI can't draw hands!" well after better models came out that could). If you think of the line as a way to say "don't anchor on what it can do today!" then it feels more appropriate to go on about this for a more rapidly-changing field.

Secondly, I think there's a tendency in AI for some ppl to look at failures of models and attribute it to some fundamental limitation of the approach, rather than something that future models will solve. So I think the line also gets used as short-hand for "Don't assume this limitation is inherent to the approach". I think in other areas of tech there's less of a tendency to try to write off entire areas because of present-day limitations, hence the line coming up more often

So you're right that the line is kind of universally applicable in tech, I guess I just think the kinds of bad arguments that warrant it as a rejoinder are more common around AI?

◧◩◪
107. srouss+5k8[view] [source] [discussion] 2025-01-25 02:26:08
>>rhubar+ZM
Summarizing is quite difficult. You need to keep the salient points and facts.

If anyone has experience on getting this right, I would like to know how you do it.

◧◩◪◨⬒⬓⬔⧯
108. paul79+Vec[view] [source] [discussion] 2025-01-26 20:49:12
>>lm2846+lU6
AI is the biggest new technology, and America controls/drives it while all other countries follow along. We are, and continue to be, number one in AI technology, and all other countries come to us and/or follow us. That brings America billions to trillions of dollars, and the more money we have, the more secure and prosperous America is and continues to be. As well as creating new AI tech and using it to further protect America from adversaries.

The genie is out of the bottle, and America must keep its momentum in AI up, ahead of all other countries, for its continued prosperity and security!

◧◩◪
109. sealec+1ve[view] [source] [discussion] 2025-01-27 16:13:47
>>sgt101+vC2
It may be that you can just find the solution for these tests by interpolating from a very large dataset.
[go to top]