zlacker

[parent] [thread] 25 comments
1. samrus+(OP)[view] [source] 2025-07-24 21:13:29
Agentic AI? How exactly does a chip manufacturer focus on agentic AI? That's software. How does riding that hype bubble help them make better chips?
replies(11): >>wicked+B >>ZiiS+C >>einrea+X >>wmf+A1 >>curiou+d4 >>bongod+f4 >>xt00+Fa >>lbrito+Qe >>wrs+0f >>insane+Cl >>mhh__+Ou
2. wicked+B[view] [source] 2025-07-24 21:17:09
>>samrus+(OP)
For the same reason Kodak presented KodakCoin... The stock price boost of joining the herd.
3. ZiiS+C[view] [source] 2025-07-24 21:17:11
>>samrus+(OP)
Wanting to be Nvidia is fairly understandable.
replies(3): >>zahllo+08 >>aleph_+I8 >>sidewn+IM
4. einrea+X[view] [source] 2025-07-24 21:18:16
>>samrus+(OP)
Will be answered by the ASI - soon. /s

But to be serious: Intel creates software too (compilers, for example), and Nvidia provides software as well. It's also nice to sell licenses.

5. wmf+A1[view] [source] 2025-07-24 21:21:25
>>samrus+(OP)
This sounds like reassurance that they aren't chasing last year's trends. On a technical level I don't know of any difference between inference and agentic inference.
replies(1): >>samrus+p5
6. curiou+d4[view] [source] 2025-07-24 21:34:48
>>samrus+(OP)
Oh, I don't know. Maybe build chips that do things 10x more efficiently and sell them at a lower cost to compete?

It _is_ a hype bubble, but it is also an S-curve. Intel has missed the AI boat so far; if they are trying to catch up, I would encourage them to try. Building marginally better x86 chips might not cut it anymore.

replies(1): >>samrus+i6
7. bongod+f4[view] [source] 2025-07-24 21:34:53
>>samrus+(OP)
Are you suggesting that agentic AI is only used to write software?
replies(1): >>samrus+d5
8. samrus+d5[view] [source] [discussion] 2025-07-24 21:39:19
>>bongod+f4
No, I'm saying that agentic AI itself is software. The part that requires hardware is just the neural network; the agentic part is all software. So how does Intel focus on agentic AI beyond just making better datacenter GPUs? That's like a farmer saying they're going to focus on wedding cakes: they grow wheat, and the wheat gets used for whatever demand exists down the line. They can only focus on growing the best wheat they can.

Is this just marketing?

9. samrus+p5[view] [source] [discussion] 2025-07-24 21:40:30
>>wmf+A1
There is none. That's what I'm saying. There is literally no difference, from the GPU's perspective, between running a neural network for a normal LLM and for agentic AI.
10. samrus+i6[view] [source] [discussion] 2025-07-24 21:43:59
>>curiou+d4
That's fine. Great, even. But that's just normal neural net inference. Why mention agentic AI over just AI? The GPU doesn't care if the inference is being done for object detection or chain of thought. Intel can only make GPUs; their products don't care about the software at the app level.

Maybe they mean the extra VRAM needed for agentic AI? But then the sane thing to say would be that they'll offer more compute for AI.

It's just an unhinged thing for a chip manufacturer to say.
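To make the point concrete, here is a toy sketch (all function names made up, no real model or GPU involved): the "agentic" part is ordinary control flow wrapped around the same model call any LLM app makes, so the hardware sees nothing new.

```python
def run_model(prompt: str) -> str:
    # Stand-in for the GPU forward pass; identical for "plain" and "agentic" use.
    if "search result" in prompt:
        return "DONE: sunny"
    return "TOOL:search(weather)"

def run_tool(call: str) -> str:
    # App-level tool execution: ordinary CPU-side software, no GPU involved.
    return "search result: sunny"

def agent_loop(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        out = run_model(context)          # the only hardware-relevant step
        if out.startswith("DONE:"):
            return out.removeprefix("DONE:").strip()
        context += "\n" + run_tool(out)   # everything else is just software
    return context

print(agent_loop("what is the weather?"))  # -> sunny
```

The loop, the tool dispatch, and the context accumulation are all plain software; only `run_model` ever touches an accelerator.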

replies(1): >>martin+uv
11. zahllo+08[view] [source] [discussion] 2025-07-24 21:52:01
>>ZiiS+C
And Intel do make GPUs.
◧◩
12. aleph_+I8[view] [source] [discussion] 2025-07-24 21:55:55
>>ZiiS+C
I am not sure whether NVidia's current run of luck will last.
replies(3): >>roboro+0j >>Zigurd+Bj >>electr+fJ
13. xt00+Fa[view] [source] 2025-07-24 22:07:36
>>samrus+(OP)
Inference / Agentic AI implies "running models performantly using CPU cores" most likely (maybe with some optimizations / special AVX512 stuff) -- so essentially "welp, no sense in trying to build GPUs, we are too far behind nvidia to catch up".
14. lbrito+Qe[view] [source] 2025-07-24 22:38:09
>>samrus+(OP)
Isn't hardware software-designed though?
15. wrs+0f[view] [source] 2025-07-24 22:39:24
>>samrus+(OP)
Intel makes a lot of software. In addition to internal tools, they try to make things go faster on their chips (e.g. the Intel C++ compiler), and try to accelerate new areas of software so people will need more chips (this focus varies over time, of course).
16. roboro+0j[view] [source] [discussion] 2025-07-24 23:08:08
>>aleph_+I8
Is driving down the roads you paved luck?
replies(1): >>hx8+1p
17. Zigurd+Bj[view] [source] [discussion] 2025-07-24 23:12:20
>>aleph_+I8
NVidia is a great company, with a phenomenal product that they competently developed and made dominant. There was a big element of luck in that, but it won't continue forever no matter how smart and diligent they are.

It feels like the whole magnificent seven thing, and the way they are holding up an overvalued stock market, is driving some desperate decision-making. Like 12 more billion dollars for XAI to buy more Nvidia GPUs: does XAI have any revenue at all?

replies(1): >>webdev+zp2
18. insane+Cl[view] [source] 2025-07-24 23:25:27
>>samrus+(OP)
focusing its chip + software designs on inference and running ai agents, not on training models

software tools are essential to driving chip adoption; it's one reason why Nvidia got ahead (CUDA)

19. hx8+1p[view] [source] [discussion] 2025-07-24 23:54:20
>>roboro+0j
There's a lot of luck involved in the Nvidia story. Twenty-five years ago no one was assuming there would be a need for $100M LLM training projects. No one expected the crypto mining demand either. To be honest, they are lucky that CPU manufacturers didn't move into the SIMD space and that the gaming market grew so much between 2000 and 2012.

Yeah, they paved the road, but it's a surprise that it led to so many good destinations.

replies(1): >>mrheos+bJ
20. mhh__+Ou[view] [source] 2025-07-25 00:43:45
>>samrus+(OP)
They're trying to make money not chips.

And besides, AI is probably the most hardware-oriented the industry has been in decades.

21. martin+uv[view] [source] [discussion] 2025-07-25 00:48:28
>>samrus+i6
But "agentic AI" (and LLMs in general) is far less about compute than everyone makes out, IMO. I know what you mean FWIW, but she does have a point, I think.

1) Attention costs scale quadratically with context length. 2) "Agentic" AI requires a shittonne of context IME. Like a horrifying amount. Tool definitions alone can add up to thousands upon thousands of tokens, plus schemas and a lot of 'back and forth' context use between tools. If you just import a moderately complicated OpenAPI/Swagger schema and use it as-is, you will probably run into hundreds of thousands of tokens within a few tool calls. 3) Finally, compute actually isn't the bottleneck; it's memory bandwidth.

There is a massive opportunity for someone to snipe Nvidia on inference, at least. Inference is becoming pretty 'standardized' with the current state of play. If someone can come along with a cheaper GPU with a lot of VRAM and a lot of memory bandwidth, Nvidia's software moat for inference is far thinner than its CUDA moat as a whole. I think AMD is very close to reaching that, FWIW.

I suspect training and R&D will remain more in NVidia's sphere, but if Intel got its act together there is definitely room for competition here.

22. mrheos+bJ[view] [source] [discussion] 2025-07-25 02:49:07
>>hx8+1p
The crypto craze benefited both AMD and Nvidia (more so AMD, because in the early days only AMD GPUs could be used for mining).
23. electr+fJ[view] [source] [discussion] 2025-07-25 02:50:01
>>aleph_+I8
Gaming, crypto, AI. 36,000 employees. There are many others I would bet against instead of Nvidia.
replies(1): >>Jensso+DS
24. sidewn+IM[view] [source] [discussion] 2025-07-25 03:23:22
>>ZiiS+C
Intel should just reposition itself as a hedge fund focused on making investments in Nvidia. They've got enough assets to leverage for capital, and the market would eat it up.
25. Jensso+DS[view] [source] [discussion] 2025-07-25 04:37:22
>>electr+fJ
You aren't betting against Nvidia; you are betting against the trillions in future profits the hype has priced into Nvidia. Can Nvidia really generate that much profit? If not, it's overhyped and you should bet against it.
26. webdev+zp2[view] [source] [discussion] 2025-07-25 17:08:07
>>Zigurd+Bj
it's unironically going to the moon. the mag7 are doing better than ever:

- google ai-maxxed and wiping the floor after a slow start

- apple owning the smartphone space right as desktops are becoming genuinely irrelevant

- microsoft owning the corporate space right as local and on-site is becoming genuinely irrelevant

- nvidia building gigashovels for the gold rush

- meta... well, we'll see how the 'superintelligence' turns out. meta is the weakest bet of the bunch.

- aws... doing cloud stuff, i guess. also not a clear story.

if anyone got lucky, it's zuck. between vr, metaverse, and now his thick-rimmed goggles he appears to be genuinely clueless, riding the waves of his facebook moonshot.

the mag7 are gonna be called the giga7 by 2030.

you think the stocks are overheated? you ain't seen nothing yet. what we call a 'bubble' today will appear to have been a gentle ramp-up for the real bubble that's waiting for us in 2030.

the printing will continue, btw. and the debt will hit $100 trillion by 2030. and the US will be even more powerful than previously thought possible.

long pax americana, long orange fool, long whoever succeeds him, long the deep state.
