zlacker

[parent] [thread] 6 comments
1. MontyC+(OP)[view] [source] 2023-03-05 06:35:34
>Voodoo Graphics and GLide were the standard in the PC graphics space for a time. 3dfx created an industry that is going strong today, and that industry has affected far more than just gaming. GPUs now power multiple functions in our computers, and they enable AI work as well.

>This tale brings up many “what ifs.”

What if 3dfx had realized early on that GPUs were excellent general purpose linear algebra computers, and had incorporated GPGPU functionality into GLide in the late 90s?

Given its SGI roots, this is not implausible. And given how NVidia still has a near stranglehold on the GPGPU market today, it’s also plausible that this would have kept 3dfx alive.

replies(3): >>monoca+Db >>aldric+ri >>zozbot+7m
2. monoca+Db[view] [source] 2023-03-05 09:28:03
>>MontyC+(OP)
I don't think so. Part of what made consumer GPUs viable at that gate count was their laser-like focus on their specific rasterization workloads. More general linear algebra solutions wouldn't have been viable in the marketplace. You wouldn't see more general hardware become viable until the late-90s advent of register combiner hardware.
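To give a flavor of what register combiners bought you, here is a rough sketch against the GL_NV_register_combiners OpenGL extension: one combiner stage computing a per-pixel L.N dot product, the classic dot3 bump-mapping setup. Treat it as a sketch assuming NV10-class hardware, an initialized GL context, and glext.h prototypes, not a drop-in implementation:

    /* A minimal sketch, assuming GL_NV_register_combiners is available.
     * One general combiner stage computes a per-pixel dot product: the
     * fixed-but-configurable math that preceded true shaders. */
    #include <GL/gl.h>
    #include <GL/glext.h>

    void setup_dot3_combiner(void)
    {
        glEnable(GL_REGISTER_COMBINERS_NV);
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

        /* A = normal-map texel, range-expanded from [0,1] to [-1,1] */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        /* B = light vector smuggled in through the primary color */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
        /* Write A.B to spare0 (the first GL_TRUE enables the dot product) */
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV,
                           GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE,
                           GL_TRUE, GL_FALSE, GL_FALSE);

        /* Final combiner computes A*B + (1-A)*C + D; zero A and C,
         * then route spare0 straight out through D */
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    }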
3. aldric+ri[view] [source] 2023-03-05 11:07:52
>>MontyC+(OP)
History is funny: at that time in the 90s there was a company called Bitboys Oy. It was founded by some Finnish demoscene members and was developing a series of graphics cards, Pyramid3D and Glaze3D, with a programmable pipeline around 1997-1999 [1]. This was around 5 years before the first commercial shader-capable card was released.

Even though Wikipedia classifies it as vaporware, there are prototype cards and manuals floating around showing that these cards were in fact designed and contained programmable pixel shaders, notably:

- The Pyramid3D GPU datasheet: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tr2520...

- The pitch deck: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tritec...

- The hardware reference manual: http://vgamuseum.info/images/doc/unreleased/pyramid3d/vs203_... (shows even more internals!)

(As far as the companies go: VLSI Solution Oy / TriTech / Bitboys Oy were all related here.)

They unfortunately went bust before they could release anything, due to a wrong bet on memory type (RDRAM, I think) that their architecture depended on, after which they ran out of money, perhaps among other problems. In the end their assets were bought by ATI.

As for 3dfx, I would highly recommend watching the 3dfx Oral History Panel video from the Computer History Museum with 4 key people involved in 3dfx at the time [2]. It's quite fun, as it shows how 3dfx got ahead of the curve by using very clever engineering hacks and tricks to get more out of the silicon and data buses.

It also suggests that their strategy was explicitly about squeezing as much performance as possible out of the hardware and making sacrifices (quality, programmability) to get it, which made sense at the time. I do think they would've been pretty late to switch to the whole programmable pipeline show, for that reason alone. But who knows!

[1] https://en.wikipedia.org/wiki/BitBoys

[2] https://www.youtube.com/watch?v=3MghYhf-GhU

4. zozbot+7m[view] [source] 2023-03-05 11:49:40
>>MontyC+(OP)
Early 3dfx cards did not do any linear algebra or even 3D rendering. They accelerated triangle rasterization in screen coordinates; everything else was done by the CPU. So, 2.5D at most, really.
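Roughly, the split looked like this at the API level. A minimal sketch, assuming Glide 2.x headers and an already-initialized Glide context; the CPU-side projection math is illustrative:

    /* A minimal sketch, assuming Glide 2.x (<glide.h>). The CPU projects
     * every vertex to screen space itself; grDrawTriangle() only ever sees
     * 2D screen coordinates plus per-vertex interpolants like 1/w. */
    #include <glide.h>

    void draw_tri(const float view[3][3], float screen_w, float screen_h)
    {
        GrVertex v[3];
        for (int i = 0; i < 3; i++) {
            /* CPU-side "3D pipeline": perspective-project a view-space
             * vertex (illustrative math, focal length of 1.0 assumed) */
            float ooz = 1.0f / view[i][2];
            v[i].x   = screen_w * 0.5f * (1.0f + view[i][0] * ooz);
            v[i].y   = screen_h * 0.5f * (1.0f - view[i][1] * ooz);
            v[i].oow = ooz;  /* 1/w, for perspective-correct interpolation */
            v[i].r = v[i].g = v[i].b = 255.0f;  /* flat white, 0..255 floats */
        }
        grDrawTriangle(&v[0], &v[1], &v[2]);  /* the card just rasterizes */
    }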
replies(1): >>NovaDu+YJ2
5. NovaDu+YJ2[view] [source] [discussion] 2023-03-06 05:58:39
>>zozbot+7m
It is funny playing titles from that era on modern hardware, as it is apparent just how much the CPU was doing, mostly due to the lack of optimization of the geometry routines past basic use of MMX. Frame rates are still high, just nowhere near as high as you would expect.

I remember when a friend first got an i7 machine and we decided to see just how fast Turok 2 would go. Having seen Quake 3 go from barely 30 FPS to nearly 1,000 FPS over the same time period, we figured it would be neat to see. Turns out it could barely break the 200 FPS mark, even though the machine had a good 8 times the clock rate of the PC we originally played it on at near 60 FPS.

No use of SSE, no use of T&L units or vertex/pixel shaders. It is all very much just plain rasterisation at work.
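As a sketch of what was being left on the table: one row of a vertex transform with SSE intrinsics, four vertices per instruction. The structure-of-arrays layout and the matrix-row parameter are my own assumptions for illustration; engines of that era did this math one float at a time on the x87 FPU, so they scaled with clock rate and little else:

    /* A minimal sketch, assuming a structure-of-arrays vertex layout and
     * n divisible by 4. Computes out[i] = m0*x[i] + m1*y[i] + m2*z[i] + m3
     * for four vertices per iteration using SSE intrinsics. */
    #include <xmmintrin.h>

    void transform_row(const float *x, const float *y, const float *z,
                       const float m[4], float *out, int n)
    {
        __m128 m0 = _mm_set1_ps(m[0]), m1 = _mm_set1_ps(m[1]);
        __m128 m2 = _mm_set1_ps(m[2]), m3 = _mm_set1_ps(m[3]);
        for (int i = 0; i < n; i += 4) {
            __m128 r = _mm_add_ps(
                _mm_add_ps(_mm_mul_ps(m0, _mm_loadu_ps(x + i)),
                           _mm_mul_ps(m1, _mm_loadu_ps(y + i))),
                _mm_add_ps(_mm_mul_ps(m2, _mm_loadu_ps(z + i)), m3));
            _mm_storeu_ps(out + i, r);  /* 4 transformed coordinates at once */
        }
    }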

replies(1): >>rasz+7R3
6. rasz+7R3[view] [source] [discussion] 2023-03-06 15:34:39
>>NovaDu+YJ2
> past basic use of MMX

MMX is fixed point and shares register space with the FPU (see the sketch below). Afaik not a single real shipped game ever used MMX for geometry. Intel did pay some game studios to fake MMX support, though. One was Ubisoft's 1997 POD, with a huge "Designed for Intel MMX" banner on all boxes https://www.mobygames.com/game/644/pod/cover/group-3790/cove... while MMX was used by one optional audio filter :). Amazingly, someone who worked in Intel's "developer relations group" at the time is on HN and chimed in https://news.ycombinator.com/item?id=28237085

"I can tell you that Intel gave companies $1 million for "Optimized" games for marketing such."

$1 million for one optional MMX-optimized sound effect. And this scammy marketing worked! Multiple YouTube reviewers remember vividly to this day how POD "runs best/fastest on MMX" (LGR is one example).
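To illustrate the register-sharing point from above: MMX only does integer/fixed-point math, and because its registers alias the x87 stack you must issue EMMS before touching the FPU again, which is exactly why it was so awkward for floating-point geometry. A minimal sketch (the 16.16 fixed-point format and array layout are illustrative):

    /* A minimal sketch, assuming n divisible by 2 and 16.16 fixed-point
     * inputs. MMX adds two 32-bit lanes per instruction, but note the
     * mandatory _mm_empty() (EMMS): the MMX registers alias the x87
     * stack, so later x87 code would misbehave without it. */
    #include <mmintrin.h>

    void add_fixed_16_16(const int *a, const int *b, int *out, int n)
    {
        for (int i = 0; i < n; i += 2) {
            __m64 va = *(const __m64 *)(a + i);  /* two 16.16 values */
            __m64 vb = *(const __m64 *)(b + i);
            *(__m64 *)(out + i) = _mm_add_pi32(va, vb);
        }
        _mm_empty();  /* EMMS: hand the shared registers back to the FPU */
    }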

replies(1): >>NovaDu+g9j
7. NovaDu+g9j[view] [source] [discussion] 2023-03-10 21:03:24
>>rasz+7R3
I was using MMX as just an off-the-top-of-my-head example, but you are completely right. SSE would have been a better example. ;)

Also, I had no idea they paid for those optimizations, but I am not surprised one bit. It is very in character for Intel. ;)
