zlacker

3dfx: So powerful it’s kind of ridiculous

submitted by BirAda+(OP) on 2023-03-05 04:13:32 | 671 points 372 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
4. mk_stj+N4[view] [source] 2023-03-05 05:36:30
>>BirAda+(OP)
The retail box design [1] of the Voodoo 3 2000 is still burned into my memory even after 23 years. The purple woman's face, 6 million triangles per second... just, sold. 1999 was just an amazing time.

[1]: https://archive.ph/BNLiX/f3757388e1ee7008f0bad22261c625f1dcf...

◧◩
7. random+d5[view] [source] [discussion] 2023-03-05 05:46:38
>>wolpol+W4
A quick web search finds: https://hothardware.com/news/3dfx-rampage-gpu-performance-re...

But you could argue that the Rampage drivers weren’t optimized.

9. M4v3R+r5[view] [source] 2023-03-05 05:52:13
>>BirAda+(OP)
I have fond memories of playing Quake 2 for some time and then buying a Voodoo card. It suddenly looked like a totally different game. It wasn’t just the resolution and texture filtering - Quake 2 in GL mode used a totally different, dynamic lighting system, and back then it was simply stunning.

Comparison pics: https://www.marky.ca/3d/quake2/compare/content.html

10. userbi+w5[view] [source] 2023-03-05 05:54:14
>>BirAda+(OP)
Not too long ago someone recreated a Voodoo5 6000: https://news.ycombinator.com/item?id=32960140

The (leaked?) datasheets and miscellaneous information for the GPUs seem to be widely available, but that's still quite impressive for a single person; on a similar level to making your own motherboard: https://news.ycombinator.com/item?id=29273829

◧◩
13. greggs+Q5[view] [source] [discussion] 2023-03-05 05:59:54
>>cronix+p4
Interestingly, the transistor density of GPUs has been following a roughly logarithmic curve since 2000, compared to the linear increase in x86 processors [1].

I totally agree that the incremental innovations observed in earlier GPU platforms felt much, much more ‘obvious’ though.

It’s as if the ‘wow factor’ of graphics hardware doesn’t scale at the same rate as density.

Or perhaps releases were more spread out (compared to the annual release cycle expected today), making the jumps more obvious.

[1] https://www.researchgate.net/figure/Comparison-of-NVIDIA-gra...

25. djmips+S7[view] [source] 2023-03-05 06:24:08
>>BirAda+(OP)
This is a really great retrospective on 3dfx from four of the founders.

3dfx Oral History Panel with Ross Smith, Scott Sellers, Gary Tarolli, and Gordon Campbell (Computer History Museum)

https://www.youtube.com/watch?v=3MghYhf-GhU

◧◩
35. js2+59[view] [source] [discussion] 2023-03-05 06:39:42
>>Teslaz+n7
1998 interview with Hook:

https://www.quakewiki.net/profile-retro-interview-brian-hook...

36. rl3+h9[view] [source] 2023-03-05 06:43:54
>>BirAda+(OP)
I'm just here to post old 3dfx commercials:

https://www.youtube.com/watch?v=ooLO2xeyJZA

https://www.youtube.com/watch?v=JIOYoZGoXsw

https://www.youtube.com/watch?v=43qp2TUNEFY

The print ads were similarly incredible:

http://www.x86-secret.com/pics/divers/v56k/histo/1999/commer...

https://www.purepc.pl/files/Image/artykul_zdjecia/2012/3DFX_...

https://fcdn.me/813/97f/3d-pc-accelerators-blow-dryer-ee8eb6...

◧◩◪
39. MontyC+u9[view] [source] [discussion] 2023-03-05 06:46:21
>>Razeng+K5
My guess is that it’s much harder to develop rendering algorithms (e.g. shaders) for NURBSes. It’s easy and efficient to compute and interpolate surface normals for polygons (the Phong shader is dead simple [0], and thus easy to extend). Basic shading algorithms are much more complicated for a NURBS [1], and thus sufficiently computationally inefficient that you might as well discretize the NURBS to a polygonal mesh (indeed, this is what 3D modeling programs do). At that point, you might as well model the polygonal mesh directly; I don’t think NURBS-based modeling is significantly easier than mesh-based modeling for the 3D artist.

[0] https://cs.nyu.edu/~perlin/courses/fall2005ugrad/phong.html

[1] https://www.dgp.toronto.edu/public_user/lessig/talks/talk_al...
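
To make the contrast concrete, here is a minimal Phong-style sketch in Python (my own toy illustration, not code from [0]; every name in it is made up). The point is that for a polygon you just interpolate the vertex normal and evaluate a few dot products per pixel, and there is no similarly cheap recipe for shading a NURBS patch directly.

    # Toy Phong shading for one surface point (ambient + diffuse + specular).
    # Vectors are plain (x, y, z) tuples.
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def normalize(a): return scale(a, 1.0 / dot(a, a) ** 0.5)

    def phong(point, normal, eye, light, ka=0.1, kd=0.7, ks=0.4, shininess=32):
        n = normalize(normal)
        l = normalize(sub(light, point))                  # direction to the light
        v = normalize(sub(eye, point))                    # direction to the viewer
        r = normalize(sub(scale(n, 2 * dot(n, l)), l))    # reflection of l about n
        return (ka                                        # ambient
                + kd * max(dot(n, l), 0.0)                # diffuse
                + ks * max(dot(r, v), 0.0) ** shininess)  # specular

    # For a triangle, interpolate `normal` from the three vertex normals
    # (barycentric weights) and call phong() per pixel -- that's the whole trick.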

52. app4so+dc[view] [source] 2023-03-05 07:27:16
>>BirAda+(OP)
> Voodoo2, Diamond Monster 3D, image from [Russian site]

Why the heck is this image claimed to be taken from [Russian site] when it's actually from Buyee? [0]

[0] https://buyee.jp/mercari/item/m78940760663

◧◩
62. egeozc+je[view] [source] [discussion] 2023-03-05 07:54:08
>>mk_stj+N4
My brand-new Voodoo graphics card in its shiny package was the only thing I took with me as a teenager when I was leaving the flat in a hurry after the big earthquake in 1999[0]. I remember noticing the missing silhouettes of distant buildings with some fires burning behind where they used to stand and thinking, oh boy we surely won't have electricity soon.

You tend to have different priorities at that age, I guess.

It's a dark memory, sure, but probably the packaging somehow made me get attached to a "stupid computer part" (not my words) and that's interesting.

[0]: https://en.wikipedia.org/wiki/1999_%C4%B0zmit_earthquake

◧◩
70. russdi+Ff[view] [source] [discussion] 2023-03-05 08:11:14
>>M4v3R+r5
I really liked that the Pure3D card [1] had a composite video output which allowed me to play Quake2 on my Apple //c monitor. Very bizarre retro future feel.

1: https://www.tomshardware.com/reviews/3d-accelerator-card-rev...

◧◩◪◨
98. pjmlp+Tm[view] [source] [discussion] 2023-03-05 10:08:38
>>wazoox+Gj
As if SGI didn't have their share in Fahrenheit's failure.

https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API)

◧◩
116. aldric+br[view] [source] [discussion] 2023-03-05 11:07:52
>>MontyC+K8
History is funny because at that time in the 90s there was a company called Bitboys Oy. That company was founded by some Finnish demoscene members and was developing a series of graphics cards, Pyramid3D and Glaze3D, with a programmable pipeline around 1997-1999 [1]. This was around 5 years before the first commercial shader-capable card was released.

Even though Wikipedia classifies it as vaporware, there are prototype cards and manuals floating around showing that these cards were in fact designed and contained programmable pixel shaders, notably:

- The Pyramid3D GPU datasheet: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tr2520...

- The pitch deck: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tritec...

- The hardware reference manual: http://vgamuseum.info/images/doc/unreleased/pyramid3d/vs203_... (shows even more internals!)

(As far the companies go: VLSI Solution Oy / TriTech / Bitboys Oy were all related here.)

They unfortunately went bust before they could release anything, due to a wrong bet on memory type (RDRAM, I think) that their architecture relied on, then running out of money, and perhaps some other problems. In the end their assets were bought by ATI.

As for 3dfx, I would highly recommend watching the 3dfx Oral History Panel video from the Computer History Museum with 4 key people involved in 3dfx at the time [2]. It's quite fun as it shows how 3dfx got ahead of the curve by using very clever engineering hacks and tricks to get more out of the silicon and data buses.

It also suggests that their strategy was explicitly about squeezing as much performance as possible out of the hardware, making sacrifices (quality, programmability) along the way, which made sense at the time. I do think they would've been pretty late to switch to the whole programmable pipeline show, for that reason alone. But who knows!

[1] https://en.wikipedia.org/wiki/BitBoys

[2] https://www.youtube.com/watch?v=3MghYhf-GhU

◧◩◪◨
130. smcl+cw[view] [source] [discussion] 2023-03-05 12:04:09
>>flohof+Op
I feel like 3dfx still had a slight edge until the tail end of the Voodoo3 era (when the GF256 came out and blew everything away). But I think it depends on what you prioritise.

Like here, the V3 seems to have a pretty handy lead in most cases https://www.anandtech.com/show/288/14 - but that's a TNT2 (not a TNT2 Ultra) and it's all in 16-bit colour depth (32-bit wasn't supported by the V3).

It was certainly an interesting time, and as a V3 owner I did envy that 32 bit colour depth on the TNT2 and the G400 MAX's gorgeous bump mapping :D

147. tiffan+8K[view] [source] 2023-03-05 14:11:31
>>BirAda+(OP)
In 1998, Tom’s Hardware had a long article on “NVIDIA vs 3DFX - The Wind Of Change”

https://www.tomshardware.com/reviews/nvidia,87.html

It’s a great article that predicted a lot of things.

Side note: love that a 25-year-old article is still accessible.

◧◩◪
162. tysam_+0T[view] [source] [discussion] 2023-03-05 15:16:09
>>echees+lO
We are in the 4 Petaflops on a single card age currently, my friend: https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor...

It is quite insane. Now, getting to use all of them is difficult, but certainly possible with some clever planning. Hopefully as the tech matures we'll see higher and higher utilization rates (I think we're moving as fast as we were in the 90's in some ways, but the sheer size of the industry hides the absolutely insane rate of progress. Also, scale, I suppose).

I remember George Hotz nearly falling out of his chair, for example, at a project that was running some deep learning computations at 50% peak GPU efficiency (i.e. used flops vs possible flops) (locally, one GPU, with some other interesting constraints). I hadn't personally realized how hard that apparently is to hit for some things, though I guess it makes sense, as there are few efficient applications that _also_ use every single available computing unit on a GPU.

And FP8 should be very usable too in the right circumstances. I myself am very much looking forward to using it at some point in the future once proper support gets released for it. :)))) :3 :3 :3 :))))

◧◩◪◨⬒
169. Maursa+7U[view] [source] [discussion] 2023-03-05 15:22:14
>>ChuckN+es
> 3D gaming at 1024x768 in 1998 would be like 8K gaming today.

Whatever. In late 1996, I got a PowerMac 8500/180DP (PowerPC 604e) and a 1024x768 monitor. The 8500 didn't even have a graphics card, but had integrated/dedicated graphics on the motherboard with 4MB VRAM (also S-video and composite video in and out). It came bundled with Bungie's Marathon[1] (1994) which filled the screen in 16-bit color.

[1] https://en.wikipedia.org/wiki/Marathon_(video_game)

◧◩◪◨⬒⬓
212. dahart+dd1[view] [source] [discussion] 2023-03-05 17:08:44
>>startu+H61
Oh yeah, excellent point, I should not draw lines between graphics and ML — graphics has seen and will continue to see more and more ML applications. I hope none of my coworkers see this.

I guess Megatron is a language model framework https://developer.nvidia.com/blog/announcing-megatron-for-tr...

◧◩
221. jasode+tl1[view] [source] [discussion] 2023-03-05 17:52:16
>>ChuckM+25
>In my opinion, Direct X was what killed it most. OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL

I don't know the details myself, but as an FYI... this famous answer covering the OpenGL vs DirectX history on StackExchange disagrees with your opinion and says OpenGL didn't keep up (the ARB committee). It also mentions that the OpenGL implementation on Voodoo cards was incomplete and only enough to run Quake:

https://softwareengineering.stackexchange.com/questions/6054...

The author of that answer is active on HN so maybe he'll chime in.

◧◩◪◨⬒
228. adastr+Ws1[view] [source] [discussion] 2023-03-05 18:34:41
>>somat+of
So just for the record, I've actually written a software 3D rasterizer for a video game back in the 90's, and did a first pass at porting the engine to Glide using the Voodoo 2 and Voodoo 3 hardware. I'm pulling on decades-old knowledge, but it was a formative time and I am pretty sure my memory here is accurate.

At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. What actual pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, then calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
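
As a rough Python sketch of the above (my own names and structure, not anyone's shipping engine; it interpolates along the edges parametrically instead of with incremental slope adds, but it's the same idea, and the per-pixel division of u/z and v/z by 1/z is exactly the "perspective-correct" part):

    # Scanline-rasterize one triangle with perspective-correct texture coords.
    # verts: three (x, y, z, u, v) tuples; x, y already projected to screen
    # space, z = view-space depth. put_pixel(x, y, u, v) gets each covered pixel.
    def rasterize_triangle(verts, put_pixel):
        # Pre-divide by z so attributes interpolate linearly in screen space.
        pts = sorted(((x, y, 1.0 / z, u / z, v / z) for x, y, z, u, v in verts),
                     key=lambda p: p[1])                      # top to bottom
        (x0, y0, *a0), (x1, y1, *a1), (x2, y2, *a2) = pts

        def lerp_edge(ya, xa, aa, yb, xb, ab, y):
            t = (y - ya) / (yb - ya) if yb != ya else 0.0
            return xa + t * (xb - xa), [p + t * (q - p) for p, q in zip(aa, ab)]

        for y in range(int(y0), int(y2) + 1):
            xl, al = lerp_edge(y0, x0, a0, y2, x2, a2, y)      # long edge 0->2
            if y < y1:
                xr, ar = lerp_edge(y0, x0, a0, y1, x1, a1, y)  # short edge 0->1
            else:
                xr, ar = lerp_edge(y1, x1, a1, y2, x2, a2, y)  # short edge 1->2
            if xl > xr:
                xl, xr, al, ar = xr, xl, ar, al
            for x in range(int(xl), int(xr) + 1):
                t = (x - xl) / (xr - xl) if xr != xl else 0.0
                inv_z, u_z, v_z = (p + t * (q - p) for p, q in zip(al, ar))
                put_pixel(x, y, u_z / inv_z, v_z / inv_z)      # perspective divide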

But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:

https://stackoverflow.com/questions/31757501/pixel-by-pixel-...

I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.

On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
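
A toy Python version of that recursive split, for a 2D cubic Bézier curve (the surface case works the same way, just splitting in two parameter directions and emitting triangles instead of line segments; the half-pixel tolerance mirrors the Reyes-style criterion above, and the rest is my own illustration):

    # Split a cubic Bezier at t = 0.5 (de Casteljau) and recurse until each
    # piece is small enough, emitting a polyline of on-curve vertices.
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    def split_half(p0, p1, p2, p3):
        a, b, c = midpoint(p0, p1), midpoint(p1, p2), midpoint(p2, p3)
        d, e = midpoint(a, b), midpoint(b, c)
        m = midpoint(d, e)                     # point on the curve at t = 0.5
        return (p0, a, d, m), (m, e, c, p3)

    def flatten(curve, tol=0.5, out=None):
        if out is None:
            out = [curve[0]]
        p0, _, _, p3 = curve
        if (p3[0] - p0[0]) ** 2 + (p3[1] - p0[1]) ** 2 <= tol * tol:
            out.append(p3)                     # small enough: emit a vertex
        else:
            left, right = split_half(*curve)
            flatten(left, tol, out)
            flatten(right, tol, out)
        return out

    # flatten(((0, 0), (40, 120), (160, 120), (200, 0))) gives screen-space
    # vertices you can feed straight to an ordinary line/triangle rasterizer.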

To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear equations as a whole lot of weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.
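
A concrete (if toy) Python version of that analogy, with a made-up example system: a small nonlinear system solved by repeatedly solving linear ones, which is all the hardware really needs to be good at.

    import numpy as np

    # Solve the nonlinear system x^2 + y^2 = 4, x*y = 1 with Newton's method;
    # each iteration only requires one *linear* solve (np.linalg.solve).
    def f(v):
        x, y = v
        return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

    def jacobian(v):
        x, y = v
        return np.array([[2 * x, 2 * y],
                         [y,     x    ]])

    v = np.array([2.0, 0.5])                          # initial guess
    for _ in range(10):
        v = v - np.linalg.solve(jacobian(v), f(v))    # one linear solve per step
    print(v, f(v))                                    # f(v) is ~0 at the root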

This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs come into play and we will want to minimize geometry then, but that's a separate issue.)

◧◩◪◨
234. rasz+ex1[view] [source] [discussion] 2023-03-05 19:03:31
>>flohof+Op
>Quake II on a Riva TNT running in 1024x768

40 fps, 70 fps on V2 sli

https://www.bluesnews.com/benchmarks/081598.html

◧◩◪◨⬒⬓
235. rasz+Gx1[view] [source] [discussion] 2023-03-05 19:06:17
>>orbita+bN
>nobody expected 100fps+

but that's exactly what you got on a Voodoo2 with a P2 450 back then at 640x480 https://www.bluesnews.com/benchmarks/081598.html

◧◩
265. throwa+d92[view] [source] [discussion] 2023-03-05 22:57:36
>>ChuckM+25
"OpenGL was well supported on the Voodoo cards "

Definitely not the case.

Voodoo cards were notorious for not supporting OpenGL properly. They supported Glide instead.

3dfx also provided a "MiniGL" driver which implemented the bare minimum of functions, designed around particular games (like Quake) -- because they did not provide a proper OpenGL driver.

https://en.wikipedia.org/wiki/MiniGL

◧◩
272. rasz+3n2[view] [source] [discussion] 2023-03-06 00:34:02
>>fabien+R42
Diablo 2 did some weird things. Here is a post about its fake perspective mode https://simonschreibt.de/gat/dont-starve-diablo-parallax-7/
◧◩◪
289. moosed+WJ2[view] [source] [discussion] 2023-03-06 04:02:05
>>dehrma+pF2
In retrospect, it reminds me of the Amazon logo [0]. The orange swoopy thing evokes the Amazon A-to-Z "smile" (2000 onwards, although they had a more generic orange swoopy thing before that). Never noticed that similarity before.

I also had a Voodoo 3 with that box design back then, but different colors (Voodoo 3 3000 model). Actually still have the card...

[0] https://blog.logomyway.com/history-amazon-logo-design/

◧◩◪◨
298. pjmlp+nU2[view] [source] [discussion] 2023-03-06 06:17:20
>>anthk+Rb1
There is enough 3D over here,

https://en.m.wikipedia.org/wiki/TMS34010

https://en.m.wikipedia.org/wiki/Sega_Saturn

https://en.m.wikipedia.org/wiki/PlayStation_(console)

https://en.wikipedia.org/wiki/Nintendo_64

All predate the 3dfx Voodoo launch on October 7, 1996.

◧◩◪◨⬒⬓⬔⧯
330. rasz+Mw3[view] [source] [discussion] 2023-03-06 12:57:58
>>synthe+QF2
>GSync vs Freesync which is a feature for making a tear-free dynamic refresh rate, something that was simply impossible in the CRT days

fun fact: the very same technique used by Freesync, delaying the vsync, works with CRTs

Genericness of Variable Refresh Rate (VRR works on any video source including DVI and VGA, even on MultiSync CRT tubes) https://forums.blurbusters.com/viewtopic.php?f=7&t=8889

◧◩◪
334. rasz+wz3[view] [source] [discussion] 2023-03-06 13:19:16
>>datpif+5f3
reminds me of PowerVR's main selling point being tiled rendering, but they tried pushing some proprietary "infinite planes" BS

https://vintage3d.org/pcx1.php

"Thanks to volumes defined by infinite planes, shadows and lights can be cast from any object over any surface."

◧◩◪◨
336. rasz+lB3[view] [source] [discussion] 2023-03-06 13:30:39
>>johnwa+5j3
Nvidia's 16-bit color depth up to the GF256 was rendered internally at 16-bit precision and looked really bad, while their 32-bit mode was just a marketing bullet point due to the 100% performance hit when enabled. 3dfx had a 16-bit frame buffer but internally rendered at 32 bits, and the output was dithered, resulting in >20-bit color https://www.beyond3d.com/content/articles/59/

"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used an an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."

◧◩◪◨
346. rasz+RZ3[view] [source] [discussion] 2023-03-06 15:34:39
>>NovaDu+IS2
> past basic use of MMX

MMX is fixed point and shares register space with the FPU. Afaik not a single real shipped game ever used MMX for geometry. Intel did pay some game studios to fake MMX support. One was 1998 Ubisoft POD, with a huge "Designed for Intel MMX" banner on all boxes https://www.mobygames.com/game/644/pod/cover/group-3790/cove... while MMX was used by one optional audio filter :). Amazingly, someone working in Intel's "developer relations group" at the time is on HN and chimed in https://news.ycombinator.com/item?id=28237085

"I can tell you that Intel gave companies $1 million for "Optimized" games for marketing such."

$1 million for one optional MMX-optimized sound effect. And this scammy marketing worked! Multiple YouTube reviewers vividly remember to this day how POD "runs best/fastest on MMX" (LGR is one example).

◧◩◪◨
349. djmips+675[view] [source] [discussion] 2023-03-06 19:41:07
>>Infern+1X3
I know about it because I visited Nvidia several times in the nineties as I was a driver engineer implementing their chips on OEM cards.

I tried to find the information, and the best I could find is this better-than-average discussion/podcast on the history of Nvidia.

They briefly touch on the chip emulation software that they felt they desperately needed to get back into the game after the NV1 was relegated.

The NV3 (Riva 128) was designed rapidly (six months) with the use of what I called their supercomputer - most likely a cluster of PCs or workstations - running the proprietary chip emulation software. This advantage carried on through further generations of Nvidia hardware.

IIRC the chip emulation startup was founded by a university friend of Jensen's. The podcast says they failed later, which is unfortunate.

https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...

◧◩◪◨⬒
350. rzzzt+d85[view] [source] [discussion] 2023-03-06 19:45:42
>>datpif+AB3
I was also wondering about this. I only remember two vaguely related mentions; both are from a bit later, and one of them turned out not to be NURBS but Bézier patches :)

- Curved surfaces in Quake 3: https://www.gamedeveloper.com/programming/implementing-curve...

- Rhino 3D support: https://www.rhino3d.com/features/nurbs/

◧◩◪◨⬒⬓⬔⧯▣
352. jamesf+GG5[view] [source] [discussion] 2023-03-06 22:33:40
>>kalleb+C43
Ah yeah, and it looks like this is what they're using: https://github.com/microsoft/D3D9On12 And it looks like DirectX 11 to DirectX 12 translation exists as well: https://github.com/microsoft/D3D11On12
◧◩◪◨⬒⬓⬔⧯▣▦▧
355. Nursie+Lc6[view] [source] [discussion] 2023-03-07 02:20:58
>>justso+r93
Nobody said they had to stick with Symbian. Nobody said they were on any sort of right track.

But there were all sorts of options to restructure a company that big, one that had only just been surpassed by Android at the time of the memo. Tanking what was left of the company and selling out to MS was probably the worst of them.

It's quite funny, the guardian article on the memo, that reproduces it in full, is here - https://www.theguardian.com/technology/blog/2011/feb/09/noki...

First comment below the line "If Nokia go with MS rather than Android they are in even bigger trouble."

Everyone could see it, apart from Stephen Elop, who was determined to deliver the whole thing to MS regardless of how stupid a decision it was.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
372. willyw+KtK[view] [source] [discussion] 2023-03-18 02:41:37
>>justso+gg7
Take a seat. The US was the most primitive mobile market in the world before the iPhone, with people drooling over the frigging Moto RAZR - a feature phone FFS - just because it was thin and flipped open. And imagine paying for incoming calls, or buying your phone from the operator with all useful features like copying your own ringtones disabled so that you were forced to buy whatever lame ones the operator offered. Symbian, meanwhile, was a full-featured, multitasking OS with apps, themes, video calling and other features that took their time to reach iOS and Android. Nokia already knew that Symbian was on the way out, and they bought Qt to act as a bridge for developers between Symbian and Meego - it was to be the default app toolkit for Meego. From around 2009 onwards, Qt versions of popular Symbian apps started to appear. The first Meego device, the N9, had rave reviews but was intentionally hobbled by Elop choosing to go with the dead-in-the-water Windows Mobile and refusing to allow more production. This piece from back in the day is a detailed analysis of the fiasco - https://communities-dominate.blogs.com/brands/2013/09/the-fu...
[go to top]