[1]: https://archive.ph/BNLiX/f3757388e1ee7008f0bad22261c625f1dcf...
But you could argue that the Rampage drivers weren’t optimized.
Comparison pics: https://www.marky.ca/3d/quake2/compare/content.html
The (leaked?) datasheets and miscellaneous information for the GPUs seem to be widely available, but that's still quite impressive for a single person; on a similar level to making your own motherboard: https://news.ycombinator.com/item?id=29273829
I totally agree that the incremental innovations observed in earlier GPU platforms felt much, much more ‘obvious’ though.
It’s as if the ‘wow factor’ of graphics hardware doesn’t scale at the same rate as density.
Or perhaps releases were more spread out back then (compared to the annual release cycle expected today), making the jumps more obvious.
[1] https://www.researchgate.net/figure/Comparison-of-NVIDIA-gra...
3dfx Oral History Panel with Ross Smith, Scott Sellers, Gary Tarolli, and Gordon Campbell (Computer History Museum)
https://www.quakewiki.net/profile-retro-interview-brian-hook...
https://www.youtube.com/watch?v=ooLO2xeyJZA
https://www.youtube.com/watch?v=JIOYoZGoXsw
https://www.youtube.com/watch?v=43qp2TUNEFY
The print ads were similarly incredible:
http://www.x86-secret.com/pics/divers/v56k/histo/1999/commer...
https://www.purepc.pl/files/Image/artykul_zdjecia/2012/3DFX_...
https://fcdn.me/813/97f/3d-pc-accelerators-blow-dryer-ee8eb6...
[0] https://cs.nyu.edu/~perlin/courses/fall2005ugrad/phong.html
[1] https://www.dgp.toronto.edu/public_user/lessig/talks/talk_al...
Why the heck does this image claim to be taken from [Russian site] when it's actually from Buyee?[0]
You tend to have different priorities in those ages, I guess.
It's a dark memory, sure, but probably the packaging somehow made me get attached to a "stupid computer part" (not my words) and that's interesting.
[0]: https://en.wikipedia.org/wiki/1999_%C4%B0zmit_earthquake
1: https://www.tomshardware.com/reviews/3d-accelerator-card-rev...
Even though Wikipedia classifies it as vaporware, there are prototype cards and manuals floating around showing that these cards were in fact designed and contained programmable pixel shaders, notably:
- The Pyramid3D GPU datasheet: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tr2520...
- The pitch deck: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tritec...
- The hardware reference manual: http://vgamuseum.info/images/doc/unreleased/pyramid3d/vs203_... (shows even more internals!)
(As far as the companies go: VLSI Solution Oy / TriTech / Bitboys Oy were all related here.)
They unfortunately went bust before they could release anything, due to a bad bet on memory type (RDRAM, I think) that their architecture relied on, then running out of money, and perhaps other problems. In the end their assets were bought by ATI.
As for 3dfx, I would highly recommend watching the 3dfx Oral History Panel video from the Computer History Museum with 4 key people involved in 3dfx at the time [2]. It's quite fun, as it shows how 3dfx got ahead of the curve by using very clever engineering hacks and tricks to get more out of the silicon and data buses.
It also suggests that their strategy was explicitly about squeezing as much performance as possible out of the hardware, making sacrifices (quality, programmability) to get there, which made sense at the time. I do think they would've been pretty late to switch to the whole programmable pipeline show for that reason alone. But who knows!
Like here, the V3 seems to have a pretty handy lead in most cases https://www.anandtech.com/show/288/14 - but that's a TNT2 (not TNT2 Ultra) and it's all at 16-bit colour depth (32-bit wasn't supported by the V3).
It was certainly an interesting time, and as a V3 owner I did envy that 32 bit colour depth on the TNT2 and the G400 MAX's gorgeous bump mapping :D
https://www.tomshardware.com/reviews/nvidia,87.html
It’s a great article that predicted a lot of things.
Side note: love that a 25-year-old article is still accessible.
It is quite insane. Now, getting to use all of them is difficult, but certainly possible with some clever planning. Hopefully as the tech matures we'll see higher and higher utilization rates (I think we're moving as fast as we were in the 90's in some ways, but the sheer size of the industry hides the absolutely insane rate of progress. Also, scale, I suppose).
I remember George Hotz nearly falling out of his chair for example at a project that was running some deep learning computations at 50% peak GPU efficiency (i.e. used flops vs possible flops) (locally, one GPU, with some other interesting constraints). I hadn't personally realized how hard that is apparently to hit, for some things, though I guess it makes sense as there are few efficient applications that _also_ use every single available computing unit on a GPU.
And FP8 should be very usable too in the right circumstances. I myself am very much looking forward to using it at some point in the future once proper support gets released for it. :)))) :3 :3 :3 :))))
Whatever. In late 1996, I got a PowerMac 8500/180DP (PowerPC 604e) and a 1024x768 monitor. The 8500 didn't even have a graphics card, but had integrated/dedicated graphics on the motherboard with 4MB VRAM (also S-video and composite video in and out). It came bundled with Bungie's Marathon[1] (1994) which filled the screen in 16-bit color.
I guess Megatron is a language model framework https://developer.nvidia.com/blog/announcing-megatron-for-tr...
I don't know the details myself, but as an FYI: this famous answer covering the OpenGL vs DirectX history from StackExchange disagrees with your opinion and says OpenGL didn't keep up (ARB committee). It also mentions that the OpenGL implementation in Voodoo cards was incomplete and only enough to run Quake:
https://softwareengineering.stackexchange.com/questions/6054...
The author of that answer is active on HN so maybe he'll chime in.
At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. What actual pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, then calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
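Not 3dfx's actual implementation, just a rough Python sketch of that scanline idea (the plot callback and vertex format are made up for illustration; perspective correction is left as a comment):

    def rasterize_triangle(p0, p1, p2, plot):
        # p0, p1, p2: (x, y) vertices already projected to screen space.
        # plot(x, y) is called for every covered pixel.
        # Sort top-to-bottom so we can walk scanlines downward.
        (x0, y0), (x1, y1), (x2, y2) = sorted((p0, p1, p2), key=lambda p: p[1])

        def edge_x(xa, ya, xb, yb, y):
            # x of edge (xa,ya)-(xb,yb) at scanline y; this is the
            # "step by the slope" idea written in closed form.
            return xa if yb == ya else xa + (xb - xa) * (y - ya) / (yb - ya)

        for y in range(int(y0), int(y2) + 1):
            x_long = edge_x(x0, y0, x2, y2, y)       # long edge, always active
            if y < y1:                               # short edge switches at y1
                x_short = edge_x(x0, y0, x1, y1, y)
            else:
                x_short = edge_x(x1, y1, x2, y2, y)
            x_start, x_end = sorted((x_long, x_short))
            for x in range(int(x_start), int(x_end) + 1):
                # For perspective-correct texturing you'd interpolate u/w, v/w
                # and 1/w linearly along here and divide per pixel.
                plot(x, y)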
But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:
https://stackoverflow.com/questions/31757501/pixel-by-pixel-...
I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.
On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
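A toy Python version of that split-at-the-midpoint recursion for a single 2D cubic Bézier (Reyes dices whole surface patches, but the shape of the recursion is the same; the half-pixel tolerance and the names here are mine):

    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    def split_cubic(p0, p1, p2, p3):
        # Split a cubic Bezier at t = 0.5 using de Casteljau's construction.
        a, b, c = midpoint(p0, p1), midpoint(p1, p2), midpoint(p2, p3)
        d, e = midpoint(a, b), midpoint(b, c)
        f = midpoint(d, e)                    # point on the curve at t = 0.5
        return (p0, a, d, f), (f, e, c, p3)

    def flatten(curve, tol=0.5):
        # Keep splitting until each piece spans less than tol (e.g. half a
        # pixel), then emit its endpoints -- the vertices you'd join up into
        # line segments or a triangle mesh.
        p0, _, _, p3 = curve
        if (p3[0] - p0[0]) ** 2 + (p3[1] - p0[1]) ** 2 < tol * tol:
            return [p0, p3]
        left, right = split_cubic(*curve)
        return flatten(left, tol)[:-1] + flatten(right, tol)

    verts = flatten(((0, 0), (40, 120), (80, -120), (120, 0)))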
To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear equations as a whole lot of many-weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.
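That analogy in miniature (my own example, not from the comment): Newton's method solves a nonlinear system by doing nothing but repeated linear solves.

    import numpy as np

    def newton(f, jacobian, x0, tol=1e-10, max_iter=50):
        # Solve f(x) = 0 by repeatedly solving the LINEAR system
        # J(x) * step = -f(x); the linear solver does all the heavy lifting.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            step = np.linalg.solve(jacobian(x), -f(x))
            x = x + step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Example: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [-2*v[0], 1.0]])
    print(newton(f, J, x0=[1.0, 1.0]))   # converges to roughly (1.25, 1.56)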
This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs come into play and we will want to minimize geometry then, but that's a separate issue.)
but that's exactly what you got on a Voodoo2 with a P2 450 back then at 640x480 https://www.bluesnews.com/benchmarks/081598.html
Definitely not the case.
Voodoo cards were notorious for not supporting OpenGL properly. They supported GLide instead.
3dfx also provided a "minigl" which implemented the bare minimum functions designed around particular games (like Quake) -- because they did not provide a proper OpenGL driver.
I also had a Voodoo 3 with that box design back then, but different colors (Voodoo 3 3000 model). Actually still have the card...
https://en.m.wikipedia.org/wiki/TMS34010
https://en.m.wikipedia.org/wiki/Sega_Saturn
https://en.m.wikipedia.org/wiki/PlayStation_(console)
https://en.wikipedia.org/wiki/Nintendo_64
All predate the 3dfx Voodoo's launch on October 7, 1996.
fun fact: the very same technique used by Freesync, delaying the vsync, works with CRTs
Genericness of Variable Refresh Rate (VRR works on any video source including DVI and VGA, even on MultiSync CRT tubes) https://forums.blurbusters.com/viewtopic.php?f=7&t=8889
https://vintage3d.org/pcx1.php
"Thanks to volumes defined by infinite planes, shadows and lights can be cast from any object over any surface."
"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used an an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."
MMX is fixed point and shares register space with the FPU. AFAIK not a single real shipped game ever used MMX for geometry. Intel did pay some game studios to fake MMX support. One was 1998's Ubisoft POD, with a huge "Designed for Intel MMX" banner on all boxes https://www.mobygames.com/game/644/pod/cover/group-3790/cove... while MMX was actually used by one optional audio filter :). Amazingly, someone working in Intel's "developer relations group" at the time is on HN and chimed in https://news.ycombinator.com/item?id=28237085
"I can tell you that Intel gave companies $1 million for "Optimized" games for marketing such."
$1 million for one optional MMX-optimized sound effect. And this scammy marketing worked! Multiple YouTube reviewers vividly remember to this day how POD "runs best/fastest on MMX" (LGR is one example).
I tried to find the information, and the best I could find is this better-than-average discussion/podcast on the history of Nvidia.
They briefly touch on the chip emulation software that they felt they desperately needed to get back into the game after the NV1 flopped.
The NV3 (Riva 128) was designed rapidly (six months) with the use of what I called their supercomputer - most likely a cluster of PCs or workstations - running the proprietary chip emulation software. This advantage carried over into later generations of Nvidia hardware.
IIRC the chip emulation startup was founded by a university friend of Jensen. The podcast says they failed later, which is unfortunate.
https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...
- Curved surfaces in Quake 3: https://www.gamedeveloper.com/programming/implementing-curve...
- Rhino 3D support: https://www.rhino3d.com/features/nurbs/
But there were all sorts of options to restructure a company that big, one that had only just been surpassed by Android at the time of the memo. Tanking what was left of the company and selling out to MS was probably the worst of them.
It's quite funny; the Guardian article on the memo, which reproduces it in full, is here - https://www.theguardian.com/technology/blog/2011/feb/09/noki...
First comment below the line: "If Nokia go with MS rather than Android they are in even bigger trouble."
Everyone could see it, apart from Stephen Elop, who was determined to deliver the whole thing to MS regardless of how stupid a decision it was.