zlacker

[parent] [thread] 6 comments
1. former+(OP)[view] [source] 2023-03-05 15:43:13
3dfx is another example of "worse is better" or "slope vs intercept". This article doesn't quite spell it out, but as far as I can tell, 3dfx had one GPU architecture that was tweaked from 1995 till their end. It was much better for a few years than what the competitors put out, but the competitors kept iterating. nVidia created several generations of GPUs in the same time frame. 3dfx started higher but had no slope. Everyone else started much lower, but had a lot of slope (nVidia and ATI in particular). Two of these went ahead and started creating a new "fastest ever GPU" every other year for a quarter century, the other tried putting more of the same GPU on bigger boards and folded.
replies(2): >>rasz+dC >>djmips+x41
2. rasz+dC[view] [source] 2023-03-05 19:19:20
>>former+(OP)
One of my favorite facts is about Nvidia's release-cycle speed. At the peak of the Nvidia-3dfx war, new chips were coming out every 6-9 months:

Riva 128 (April 1997) to TNT (June 15, 1998) took 14 months, TNT2 (March 15, 1999) 9 months, GF256 (October 11, 1999) 7 months, GF2 (April 26, 2000) 6 months, | 3dfx dies here |, GF3 (February 27, 2001) 10 months, GF4 (February 6, 2002) 12 months, FX (March 2003) 13 months, etc ...

Nvidia had an army of hardware engineers always working on two future products in parallel; 3dfx had a few people in a room.

3. djmips+x41[view] [source] 2023-03-05 22:15:14
>>former+(OP)
Yes, I agree wholeheartedly.

Nvidia had a supercomputer and great hardware design software tools that were a trade secret, kept behind an off-limits curtain in the center of their office, and it helped them get chips out rapidly and on the first turn. "First turn" means the first silicon that comes back is good, without requiring fixes and another costly turn.

I'd say 3dfx weren't poised to industrialize as well as Nvidia and they just couldn't keep up in the evolutionary race.

I'm not sure I understand where your "worse is better" idiom fits, because 3dfx was better and Nvidia was worse, but Nvidia iterated until it surpassed 3dfx and won the day. If worse truly were better in this case, wouldn't 3dfx still be around?

On the other hand, triangle-based rendering is a case of worse is better, and Nvidia learned that and switched course from their early attempts with NURBS-based primitives.

replies(2): >>johnwa+sl2 >>Infern+oZ2
4. johnwa+sl2[view] [source] [discussion] 2023-03-06 10:58:46
>>djmips+x41
As soon as other cards had 32-bit color, Voodoo cards with 16-bit color looked a lot worse.
replies(1): >>rasz+ID2
5. rasz+ID2[view] [source] [discussion] 2023-03-06 13:30:39
>>johnwa+sl2
Nvidia's 16-bit color depth, up through the GF256, was rendered internally at 16-bit precision and looked really bad, while their 32-bit mode was just a marketing bullet point due to the 100% performance hit when enabled. 3dfx had a 16-bit frame buffer but rendered internally at 32 bits, and the output was dithered, resulting in >20-bit effective color: https://www.beyond3d.com/content/articles/59/

"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used as an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."
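The dither-then-gamma-LUT pipeline that quote describes can be sketched roughly like this. This is a toy Python model, not 3dfx's actual hardware: the 2x2 Bayer matrix, the gamma value, and all function names are my own illustration.

```python
# Toy model of a Voodoo2-style output path: render at high precision,
# ordered-dither down to a 16-bit RGB565 framebuffer, then expand each
# channel through a gamma-correction LUT to 24-bit output for the DAC.
# (Illustrative only; the real hardware's dither kernel and LUT contents
# are not public in this thread.)

BAYER_2X2 = [[0, 2], [3, 1]]  # ordered-dither thresholds, scaled to [0, 1) below

def dither_to_565(r8, g8, b8, x, y):
    """Quantize an 8-bit-per-channel color to RGB565 with ordered dithering."""
    t = BAYER_2X2[y % 2][x % 2] / 4.0  # per-pixel threshold in [0, 1)
    def q(v, bits):
        levels = (1 << bits) - 1
        return min(levels, int(v / 255.0 * levels + t))
    return q(r8, 5), q(g8, 6), q(b8, 5)

def make_gamma_lut(bits, gamma=2.2):
    """Per-channel LUT mapping an n-bit framebuffer value to an 8-bit DAC value."""
    levels = (1 << bits) - 1
    return [round(((i / levels) ** (1.0 / gamma)) * 255) for i in range(levels + 1)]

LUT5 = make_gamma_lut(5)  # red/blue channels (5 bits)
LUT6 = make_gamma_lut(6)  # green channel (6 bits)

def scan_out(r5, g6, b5):
    """Framebuffer pixel -> 24-bit color fed to the monitor."""
    return LUT5[r5], LUT6[g6], LUT5[b5]

# Neighbouring pixels of the same input color land on different quantized
# levels, so averaged over the eye's view the dither recovers intermediate
# shades the 16-bit framebuffer cannot store directly.
pixels = [dither_to_565(100, 100, 100, x, 0) for x in range(4)]
```

The ">20-bit effective color" claim upthread corresponds to the last comment: dithering trades spatial resolution for color resolution, so the perceived gradient is finer than the stored 5/6/5 bits.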

6. Infern+oZ2[view] [source] [discussion] 2023-03-06 15:23:11
>>djmips+x41
Where can I learn more about this supercomputer and design tooling?
replies(1): >>djmips+t94
7. djmips+t94[view] [source] [discussion] 2023-03-06 19:41:07
>>Infern+oZ2
I know about it because I visited Nvidia several times in the nineties as I was a driver engineer implementing their chips on OEM cards.

I tried to find the information, and the best I could find is this better-than-average discussion/podcast on the history of Nvidia.

They briefly touch on the chip emulation software that they felt they desperately needed to get back into the game after the NV1 was relegated.

The NV3 (Riva 128) was designed rapidly (in six months) with the use of what I called their supercomputer, most likely a cluster of PCs or workstations, running the proprietary chip emulation software. This advantage continued through further generations of Nvidia hardware.

IIRC the chip emulation startup was founded by a university friend of Jensen's. The podcast says they failed later, which is unfortunate.

https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...
