
[return to "3dfx: So powerful it’s kind of ridiculous"]
1. ChuckM+25 2023-03-05 05:41:02
>>BirAda+(OP)
My first video accelerator was the Nvidia NV-1, because a friend of mine was on the design team and he assured me that NURBS were going to be the dominant rendering model since you could do a sphere with just 6 of them, whereas triangles needed like 50 and it still looked like crap. But Nvidia was so tight-fisted with development details and all their "secret sauce" that none of my programs ever worked on it.

Then I bought a 3dfx Voodoo card and started using Glide, and it was night and day. I had something up the first day, and every day thereafter it seemed to get more and more capable. That was a lot of fun.

In my opinion, DirectX is what did the most to kill it. OpenGL was well supported on the Voodoo cards, and Microsoft was determined to kill anyone using OpenGL (which they didn't control) to program games if they could. After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.

Sigh.

2. useful+t7 2023-03-05 06:19:28
>>ChuckM+25
> Microsoft was determined to kill anyone using OpenGL ... After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.

I was acutely aware of the various 3D API issues during this time and this rings very true.

3. wazoox+Gj 2023-03-05 09:16:51
>>useful+t7
Yup, remember when they "teamed up" with SGI to create "Fahrenheit"? Embrace, extend, extinguish...

4. pjmlp+Tm 2023-03-05 10:08:38
>>wazoox+Gj
As if SGI didn't have their share in Fahrenheit's failure.

https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API)

5. wazoox+UH 2023-03-05 13:52:28
>>pjmlp+Tm
Holy cow, I found a nest of Microsoft fans. From your link:

> By 1999 it was clear that Microsoft had no intention of delivering Low Level; although officially working on it, almost no resources were dedicated to actually producing code.

No kidding...

Also, the CEO of SGI in the late 90s was ex-Microsoft and bet heavily on weird technical choices (remember the SGI 320/540? I do) that played no small role in sinking the boat. Extremely similar to the infamous Nokia suicide in the 2010s under another Microsoft alumnus. I think the similarity isn't due to chance.

6. Keyfra+nA1 2023-03-05 19:22:47
>>wazoox+UH
TBH the SGI 320/540, namely Cobalt, were rather interesting tech-wise. Not sure if they could've gone up against Microsoft at the time (NT/Softimage, for example) and the rise of OpenGL 2.0 (with 3Dlabs and all).

7. wazoox+7a3 2023-03-06 09:12:35
>>Keyfra+nA1
Yeah, they were really interesting machines, but there were lots of weird technical choices: the non-PC EPROM that made them incompatible with any OS other than a special release of Windows 2000, the 3.3V PCI slots (at the time incompatible with 95% of available cards), the weird connector for the SGI 1600 screen... making the whole idea of "going Intel to conquer a larger market" moot from the start.

Of course, the main crime of the ex-Microsoft boss at the time wasn't that, but selling off most of SGI's IP to Microsoft and nVidia for some quick money.

8. Keyfra+Gw3 2023-03-06 12:57:31
>>wazoox+7a3
Yeah, but that's SGI doing SGI things, basically. They were used to charging daylight-robbery prices for systems they had ultimate control over, so this was nothing out of the ordinary for their thinking. The weird part was more: if you're doing a PC, then do a PC, not this... but maybe they initially didn't want to; an SGI workstation built from PC components is more like it. The market ultimately showed them that's not what it wanted.

The primary cool thing about it, in my opinion, was Cobalt and the CPU and GPU sharing RAM, but the weird part was that the split between how much the CPU got and how much the GPU got wasn't dynamic but static, and you had to set it up manually before boot. Dynamic sharing is what only now Apple is doing. It's something AMD also explored if you had their full vertical (CPU, mobo, GPU), but only for fast-path movement of data. I'd like to see more of that.

9. wazoox+lQ3 2023-03-06 14:49:09
>>Keyfra+Gw3
The shared memory architecture came directly from the SGI O2 in 1996. The O2 had dynamic sharing, but it was impossible to make it work in Windows.

The O2's dynamic memory sharing, with its effectively infinite texture memory, allowed things impossible on all other machines, like mapping several videos seamlessly onto moving 3D objects (also thanks to the built-in MJPEG encoding/decoding hardware).
