zlacker

[return to "3dfx: So powerful it’s kind of ridiculous"]
1. ChuckM+25[view] [source] 2023-03-05 05:41:02
>>BirAda+(OP)
My first video accelerator was the Nvidia NV1, because a friend of mine was on the design team and he assured me that NURBS were going to be the dominant rendering model, since you could do a sphere with just 6 of them, whereas triangles needed like 50 and it still looked like crap. But Nvidia was so tight-fisted with development details and all their "secret sauce" that none of my programs ever worked on it.
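
For anyone wondering how a sphere comes out of so few patches: rational curves and surfaces can represent conics exactly, while triangles can only ever approximate them. A minimal illustrative sketch (nothing NV1-specific; the control points and weights are the textbook ones for a quarter circle):

    #include <cstdio>
    #include <cmath>

    // Point on a rational quadratic Bezier curve at parameter t in [0,1].
    // This is the 1D analogue of the rational patches that make spheres cheap.
    void rational_quadratic(const double x[3], const double y[3],
                            const double w[3], double t,
                            double* px, double* py) {
        double b[3] = { (1 - t) * (1 - t), 2 * t * (1 - t), t * t }; // Bernstein basis
        double nx = 0, ny = 0, den = 0;
        for (int i = 0; i < 3; ++i) {
            nx  += b[i] * w[i] * x[i];
            ny  += b[i] * w[i] * y[i];
            den += b[i] * w[i];
        }
        *px = nx / den;
        *py = ny / den;
    }

    int main() {
        // Quarter unit circle: weights (1, sqrt(2)/2, 1) make it exact,
        // not an approximation; no triangle count gets you that.
        double x[3] = { 1, 1, 0 }, y[3] = { 0, 1, 1 };
        double w[3] = { 1, std::sqrt(0.5), 1 };
        for (int k = 0; k <= 4; ++k) {
            double t = k / 4.0, px, py;
            rational_quadratic(x, y, w, t, &px, &py);
            printf("t=%.2f  (%.6f, %.6f)  radius=%.9f\n",
                   t, px, py, std::hypot(px, py)); // radius is 1.0 for every t
        }
    }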

Then I bought a 3dfx Voodoo card and started using Glide, and it was night and day. I had something up the first day, and every day thereafter it seemed to get more and more capable. That was a lot of fun.

In my opinion, DirectX is what did the most to kill it. OpenGL was well supported on the Voodoo cards, and Microsoft was determined to kill anyone using OpenGL (which they didn't control) to program games if they could. After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.

Sigh.

2. Razeng+K5[view] [source] 2023-03-05 05:58:18
>>ChuckM+25
> he assured me that NURBs were going to be the dominant rendering model

Wow, this sounds like one of those cases where a few different decisions could easily have led us into an alternate parallel world :)

Can someone expand on why NURBS didn't/don't win out against polygons?

Could this be like AI/ML/VR/Functional Programming, where the idea had been around for decades but could only be practically implemented once we had sufficient hardware and advances in other fields?

3. rektid+07[view] [source] 2023-03-05 06:13:59
>>Razeng+K5
Because it's exactly like the parent said: Nvidia is, and always has been, a tightfisted tightwad that makes everything they do ultra-proprietary. Nvidia never creates standards or participates in them.

Sometimes, like with CUDA, they just have an early enough lead that they entrench.

Vile player. They're worse than IBM. Soulless & domineering to the max, to every extent possible. What a sad story.

4. mschue+R8[view] [source] 2023-03-05 06:37:27
>>rektid+07
> Sometimes, like with CUDA, they just have an early enough lead that they entrench.

The problem in the case of CUDA isn't just that NVIDIA was there early; it's that AMD and Khronos still offer no viable alternative after more than a decade. I switched to CUDA half a year ago after trying to avoid it for years because it's proprietary. Unfortunately, I discovered that CUDA is absolutely amazing: it's easy to get started, it's developer-friendly in that it "just works" (which is never the case for Khronos APIs and environments), and it's incredibly powerful, kind of like programming in C++17 for 80 x 128 SIMD processors. I wish there were a platform-independent alternative, but OpenCL, SYCL, and ROCm aren't it.
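
To make "easy to get started" concrete, this is more or less the entire first-program experience: a minimal SAXPY sketch with unified memory (standard CUDA, assuming any recent toolkit; nothing here is specific to my project):

    #include <cstdio>
    #include <cuda_runtime.h>

    // y = a*x + y, one thread per element.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory: one pointer valid on both host and device,
        // which is a big part of the "it just works" experience.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f (expect 4.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
    }

Compile with nvcc and it runs; getting the equivalent going on OpenCL or SYCL takes far more ceremony.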

5. ribs+af[view] [source] 2023-03-05 08:04:09
>>mschue+R8
I keep hearing that ROCm is DOA, but there are a lot of supercomputing labs heavily investing in it, with engineers who are quite in favor of it.

6. pjmlp+9n[view] [source] 2023-03-05 10:11:33
>>ribs+af
Some random HPC lab with enough weight to have an AMD team drop by isn't the same thing as the average Joe and Jane developer.