Wow, this sounds like those little cases where a few different decisions could have easily led us down into an alternate parallel world :)
Can someone expand on why NURBS didn't/don't win out against polygons?
Could this be like AI/ML/VR/functional programming, where the idea had been around for decades but could only be practically implemented once we had sufficient hardware and advances in other fields?
Sometimes, like with CUDA, they just have an early enough lead that they entrench.
Vile player. They're worse than IBM. Soulless & domineering to the max, to every extent possible. What a sad story.
I suppose you could directly rasterize a projected 3D curved surface, but the math for doing so is hideously complicated, and it's not at all obvious it'd be faster.
The problem in the case of CUDA isn't just that NVIDIA was there early, it's that AMD and Khronos still offer no viable alternative after more than a decade. I switched to CUDA half a year ago after trying to avoid it for years due to it being proprietary. Unfortunately I discovered that CUDA is absolutely amazing: it's easy to get started, developer friendly in that it "just works" (which is never the case for Khronos APIs and environments), and it's incredibly powerful, kind of like programming C++17 for 80 x 128 SIMD processors. I wish there were a platform-independent alternative, but OpenCL, SYCL, and ROCm aren't it.
Nvidia seems utterly uninterested in learning these lessons, decades in now: they just get more and more competitive, less and less participatory. It's wild. On the one hand, they do a great job maintaining products like the Nvidia Shield TV. On the other hand, if you try anything other than Linux4Tegra (L4T) on most of their products (the Android devices won't work at all for anything but Android, btw), it probably won't work at all or will be miserable.
Nvidia has one of the weirdest moats: they act open-source-like and provide OK-ish open source mini-worlds, but you have to stay within 100m of the keep or it all falls apart. And yeah, a lot of people simply don't notice. Nvidia has attracted a large camp-followers group, semi-tech folk whom they enable, but who don't really grasp the weird, limited context they're confined to.
Source: Was burned by ATI, Matrox, 3dlabs before finally coughing up the cash for Nvidia.
I had a Riva TNT 2 card. The only "accelerated" thing it could do in X was DGA (direct graphics access). I switched to ATI and never looked back. Of course you could use the proprietary driver. If you had enough time to solve installation problems and didn't mind frequent crashes.
Nvidia being the only viable solution for 3D on Linux is a bit of an exaggeration IMO (source: I did it for 5 years), but that was a long time ago: we now have amdgpu, which is far superior to Nvidia's closed-source driver.
I'm not really convinced curves are that useful as a modelling scheme for non-CAD/design stuff (i.e. games and VFX/CG). While you can essentially evaluate the limit surface, it's not really worth it once you start needing things like displacement that actually moves points around. Short of doing things like SDF modulations (which is probably possible, but not really artist-friendly in terms of driving things with texture maps), keeping things as micropolygons is what we do in the VFX industry, and it seems that's what game engines are looking at as well (Nanite).
Compared to the official Nvidia driver.
> If you had enough time to solve installation problems and didn't mind frequent crashes
I used Nvidia GPUs from ~2001 to ~2018 on various machines with various GPUs, and I never had any such issues on Linux. I always used the official driver installer and it worked perfectly fine.
Another reason might have been: early 3D games usually implemented a software rasterization fallback. Much easier and faster to do for triangles than NURBS.
It's like having a basic VM: high-level languages are compiled to an intermediate representation where things are simpler and various optimizations can be applied.
The MGA Millennium had unprecedented image quality, and its RAMDAC was in a league of its own. The G200 had the best 3D image quality when it was released, but it was really slow and somewhat buggy outside of Direct3D, where it shined. However, even with my significant discount and my fanboyism, when the G400 was released its relative performance was so abysmal that I defected to NVIDIA.
At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. Which pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom, incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
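To make that concrete, here's a minimal C++ sketch of that scanline walk (my own toy illustration, not any particular hardware's algorithm: it assumes the vertices are already projected to screen-space, Vec2/putPixel/rasterizeTriangle are made-up names, and it skips the perspective-correct attribute interpolation entirely):

    #include <cmath>
    #include <cstdio>
    #include <utility>

    struct Vec2 { float x, y; };

    // Stand-in for writing to a framebuffer; here we just print the pixel.
    static void putPixel(int x, int y) { std::printf("(%d, %d)\n", x, y); }

    // Fill a triangle by walking scanlines between its edges.
    static void rasterizeTriangle(Vec2 v0, Vec2 v1, Vec2 v2) {
        // Sort vertices top-to-bottom so v0.y <= v1.y <= v2.y.
        if (v1.y < v0.y) std::swap(v0, v1);
        if (v2.y < v0.y) std::swap(v0, v2);
        if (v2.y < v1.y) std::swap(v1, v2);

        // Inverse slopes: how far x moves per scanline along each edge.
        auto invSlope = [](Vec2 a, Vec2 b) {
            return (b.y - a.y) != 0.0f ? (b.x - a.x) / (b.y - a.y) : 0.0f;
        };
        float s01 = invSlope(v0, v1), s02 = invSlope(v0, v2), s12 = invSlope(v1, v2);

        // Top half: scanlines bounded by edges v0-v1 and v0-v2.
        for (int y = (int)std::ceil(v0.y); y < (int)std::ceil(v1.y); ++y) {
            float xa = v0.x + (y - v0.y) * s01;
            float xb = v0.x + (y - v0.y) * s02;
            if (xa > xb) std::swap(xa, xb);
            for (int x = (int)std::ceil(xa); x < (int)std::ceil(xb); ++x) putPixel(x, y);
        }
        // Bottom half: scanlines bounded by edges v1-v2 and v0-v2.
        for (int y = (int)std::ceil(v1.y); y < (int)std::ceil(v2.y); ++y) {
            float xa = v1.x + (y - v1.y) * s12;
            float xb = v0.x + (y - v0.y) * s02;
            if (xa > xb) std::swap(xa, xb);
            for (int x = (int)std::ceil(xa); x < (int)std::ceil(xb); ++x) putPixel(x, y);
        }
    }

    int main() {
        rasterizeTriangle({10, 5}, {40, 20}, {20, 35});
    }

Note that the inner loops are nothing but additions and comparisons per scanline, which is exactly why this maps so well onto cheap, parallel fixed-function hardware.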
But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:
https://stackoverflow.com/questions/31757501/pixel-by-pixel-...
I'm not sure that monstrosity of an algorithm gets perspective-correct texturing right, which is a whole other complication on top.
On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
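As a toy illustration of that split-until-small-enough idea, here's a C++ sketch for a single cubic Bézier curve (my own code, not Pixar's actual Reyes dicer; the half-pixel threshold comes from the description above, while the flatness test and the helper names are assumptions). A surface version would do the same midpoint split in both parametric directions and then join the resulting grid of vertices into triangles:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Pt { float x, y; };

    static Pt mid(Pt a, Pt b) { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; }
    static float dist(Pt a, Pt b) { return std::hypot(b.x - a.x, b.y - a.y); }

    // Recursively split a cubic Bezier (control points p0..p3) at its
    // parametric midpoint until each piece is small enough, emitting vertices.
    static void subdivide(Pt p0, Pt p1, Pt p2, Pt p3, std::vector<Pt>& out) {
        // Stop once the whole control polygon fits within half a pixel.
        if (dist(p0, p1) + dist(p1, p2) + dist(p2, p3) < 0.5f) {
            out.push_back(p3);            // keep the far endpoint as a new vertex
            return;
        }
        // de Casteljau midpoint split: one cubic becomes two cubics.
        Pt p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
        Pt p012 = mid(p01, p12), p123 = mid(p12, p23);
        Pt pm = mid(p012, p123);          // the point on the curve at t = 0.5
        subdivide(p0, p01, p012, pm, out);
        subdivide(pm, p123, p23, p3, out);
    }

    int main() {
        std::vector<Pt> verts{ {0, 0} };  // start with the first endpoint
        subdivide({0, 0}, {10, 40}, {50, 40}, {60, 0}, verts);
        std::printf("approximated curve with %zu vertices\n", verts.size());
    }

The whole thing is just midpoint averages and a distance check, and every split is independent of every other split, so it parallelizes trivially and feeds straight into the triangle hardware.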
To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear problems as a whole lot of weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.
This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs comes into play and we will want to minimize geometry then, but that's a separate issue.)
It's not that hard: you must provide a way to use CUDA on your hardware. Either support it directly, transcompile it, emulate it, provide shims, anything. After that, you can provide your own APIs that take advantage of every extra molecule of performance.
And neither AMD nor Intel has thrown down the money to do it. That's all it is: money. You have an army of folks in the space who would love to use anything other than Nvidia, and who would do all the work if you just threw money at them.
fglrx has always been a terrible experience indeed, so AMD was no match for Nvidia's closed-source driver.
So, once upon a time (I'd say 2000-2015) the best Linux driver for discrete GPUs was Nvidia's closed-source one. Nowadays it's the AMD open-source one. Intel has always been good, but their GPUs don't offer enough power.