zlacker

[parent] [thread] 40 comments
1. Razeng+(OP)[view] [source] 2023-03-05 05:58:18
> he assured me that NURBs were going to be the dominant rendering model

Wow, this sounds like those little cases where a few different decisions could have easily led us down into an alternate parallel world :)

Can someone expand on why NURBs didn't/don't win out against polygons?

Could this be like AI/ML/VR/Functional Programming, where the idea had been around for decades but could only be practically implemented now after we had sufficient hardware and advances in other fields?

replies(8): >>rektid+g1 >>ChuckM+y1 >>rzzzt+b2 >>adastr+o2 >>MontyC+K3 >>flohof+Fj >>jmisko+wt >>random+Ey1
2. rektid+g1[view] [source] 2023-03-05 06:13:59
>>Razeng+(OP)
Because it's exactly like the parent said: Nvidia is, and always has been, a tightfisted tightwad that makes everything they do ultra-proprietary. Nvidia never creates standards or participates in them.

Sometimes, like with CUDA, they just have an early enough lead that they entrench.

Vile player. They're worse than IBM. Soulless & domineering to the max, to every extent possible. What a sad story.

replies(5): >>jemmyw+Y2 >>mschue+73 >>rabf+h4 >>DeathA+T9 >>agumon+Re
3. ChuckM+y1[view] [source] 2023-03-05 06:17:31
>>Razeng+(OP)
Given the available resources today it should be possible to create a NURB based renderer on something like the ECP5 FPGA. Not a project I have time for but something to think about.
4. rzzzt+b2[view] [source] 2023-03-05 06:24:39
>>Razeng+(OP)
Was it NURBs or quads? Maybe both.
5. adastr+o2[view] [source] 2023-03-05 06:28:01
>>Razeng+(OP)
How do you direct render a curved surface? The most straightforward, most flexible way is to convert it into a polygon mesh.

I suppose you could direct rasterize a projected 3D curved surface, but the math for doing so is hideously complicated, and it is not at all obvious it’d be faster.

replies(2): >>somat+E9 >>pixele+df
◧◩
6. jemmyw+Y2[view] [source] [discussion] 2023-03-05 06:34:38
>>rektid+g1
I think any company who feels they are in the lead with something competitive would do the same. The ones who open their standards were behind to begin with and that's their way of combating the proprietary competition.
replies(1): >>rektid+z3
◧◩
7. mschue+73[view] [source] [discussion] 2023-03-05 06:37:27
>>rektid+g1
> Sometimes, like with CUDA, they just have an early enough lead that they entrench.

The problem in the case of CUDA isn't just that NVIDIA was there early; it's that AMD and Khronos still offer no viable alternative after more than a decade. I switched to CUDA half a year ago after trying to avoid it for years because it's proprietary. Unfortunately I discovered that CUDA is absolutely amazing: it's easy to get started, developer-friendly in that it "just works" (which is never the case for Khronos APIs and environments), and it's incredibly powerful, kind of like programming C++17 for 80 x 128 SIMD processors. I wish there were a platform-independent alternative, but OpenCL, SYCL, and ROCm aren't it.

replies(1): >>ribs+q9
◧◩◪
8. rektid+z3[view] [source] [discussion] 2023-03-05 06:44:18
>>jemmyw+Y2
Belief in your own technology, even if it turns out to be good, is often insufficient to really win. At some point in computing you need some ecosystem buy-in, and you almost certainly will not be able to go it alone.

Nvidia seems utterly uninterested in learning these lessons, decades in now: they just get more and more competitive, less and less participatory. It's wild. On the one hand they do a great job maintaining products like the Nvidia Shield TV. On the other hand, if you try anything other than Linux4Tegra (L4T) on most of their products (the Android devices won't work at all with anything but Android, btw), it probably won't work at all or will be miserable.

Nvidia has one of the weirdest moats: being open-source-like and providing ok-ish open source mini-worlds, but you have to stay within 100m of the keep or it all falls apart. And yeah, a lot of people simply don't notice. Nvidia has attracted a large camp-followers group, semi-tech folk, whom they enable but who don't really grasp the weird, limited context they are confined to.

replies(2): >>fud101+78 >>bsder+aJ1
9. MontyC+K3[view] [source] 2023-03-05 06:46:21
>>Razeng+(OP)
My guess is that it’s much harder to develop rendering algorithms (e.g. shaders) for NURBSes. It’s easy and efficient to compute and interpolate surface normals for polygons (the Phong shader is dead simple [0], and thus easy to extend). Basic shading algorithms are much more complicated for a NURBS [1], and thus sufficiently computationally inefficient that you might as well discretize the NURBS to a polygonal mesh (indeed, this is what 3D modeling programs do). At that point, you might as well model the polygonal mesh directly; I don’t think NURBS-based modeling is significantly easier than mesh-based modeling for the 3D artist.

[0] https://cs.nyu.edu/~perlin/courses/fall2005ugrad/phong.html

[1] https://www.dgp.toronto.edu/public_user/lessig/talks/talk_al...
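
To make concrete why polygon shading is "dead simple": below is a minimal sketch (my own, not taken from the linked notes) of Phong lighting at a single surface point, once a triangle's vertex normals have been interpolated. The whole thing is a few dot products per pixel; doing the same exactly on a trimmed NURBS patch means evaluating the surface's partial derivatives in parameter space first.

    // Minimal Phong lighting sketch in C++ (illustrative only).
    // Assumes 'n' is a normal already interpolated across the triangle
    // from its three vertex normals.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

    // Ambient + diffuse + specular, single channel. All vectors in world space.
    float phong(Vec3 n, Vec3 point, Vec3 lightPos, Vec3 eyePos,
                float ka, float kd, float ks, float shininess)
    {
        n = normalize(n);
        Vec3 l = normalize(lightPos - point);            // direction to the light
        Vec3 v = normalize(eyePos - point);              // direction to the viewer
        Vec3 r = normalize(n * (2.0f * dot(n, l)) - l);  // reflected light direction

        float diffuse  = std::fmax(dot(n, l), 0.0f);
        float specular = std::pow(std::fmax(dot(r, v), 0.0f), shininess);
        return ka + kd * diffuse + ks * specular;
    }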

◧◩
10. rabf+h4[view] [source] [discussion] 2023-03-05 06:54:54
>>rektid+g1
Nvidia has had driver parity for Linux, FreeBSD, and Windows for many, many years. No other graphics card manufacturer has come close to the quality of their software stack across platforms. For that they have my gratitude.
replies(1): >>foxhil+C5
◧◩◪
11. foxhil+C5[view] [source] [discussion] 2023-03-05 07:11:57
>>rabf+h4
DLSS was Windows-only for some time.

Linux’s amdgpu is far better than the nvidia driver.

replies(3): >>rabf+n6 >>alanfr+P7 >>Gordon+R8
◧◩◪◨
12. rabf+n6[view] [source] [discussion] 2023-03-05 07:25:03
>>foxhil+C5
ATI drivers were a horror show for the longest time on Windows, never mind Linux. What Nvidia did was have basically the same driver code for all operating systems with a compatibility shim. If you were using any sort of professional 3D software over the previous two decades, Nvidia was the only viable solution.

Source: was burned by ATI, Matrox, and 3Dlabs before finally coughing up the cash for Nvidia.

replies(2): >>foxhil+Ld >>nick__+E01
◧◩◪◨
13. alanfr+P7[view] [source] [discussion] 2023-03-05 07:44:46
>>foxhil+C5
amdgpu is better now, but it was terrible for years, probably 2000-2015. That's what GP is saying.
replies(2): >>hulitu+nc >>foxhil+nd
◧◩◪◨
14. fud101+78[view] [source] [discussion] 2023-03-05 07:48:07
>>rektid+z3
What do they get right with the Shield?
◧◩◪◨
15. Gordon+R8[view] [source] [discussion] 2023-03-05 07:57:14
>>foxhil+C5
Except it doesn't do GPU compute stuff, so it's no use for anything except games.
replies(1): >>foxhil+sd
◧◩◪
16. ribs+q9[view] [source] [discussion] 2023-03-05 08:04:09
>>mschue+73
I keep hearing that ROCm is DOA, but there are a lot of supercomputing labs heavily investing in it, with engineers who are quite in favor of it.
replies(4): >>mschue+Eb >>pixele+Ud >>doikor+jg >>pjmlp+ph
◧◩
17. somat+E9[view] [source] [discussion] 2023-03-05 08:06:37
>>adastr+o2
I think the idea is that polygon meshes are the only way things are done on all existing graphics cards, and as such they are the only primitive used and the only primitive optimized for. Personally I suspect that triangle meshes were the correct way to go, but you can imagine an alternate past where we optimized for CSG-style solid primitives (POV-Ray), or for drawing point clouds (voxels), or for spline-based patches (NURBS). Just figure out how to draw the primitive and build hardware that is good at it. Right now the hardware is good at drawing triangle meshes, so that is the algorithm used.
replies(1): >>adastr+cn1
◧◩
18. DeathA+T9[view] [source] [discussion] 2023-03-05 08:10:42
>>rektid+g1
How is NVIDIA different from Apple?
replies(1): >>verall+8W
◧◩◪◨
19. mschue+Eb[view] [source] [discussion] 2023-03-05 08:40:22
>>ribs+q9
I hope it takes off; a platform-independent alternative to CUDA would be great. But if they want it to be successful outside of supercomputing labs, it needs to be as easy to use as CUDA. And I'd say being successful outside of supercomputing labs is important for overall adoption and success. For me personally, it would also need fast runtime compilation so that you can modify and hot-reload ROCm programs at runtime.
◧◩◪◨⬒
20. hulitu+nc[view] [source] [discussion] 2023-03-05 08:51:51
>>alanfr+P7
Huh? Compared to the open source nvidia driver, which could do nothing?

I had a Riva TNT2 card. The only "accelerated" thing it could do in X was DGA (direct graphics access). Switched to ATI and never looked back. Of course you could use the proprietary driver. If you had enough time to solve installation problems and didn't mind frequent crashes.

replies(3): >>badsec+Gh >>onphon+zA >>anthk+6R
◧◩◪◨⬒
21. foxhil+nd[view] [source] [discussion] 2023-03-05 09:06:49
>>alanfr+P7
amdgpu is new. You may be thinking about fglrx: a true hell.
replies(1): >>alanfr+Woa
◧◩◪◨⬒
22. foxhil+sd[view] [source] [discussion] 2023-03-05 09:08:17
>>Gordon+R8
It doesn’t do CUDA, but it does do OpenCL and Vulkan compute.
replies(1): >>Gordon+3q
◧◩◪◨⬒
23. foxhil+Ld[view] [source] [discussion] 2023-03-05 09:13:18
>>rabf+n6
Yes, I am very familiar with that pain. fglrx was hell compared to nvidia.

Nvidia being the only viable solution for 3D on Linux is a bit of an exaggeration imo (source: I did it for 5 years), but that was a long time ago: we have amdgpu, which is far superior to nvidia’s closed source driver.

◧◩◪◨
24. pixele+Ud[view] [source] [discussion] 2023-03-05 09:15:34
>>ribs+q9
If you want to run compute on AMD GPU hardware on Linux, it does work. However, it's not as portable as CUDA: you practically have to compile your code for every AMD GPU architecture, whereas with CUDA the nvidia drivers give you an abstraction layer (ish, it's really PTX which provides it, but...) that is forwards and backwards compatible, which makes it trivial to support new cards and generations of cards without recompiling anything.
◧◩
25. agumon+Re[view] [source] [discussion] 2023-03-05 09:31:46
>>rektid+g1
Some say the NURBS model also didn't fit the culture at the time and wasn't supported by either modeling or texturing tools. Game devs would get faster results with triangles than with NURBS. Not sure who should have footed the bill, game studios or Nvidia.
◧◩
26. pixele+df[view] [source] [discussion] 2023-03-05 09:36:41
>>adastr+o2
You'd probably convert it to bicubic patches or something, and then rasterise/ray-intersect those...

I'm not really convinced curves are that useful as a modelling scheme for non-CAD/design stuff (i.e. games and VFX/CG). While you can essentially evaluate the limit surface, it's not really worth it once you start needing things like displacement that actually moves points around. Short of doing things like SDF modulations (which is probably possible, but not really artist-friendly in terms of driving things with texture maps), keeping things as micropolygons is what we do in the VFX industry, and it seems that's what game engines are looking at as well (Nanite).

◧◩◪◨
27. doikor+jg[view] [source] [discussion] 2023-03-05 09:54:16
>>ribs+q9
With supercomputers you write your code for that specific supercomputer. In such an environment ROCm works OK. Trying to make a piece of ROCm code work on different cards/setups is a real pain (and not that easy with CUDA either if you want good performance).
◧◩◪◨
28. pjmlp+ph[view] [source] [discussion] 2023-03-05 10:11:33
>>ribs+q9
Some random HPC lab with enough weight to have an AMD team drop by isn't the same thing as the average Joe and Jane developer.
◧◩◪◨⬒⬓
29. badsec+Gh[view] [source] [discussion] 2023-03-05 10:16:07
>>hulitu+nc
> Compared to the open source nvidia driver, which could do nothing?

Compared to the official Nvidia driver.

> If you had enough time to solve installation problems and didn't mind frequent crashes

I used Nvidia GPUs from ~2001 to ~2018 on various machines with various GPUs and I never had any such issues on Linux. I always used the official driver installer and it worked perfectly fine.

30. flohof+Fj[view] [source] 2023-03-05 10:41:47
>>Razeng+(OP)
My guess is: 'brute force and fast' always wins against 'elegant but slow'. And both the 3dfx products and triangle rasterization in general were 'brute force and fast'. Early 3D accelerator cards from different vendors were full of such weird ideas to differentiate themselves from the competition; thankfully they all went the way of the Dodo (because for game devs it was a PITA to support such non-standard features).

Another reason might have been: early 3D games usually implemented a software rasterization fallback. That is much easier and faster to do for triangles than for NURBS.

◧◩◪◨⬒⬓
31. Gordon+3q[view] [source] [discussion] 2023-03-05 12:00:10
>>foxhil+sd
Maybe, but nothing really uses that, at least for video.
32. jmisko+wt[view] [source] 2023-03-05 12:31:48
>>Razeng+(OP)
NURBS are more high-level than triangles. A single triangle primitive cannot be ill-defined and is much easier to rasterize. There are other high-level contenders, for example SDFs and voxels. Instead of branching out the HW to offer acceleration for each of these, they can all be reduced to triangles and made to fit in the modern graphics pipeline.

It's like having a basic VM: high-level languages are compiled to an intermediate representation where things are simpler and various optimizations can be applied.

◧◩◪◨⬒⬓
33. onphon+zA[view] [source] [discussion] 2023-03-05 13:36:36
>>hulitu+nc
Did people not try the nvidia driver back then? Even as a casual user at the time it was miles ahead - but it wasn’t open source
◧◩◪◨⬒⬓
34. anthk+6R[view] [source] [discussion] 2023-03-05 15:38:12
>>hulitu+nc
DGA and later XV.
◧◩◪
35. verall+8W[view] [source] [discussion] 2023-03-05 16:07:34
>>DeathA+T9
Nvidia makes superior graphics cards which are for dirty gamers while Apple makes superior webshit development machines.
◧◩◪◨⬒
36. nick__+E01[view] [source] [discussion] 2023-03-05 16:34:02
>>rabf+n6
I was a big Matrox fan, mostly because I knew someone there, and was able to upgrade their products at a significant discount. This was important for me as a teenager whose only source of income was power washing eighteen-wheelers and their associated semi-trailers. It was a dirty and somewhat dangerous job, but I fondly remember my first job. Anyway, I digress, so let's get back to the topic of Matrox cards.

The MGA Millennium had unprecedented image quality, and its RAMDAC was in a league of its own. The G200 had the best 3D image quality when it was released, but it was really slow and somewhat buggy outside of Direct3D where it shined. However, even with my significant discount and my fanboyism, when the G400 was released, I defected to NVIDIA since its relative performance was abysmal.

replies(1): >>antod+MV1
◧◩◪
37. adastr+cn1[view] [source] [discussion] 2023-03-05 18:34:41
>>somat+E9
So just for the record, I've actually written a software 3D rasterizer for a video game back in the 90's, and did a first pass at porting the engine to Glide using the Voodoo 2 and Voodoo 3 hardware. I'm pulling on decades-old knowledge, but it was a formative time and I am pretty sure my memory here is accurate.

At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. What actual pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, then calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
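
As a rough sketch of that scanline fill (my own reconstruction, not actual engine code), here is the flat-bottom half of a triangle in C++; the perspective correction and exact subpixel prestep are left out, and a general triangle would first be split at its middle vertex into a flat-bottom and a flat-top half:

    // Fill a "flat-bottom" screen-space triangle: v0 on top, v1 and v2 on the
    // same bottom scanline (v0.y < v1.y == v2.y is assumed). Walk both edges
    // down one scanline at a time, stepping each edge's x by its slope.
    #include <algorithm>
    #include <cmath>

    struct ScreenVtx { float x, y; };

    void fillFlatBottom(ScreenVtx v0, ScreenVtx v1, ScreenVtx v2,
                        void (*putPixel)(int x, int y))
    {
        if (v1.x > v2.x) std::swap(v1, v2);          // v1 = left edge, v2 = right edge

        // Horizontal movement of each edge per scanline (the "slope" above).
        float dxLeft  = (v1.x - v0.x) / (v1.y - v0.y);
        float dxRight = (v2.x - v0.x) / (v2.y - v0.y);

        float xLeft = v0.x, xRight = v0.x;
        for (int y = (int)std::ceil(v0.y); y < (int)std::ceil(v1.y); ++y) {
            for (int x = (int)std::ceil(xLeft); x < (int)std::ceil(xRight); ++x)
                putPixel(x, y);                      // fill one horizontal span
            xLeft  += dxLeft;                        // step both edges to the next scanline
            xRight += dxRight;
        }
    }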

But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:

https://stackoverflow.com/questions/31757501/pixel-by-pixel-...

I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.

On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
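
Here is a sketch of that recursive splitting for a cubic Bézier curve in C++ (a NURBS or bicubic patch does the same thing in both parameter directions); the chord-length stopping test is a simplification of a real flatness/size test:

    // De Casteljau's algorithm at t = 0.5 splits a cubic Bézier into two exact
    // halves. Keep splitting until a piece is "small enough", then emit it as
    // a line segment (the curve analogue of emitting micropolygons).
    #include <cmath>
    #include <vector>

    struct P2 { float x, y; };

    static P2    mid(P2 a, P2 b)  { return {(a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f}; }
    static float dist(P2 a, P2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

    // Appends a polyline approximating the cubic Bézier (p0, p1, p2, p3) to 'out'.
    // The caller pushes p0 first and then joins consecutive points into segments.
    void tessellate(P2 p0, P2 p1, P2 p2, P2 p3, float maxLen, std::vector<P2>& out)
    {
        if (dist(p0, p3) <= maxLen) {        // "small enough": stop splitting
            out.push_back(p3);
            return;
        }
        // Midpoints of midpoints: de Casteljau at t = 0.5.
        P2 a = mid(p0, p1), b = mid(p1, p2), c = mid(p2, p3);
        P2 d = mid(a, b),   e = mid(b, c);
        P2 m = mid(d, e);                    // the point on the curve at t = 0.5
        tessellate(p0, a, d, m, maxLen, out);   // left half
        tessellate(m, e, c, p3, maxLen, out);   // right half
    }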

To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear equations as a whole lot of many-weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.

This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs come into play and we will want to minimize geometry then, but that's a separate issue.)

38. random+Ey1[view] [source] 2023-03-05 19:48:12
>>Razeng+(OP)
Nah, NURBS are a dead end. They are difficult to model with and difficult to animate and render. Polygon-based subdivision surfaces entirely replaced NURBS as soon as the Pixar Renderman patents on subdivision surfaces expired.
◧◩◪◨
39. bsder+aJ1[view] [source] [discussion] 2023-03-05 20:58:56
>>rektid+z3
As much as I hate Nvidia, AMD and Intel have done themselves zero favors in the space.

It's not that hard--you must provide a way to use CUDA on your hardware. Either support it directly, transcompile it, emulate it, provide shims, anything. After that, you can provide your own APIs that take advantage of every extra molecule of performance.

And neither AMD nor Intel has thrown down the money to do it. That's all it is. Money. You have an army of folks in the space who would love to use anything other than Nvidia and who would do all the work if you just threw them money.

◧◩◪◨⬒⬓
40. antod+MV1[view] [source] [discussion] 2023-03-05 22:12:43
>>nick__+E01
One use case Matrox kept doing well was X11 multi-monitor desktops. The G400 era was about the time I was drifting away from games and moving to full-time Linux, so they suited me at least.
◧◩◪◨⬒⬓
41. alanfr+Woa[view] [source] [discussion] 2023-03-08 10:08:42
>>foxhil+nd
No, I was thinking about amdgpu. amdgpu, the open source driver, has been better than the nvidia closed source driver for the past 4-5 years (excluding the CUDA vs OpenCL/ROCm debacle, ofc).

fglrx was indeed always a terrible experience, so AMD was no match for the nvidia closed source driver.

So, once upon a time (I'd say 2000-2015) the best Linux driver for discrete GPUs was the nVidia closed source one. Nowadays it's the AMD open source one. Intel has always been good, but doesn't provide the right amount of power.
