zlacker

[parent] [thread] 3 comments
1. adastr+(OP)[view] [source] 2023-03-05 06:28:01
How do you directly render a curved surface? The most straightforward, most flexible way is to convert it into a polygon mesh.

I suppose you could directly rasterize a projected 3D curved surface, but the math for doing so is hideously complicated, and it is not at all obvious it'd be faster.

replies(2): >>somat+g7 >>pixele+Pc
2. somat+g7[view] [source] 2023-03-05 08:06:37
>>adastr+(OP)
I think the idea is that polygon meshes are the only way things are done on all existing graphics cards, so they are the only primitive used and the only primitive optimized for. Personally I suspect that triangle meshes were the correct way to go, but you can imagine an alternate past where we optimized for CSG-style solid primitives (POV-Ray), or for drawing point clouds (voxels), or for spline-based patches (NURBS). Just figure out how to draw the primitive and build hardware that is good at it. Right now the hardware is good at drawing triangle meshes, so that is the algorithm used.
replies(1): >>adastr+Ok1
3. pixele+Pc[view] [source] 2023-03-05 09:36:41
>>adastr+(OP)
You'd probably convert it to bicubic patches or something, and then rasterise/ray-intersect those...

I'm not really convinced curves are that useful as a modelling scheme for non-CAD/design stuff (i.e. games and VFX/CG). While you can essentially evaluate the limit surface, it's not really worth it once you start needing things like displacement that actually moves points around, and short of doing things like SDF modulations (which is probably possible, but not really artist-friendly in terms of driving things with texture maps), keeping things as micropolygons is what we do in the VFX industry, and it seems that's what game engines are looking at as well (Nanite).
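
As a rough illustration of what that kind of micropolygon dicing could look like for a single bicubic Bézier patch: evaluate the patch at a uniform grid of (u, v) samples with de Casteljau's algorithm in each direction, and adjacent samples form the tiny quads. The 4x4 control grid and the 16x16 dicing rate below are made up for the sketch; a real renderer would pick the rate from the patch's screen-space size.

    /* Rough sketch of dicing one bicubic Bezier patch into a grid of tiny
     * quads ("micropolygons"): evaluate the patch at uniform (u,v) samples
     * using de Casteljau in each direction.  The 4x4 control grid and the
     * 16x16 dicing rate are made up; a real renderer would choose the rate
     * from the patch's screen-space size. */
    #include <stdio.h>

    typedef struct { float x, y, z; } P;

    static P mix(P a, P b, float t)
    {
        P r = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
        return r;
    }

    /* de Casteljau along one row/column of four control points */
    static P bez(P p0, P p1, P p2, P p3, float t)
    {
        P a = mix(p0, p1, t), b = mix(p1, p2, t), c = mix(p2, p3, t);
        P d = mix(a, b, t),   e = mix(b, c, t);
        return mix(d, e, t);
    }

    /* tensor-product evaluation: a curve in u along each row, then in v */
    static P patch(P cp[4][4], float u, float v)
    {
        P row[4];
        for (int i = 0; i < 4; i++)
            row[i] = bez(cp[i][0], cp[i][1], cp[i][2], cp[i][3], u);
        return bez(row[0], row[1], row[2], row[3], v);
    }

    int main(void)
    {
        P cp[4][4];                          /* a flat patch with a bump in the middle */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                P p = { (float)j, (float)i,
                        (i == 1 || i == 2) && (j == 1 || j == 2) ? 1.0f : 0.0f };
                cp[i][j] = p;
            }

        const int N = 16;                    /* dicing rate in each direction */
        for (int i = 0; i <= N; i++)
            for (int j = 0; j <= N; j++) {
                P p = patch(cp, (float)j / N, (float)i / N);
                /* samples (i,j),(i,j+1),(i+1,j+1),(i+1,j) form the quads */
                printf("v %.3f %.3f %.3f\n", p.x, p.y, p.z);
            }
        return 0;
    }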

4. adastr+Ok1[view] [source] [discussion] 2023-03-05 18:34:41
>>somat+g7
So just for the record, I've actually written a software 3D rasterizer for a video game back in the '90s, and did a first pass at porting the engine to Glide using the Voodoo 2 and Voodoo 3 hardware. I'm drawing on decades-old knowledge, but it was a formative time and I am pretty sure my memory here is accurate.

At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. Which pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen space, calculate the slope of each edge (as seen on the 2D screen), and then run down the scanlines from top to bottom, incrementing or decrementing the horizontal start/stop pixels of each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles so the hardware only has to understand triangles, which is simpler.
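
As a concrete (and heavily simplified) sketch of that scanline fill, here's a flat-bottom triangle filler in C with the 1/w interpolation that makes the texture coordinate perspective-correct. It assumes the vertices are already projected to screen space with their clip-space w kept around; the names, the ASCII framebuffer, and the single u coordinate are illustrative, not how the actual engine or Glide did it.

    /* Minimal flat-bottom-triangle scanline fill with perspective-correct
     * texturing, roughly as described above.  Vertex x,y are already
     * projected to screen space; w is the clip-space w kept for the 1/w
     * correction.  Only one texture coordinate (u) is shown; v and colors
     * would be interpolated the same way.  Illustrative only. */
    #include <stdio.h>

    #define W 80
    #define H 40

    static char fb[H][W];                    /* tiny ASCII "framebuffer" */

    typedef struct { float x, y, w, u; } Vtx;

    /* v0 is the apex, v1/v2 share the bottom scanline.  Walk the left and
     * right edges a scanline at a time, stepping x by the edge slopes, and
     * interpolate u/w and 1/w linearly -- dividing them per pixel is what
     * makes u perspective-correct. */
    static void fill_flat_bottom(Vtx v0, Vtx v1, Vtx v2)
    {
        if (v2.x < v1.x) { Vtx tmp = v1; v1 = v2; v2 = tmp; }  /* v1 = left */

        float dy = v1.y - v0.y;              /* same for both bottom verts */
        for (int y = (int)v0.y; y <= (int)v1.y; y++) {
            float t = dy > 0 ? (y - v0.y) / dy : 0.0f;  /* 0 at apex, 1 at bottom */

            /* edge values on this scanline: x, u/w, 1/w */
            float xl  = v0.x + t * (v1.x - v0.x);
            float xr  = v0.x + t * (v2.x - v0.x);
            float uwl = v0.u / v0.w + t * (v1.u / v1.w - v0.u / v0.w);
            float uwr = v0.u / v0.w + t * (v2.u / v2.w - v0.u / v0.w);
            float iwl = 1.0f / v0.w + t * (1.0f / v1.w - 1.0f / v0.w);
            float iwr = 1.0f / v0.w + t * (1.0f / v2.w - 1.0f / v0.w);

            for (int x = (int)xl; x <= (int)xr; x++) {
                float s = xr > xl ? (x - xl) / (xr - xl) : 0.0f;
                float u = (uwl + s * (uwr - uwl)) / (iwl + s * (iwr - iwl));
                if (x >= 0 && x < W && y >= 0 && y < H)
                    fb[y][x] = "0123456789"[(int)(u * 9.99f)];  /* visualize u */
            }
        }
    }

    int main(void)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) fb[y][x] = '.';

        /* apex close to the viewer (small w), bottom edge far away (large w) */
        Vtx a = { 40.0f,  2.0f, 1.0f, 0.0f };
        Vtx b = {  5.0f, 35.0f, 4.0f, 0.0f };
        Vtx c = { 75.0f, 35.0f, 4.0f, 1.0f };
        fill_flat_bottom(a, b, c);

        for (int y = 0; y < H; y++) { fwrite(fb[y], 1, W, stdout); putchar('\n'); }
        return 0;
    }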

But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:

https://stackoverflow.com/questions/31757501/pixel-by-pixel-...

I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.

On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
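
Here is a small sketch of that recursive midpoint split for a single cubic Bézier curve, using de Casteljau's construction at t = 1/2 and stopping once a piece's control polygon is at sub-half-pixel scale. The half-pixel threshold echoes the Reyes description above; the 2D setup, names, and exact stopping test are just for illustration (a surface patch would be split the same way in both parameter directions).

    /* Recursively split a cubic Bezier at its midpoint (de Casteljau at
     * t = 1/2) until each piece is below half a pixel, then emit its start
     * point as a vertex of the linear approximation.  2D and a lone curve
     * for brevity; a patch would be split in both parameter directions. */
    #include <stdio.h>
    #include <math.h>

    typedef struct { float x, y; } P;        /* a point in screen space */

    static P     mid(P a, P b)  { P m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; return m; }
    static float dist(P a, P b) { return hypotf(b.x - a.x, b.y - a.y); }
    static void  emit(P p)      { printf("vertex %.3f %.3f\n", p.x, p.y); }

    static void split(P p0, P p1, P p2, P p3)
    {
        /* control polygon already at sub-half-pixel scale? stop splitting */
        if (dist(p0, p1) < 0.5f && dist(p1, p2) < 0.5f && dist(p2, p3) < 0.5f) {
            emit(p0);
            return;
        }

        /* de Casteljau midpoints give the control points of the two halves,
         * each of which is itself a cubic Bezier */
        P a = mid(p0, p1), b = mid(p1, p2), c = mid(p2, p3);
        P d = mid(a, b),   e = mid(b, c);
        P m = mid(d, e);                     /* the point on the curve */

        split(p0, a, d, m);                  /* left half  */
        split(m, e, c, p3);                  /* right half */
    }

    int main(void)
    {
        P p0 = { 10, 10 }, p1 = { 40, 120 }, p2 = { 160, 120 }, p3 = { 190, 10 };
        split(p0, p1, p2, p3);
        emit(p3);                            /* close the last segment */
        return 0;
    }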

To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving only linear equations (e.g. BLAS, LINPACK). We then approximate non-linear equations as a large number of weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and it gets the same results.

This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs comes into play and then we will want to minimize geometry, but that's a separate issue.)
