I think it's really quite interesting how we often have to choose a trade-off between the computation required and the quality of what we can render! Is there a standard upper bound on the number of mesh elements typically rendered, at least in modern-day industry?
jeffreychen24
I find it really interesting that most of the triangles are being used for the fine details of the skull. With only 300 triangles, we already see the overall skull shape. The skull with 3000 triangles already looks pretty good! From afar, it's hard to distinguish from the 30000-triangle skull. It's just mind-boggling that we need an extra order of magnitude of triangles just to make the skull look nice and realistic.
rileylyman184
This has some clear parallels to the mipmapping ideas we learned earlier in the class. For instance, meshes that are farther from the camera can be replaced with their simplified counterparts to reduce the amount of computation required to represent the scene.
But how do we choose the level of simplification to use for a given mesh? In other words, what is the analog to computing the mipmap level D in this case? My thinking is that we would probably have some measure of how much space we want a triangle in a given mesh to take up on the screen. Then, if we have a mesh that is really far away whose triangles do not take up the required amount of space, we choose the simplified version of that mesh which satisfies our heuristic most closely. I would be interested to know what is used in practice.
brian-stone
@rileylyman184 It looks like the 3D modeling analogue to mipmaps is known as a "Level of Detail" (LOD) algorithm (https://en.wikipedia.org/wiki/Level_of_detail), and it can vastly reduce the number of vertex transformations needed to render a scene. Lower level-of-detail geometry can be generated either manually or automatically.
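To make that concrete, here is a minimal sketch of one way such a selection heuristic could look (everything here, including LodMesh, max_triangle_error, and the error-projection idea, is a hypothetical illustration under my own assumptions, not what any particular engine actually does): project each LOD's world-space simplification error onto the screen and pick the coarsest mesh whose projected error stays under about a pixel, much like choosing a mipmap level.

```cpp
#include <cmath>
#include <vector>

// Hypothetical LOD entry: one precomputed simplified version of a mesh.
// lods[0] is the full-detail mesh; later entries are progressively coarser.
struct LodMesh {
    float max_triangle_error;  // world-space error introduced by simplification
    // ... vertex/index buffers would live here
};

// Estimate how many pixels a world-space length covers on screen when viewed
// at the given distance, using the camera's vertical field of view.
float projected_size_px(float world_size, float distance,
                        float vertical_fov_rad, float screen_height_px) {
    float angular = 2.0f * std::atan(world_size / distance);  // angular extent
    return angular / vertical_fov_rad * screen_height_px;
}

// Pick the coarsest LOD whose simplification error, projected onto the screen,
// stays below a pixel tolerance (analogous to computing the mipmap level D).
int select_lod(const std::vector<LodMesh>& lods, float distance,
               float vertical_fov_rad, float screen_height_px,
               float error_tolerance_px = 1.0f) {
    int chosen = 0;
    for (int i = 0; i < (int)lods.size(); ++i) {
        float err_px = projected_size_px(lods[i].max_triangle_error, distance,
                                         vertical_fov_rad, screen_height_px);
        if (err_px <= error_tolerance_px)
            chosen = i;  // coarser mesh still indistinguishable at this distance
    }
    return chosen;
}
```

Since the per-LOD errors grow monotonically with coarseness, this just walks the list and keeps the last entry that still passes the tolerance test; real engines also typically add hysteresis so meshes do not pop back and forth between levels as the camera moves.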
camrankolahdouz
Seeing how the number of samples affects the curve, and then thinking about how this all matters in the context of where the object (the skull here) ends up in screen space, makes me wonder: at what point in the pipeline is it decided whether to simplify a mesh? It seems to me that would depend heavily on the artist's intent.