Lecture 8: Mesh Representations and Geometry Processing (90)
rsha256

Oof, what happened at n=30? I expected the quality to decrease, but I feel like you can still do better with 30...

Zc0in

If you only have 30 mesh elements, there is no way to represent a skull. Our mesh elements are not curved surfaces but flat planes, so the head needs lots of small faces to approximate an arc.
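A quick 2D sanity check of this point (just an illustrative sketch, not something from the lecture): approximating a circle of radius r with n flat segments leaves a worst-case gap of r * (1 - cos(pi/n)) between each chord and the arc, so curved regions eat up elements quickly.

```python
import math

def max_deviation(radius, n_segments):
    """Sagitta of one chord: the worst-case gap between a regular
    n-gon edge and the circle it approximates."""
    half_angle = math.pi / n_segments
    return radius * (1 - math.cos(half_angle))

# Even 30 flat segments around a unit circle still miss the curve by
# roughly 0.5% of the radius; a skull full of curved features needs far more.
for n in (8, 30, 300, 3000):
    print(n, max_deviation(1.0, n))
```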

hukellyy

I wonder how you would compute the complexity of an object so that you can optimize for the least number of mesh elements that still preserves the desired shape. In the case of the skull, there are multiple components for the mesh to cover (the skull body, eyes, nose, individual teeth, etc.), so as the number of elements decreases, those components disappear and become simplified; for example, in the n=300 model there doesn't seem to be enough mesh to cover the teeth. However, if we just want to cover the surface of a sphere or a simple prism with no arcs, I'd imagine it would be easier to reduce the number of mesh elements.

gabeclasson

The problem you pose reminds me a lot of the Nyquist theorem we covered earlier in the class. In many ways, a simple sphere or prism is akin to a low-frequency image, while the many teeth are akin to a high-frequency one. I wonder if a similar theorem/principle exists to both quantify the "frequency space" of a three-dimensional object and to determine how many mesh elements are necessary to represent it.

ShaamerKumar

I feel like we could merge this question with some of the algorithms covered in EECS 127 to find the "right" number of mesh elements while maintaining the "best" quality image of the skull. Especially for video games, I wonder what they use to optimise this for different settings like high quality, lower quality, etc.

joeyzhao123

I'm curious if mesh simplification is used in a similar way to mipmaps, where depending on how far away an object is, we use different numbers of triangles, since at a distance we can't make out the features but can still tell what it represents.

Staffjamesfong1

@joeyzhao123 That's exactly right! This kind of mesh simplification is the main workhorse behind drawing enormous scenes in real-time. The industry term for this is "level-of-detail". Gamers will instead call it "pop-in" when they want to complain about poor transitions between different LODs.
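For anyone curious what the bookkeeping looks like, here is a minimal sketch of distance-based LOD selection (the mesh names and distance thresholds below are made up for illustration; real engines also blend or dither between levels to hide the pop-in mentioned above):

```python
def pick_lod(distance_to_camera, lods):
    """lods: list of (max_distance, mesh) pairs, finest mesh first,
    sorted by increasing max_distance. Returns the most detailed mesh
    allowed at this distance."""
    for max_distance, mesh in lods:
        if distance_to_camera <= max_distance:
            return mesh
    return lods[-1][1]  # beyond every threshold: fall back to the coarsest mesh

# Hypothetical example: three versions of the skull at decreasing triangle counts.
skull_lods = [
    (10.0,  "skull_30000_tris"),
    (50.0,  "skull_3000_tris"),
    (200.0, "skull_300_tris"),
]

print(pick_lod(5.0, skull_lods))    # close up -> "skull_30000_tris"
print(pick_lod(120.0, skull_lods))  # far away -> "skull_300_tris"
```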

camacho-david

This reminds me of Epic Games' geometry system for Unreal Engine 5, Nanite, which is used to represent pixel-accurate detail on perceivable objects. Although the official documentation is up, I recommend this article for an overall summary: https://80.lv/articles/a-deep-dive-into-unreal-engine-s-5-nanite/

prannaypradeep999

Is there a range of triangle counts for known model types? For instance, is there a range of triangle counts that is industry standard for anyone designing a 3D model of a car?

Staffjamesfong1

@prannaypradeep999 In most cases, there is no "correct" answer to how many polygons is appropriate. It really depends on how much accuracy you need when representing a surface. More triangles affords more accuracy, but is more compute-heavy.
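One way to make "it depends on how much accuracy you need" concrete (again just the 2D circle toy model from earlier in the thread, not a production guideline): pick an error budget and solve for the element count it forces.

```python
import math

def segments_needed(radius, tolerance):
    """Smallest number of flat segments such that a regular polygon stays
    within `tolerance` of a circle of the given radius (sagitta bound)."""
    return math.ceil(math.pi / math.acos(1 - tolerance / radius))

# The error budget, not an industry standard, is what drives the count:
print(segments_needed(1.0, 0.01))    # within 1% of the radius    -> 23 segments
print(segments_needed(1.0, 0.0001))  # within 0.01% of the radius -> 223 segments
```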

bbcd0921

When n is even smaller, like less than 30, it looks like the mesh would not maintain its original shape.
