Just as farther-away textures are drawn with higher-level mipmaps, are lower-resolution models drawn when they are farther away? Is there an analogous process for models?
Is Mesh Downsampling useful when we are modifying a model for 3D printing?
@zhezhaosp19 Speaking from my limited 3D printing experience, mesh downsampling is indeed useful for speeding up 3D prints: a model with fewer faces and more straight edges prints faster.
I think it's interesting that not only can downsampling be more efficient for 3D printing, but it can also lend a unique aesthetic quality to the final product. It feels like a lot of computer graphics aims to model the real world as closely and as efficiently as possible, but it can be powerful to think about how graphics that deviate from 'realistic' standards can provide different perspectives and styles. (Another ex: 8-bit art)
After downsampling a mesh, is it possible to upsample it to recover the original mesh, or is some information always lost? Are there specific downsampling / upsampling algorithms with these properties?
I read an interesting article about using AI to turn low-res meshes into high-resolution ones, using systems pre-trained on a subject's shape, texture, etc.
@gavinmak Using low-resolution models in the distance, where fine mesh detail can't be resolved anyway, is a good intuition; here's a Wikipedia article on the subject https://en.wikipedia.org/wiki/Level_of_detail
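To make the LOD idea concrete, here's a minimal sketch of distance-based LOD selection. The distance thresholds and mesh names are made up for illustration; real engines also handle hysteresis and transitions between levels.

```python
# Hypothetical sketch: pick a level-of-detail mesh based on camera distance.
# Thresholds and mesh names are invented for this example.

def select_lod(distance, lods):
    """lods: list of (max_distance, mesh_name) sorted by increasing max_distance."""
    for max_dist, mesh in lods:
        if distance <= max_dist:
            return mesh
    return lods[-1][1]  # beyond every threshold: fall back to the coarsest mesh

lods = [(10.0, "bunny_high"), (50.0, "bunny_med"), (float("inf"), "bunny_low")]
print(select_lod(5.0, lods))    # close-up: full-resolution mesh
print(select_lod(200.0, lods))  # far away: coarsest mesh
```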
@kalebblack Some information is always destroyed in the downsampling process; we can try to reconstruct the original mesh as best we can with algorithms like Loop subdivision, but the result probably won't be exact.
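A toy 1D analogy shows why this is lossy: drop every other vertex of a polyline, then "upsample" with midpoint insertion (a crude stand-in for a subdivision step). The reconstruction generally differs from the original because the discarded detail can't be recovered.

```python
# Toy 1D analogy of lossy mesh downsampling, not a real mesh algorithm.

def downsample(points):
    """Keep every other vertex of a polyline."""
    return points[::2]

def upsample(points):
    """Insert midpoints between consecutive vertices (crude subdivision)."""
    out = []
    for a, b in zip(points, points[1:]):
        out.append(a)
        out.append((a + b) / 2)
    out.append(points[-1])
    return out

original = [0.0, 3.0, 1.0, 4.0, 2.0]
coarse = downsample(original)   # [0.0, 1.0, 2.0]
rebuilt = upsample(coarse)      # [0.0, 0.5, 1.0, 1.5, 2.0]
print(rebuilt == original)      # False: the bumps at 3.0 and 4.0 are gone for good
```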