One thing I was curious about was the tradeoff between subdivision and efficiency. Intuitively, it seems that most of the time, we would want to do as much subdivision as we can.
kingdish
I think one more thing that should be taken into consideration is the memory needed to store these vertices and edges. It looks like the memory required to display the mesh in real time grows exponentially as the subdivision level increases.
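To make that concrete, here's a rough sketch of my own (assuming a Loop-style 4-to-1 triangle subdivision on a closed mesh, which is what the lecture example uses): the face count quadruples every level, so storage really does grow exponentially in the subdivision level.

```python
def subdivided_counts(vertices, edges, faces, levels):
    """Return (V, E, F) after `levels` rounds of 4-to-1 triangle subdivision."""
    for _ in range(levels):
        vertices = vertices + edges      # one new vertex at each edge midpoint
        edges = 2 * edges + 3 * faces    # each edge splits in two, plus 3 interior edges per face
        faces = 4 * faces                # each triangle becomes 4
    return vertices, edges, faces

# Example: an icosahedron (V=12, E=30, F=20) after a few levels.
for level in range(5):
    print(level, subdivided_counts(12, 30, 20, level))
# Faces go 20, 80, 320, 1280, 5120 -- roughly 4x more storage per level.
```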
gprechter
This example reminds me of an image processing problem called Single Image Super-Resolution that I once read a paper about. The idea is to take a single low-resolution image and recover a high-resolution version of it. In this case, the original mesh seems to be the low-resolution version, while the subdivided mesh is the high-resolution counterpart. I believe the paper discussed a fairly involved method of interpolating pixels to recover a higher-resolution patch of the image, but what stood out to me were the applications of deep learning to the problem. I'm curious whether deep learning can also be applied to mesh upsampling to yield superior results.
Pinbat
@jeromylui, there's always a tradeoff between rendering speed and quality, and for projects like pre-rendered images and scenes you'd be right (though every artist wishes for instant rendering!). However, for real-time graphics we'd see more simplification than subdivision.
keirp
@gprechter It also might be possible to render the low resolution mesh at a low resolution and then use deep learning to generate a higher resolution version of the rendered image.