How would someone model geometry built out of a point cloud?
KevinXu02
This can usually be done with a 3D scanning device (e.g. LiDAR) or from a set of images with a Structure-from-Motion pipeline (https://colmap.github.io/index.html). A mesh can then be reconstructed from the point cloud with methods like marching cubes.
lycorisradiatu
In addition, there's a process called surface reconstruction, where the points in the cloud are used to generate a representation of the underlying surface; that representation can then be used for modeling geometry from a point cloud.
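To make the pipeline above concrete, here is a minimal numpy sketch (a toy setup of my own, not the COLMAP pipeline): sample a point cloud on a unit sphere, then build the unsigned distance field on a grid that a contouring method like marching cubes would turn into a mesh.

```python
import numpy as np

# Toy point cloud: 500 points sampled on the unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Coarse 16x16x16 grid covering the cloud.
axis = np.linspace(-1.5, 1.5, 16)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# Distance from each grid point to its nearest cloud point
# (brute force here; a k-d tree would scale better).
dists = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1).min(axis=1)
field = dists.reshape(16, 16, 16)
# Contouring `field` near its minimum value would recover a sphere-like mesh.
```

A real reconstruction would also estimate normals to get a *signed* field, which is what Poisson surface reconstruction and marching cubes typically consume.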
AnikethPrasad
What are the tradeoffs between the different explicit representations? Point clouds seem much more computationally intensive compared to polygon meshes so I would assume that point clouds are only necessary when we want precise details for complex and irregular shapes. Would augmented reality be a potential use-case for point clouds?
AlsonC
How do generative text-to-image models like Stable Diffusion decide on the best way to represent the geometry described by the text? I'm also wondering whether there are tradeoffs between these representations in terms of efficiency, similar to how some algorithms are preferred over others, or whether we just choose the representation that makes the shape most realistic or closest to our imagined desired outcome.
stang085
I wonder how much data each of these geometries would take up compared to pictures and other types of data. The point cloud also seems complicated, but I assume you would only need to store the coordinates of each point.
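Right, a raw point cloud is just coordinates. A quick back-of-envelope comparison (the sizes below are illustrative numbers I picked, not from lecture):

```python
# Each point: 3 float32 coordinates. Each mesh face: 3 int32 vertex indices.
BYTES_F32 = BYTES_I32 = 4

def point_cloud_bytes(n_points):
    return n_points * 3 * BYTES_F32

def mesh_bytes(n_vertices, n_faces):
    return n_vertices * 3 * BYTES_F32 + n_faces * 3 * BYTES_I32

# A 1-million-point raw scan vs. a decimated 10k-vertex mesh of the same shape
# (a closed triangle mesh has roughly twice as many faces as vertices).
cloud = point_cloud_bytes(1_000_000)  # 12,000,000 bytes = 12 MB
mesh = mesh_bytes(10_000, 20_000)     # 360,000 bytes = 0.36 MB
```

The mesh wins on storage because its connectivity lets far fewer vertices describe the same surface; the cloud keeps every measured sample.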
llejj
Would point clouds always model convex surfaces, or is there a way to tell where the borders should be?
S-Muddana
What are some examples of when we would use point clouds over a polygon mesh?
aravmisra
@S-Muddana that's a great question! As far as I understand it, and based on this helpful article: https://www.vrgeoscience.com/point-clouds-meshes-tiles/#:~:text=Mesh%20data%20is%20much%20easier,precision%20than%20a%20point%20cloud.
One example is when we need highly accurate data (as mentioned above, perhaps sourced from LiDAR) where we are certain that each point represents an exact measured position, rather than an interpolated value at some part of the mesh. One use case I can think of is autonomous vehicles, where that level of accuracy may be necessary for certain safety features! Anyone else, please feel free to jump in with other ideas or examples.
Alescontrela
As we approach extremely high compute budgets I wonder if there's any value in using implicit geometry representations.
stephanie-fu
It seems that, in some cases, there can be more value in implicit geometry representations with a high compute budget, as point-wise queries become cheaper (unless storage cheapens even more quickly).
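To illustrate why point-wise queries are cheap with an implicit representation: an inside/outside test is a single function evaluation, with no mesh traversal or intersection tests. A toy signed distance function for a sphere (my own example, not from lecture):

```python
import math

def sphere_sdf(x, y, z, r=1.0):
    """Signed distance to a sphere of radius r at the origin:
    negative inside, zero on the surface, positive outside."""
    return math.sqrt(x * x + y * y + z * z) - r

def is_inside(x, y, z):
    # One function call answers the query, regardless of surface complexity.
    return sphere_sdf(x, y, z) < 0.0
```

The same query against an explicit triangle mesh would require a ray cast or a spatial data structure over all the faces.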
ttalati
I feel like these methods were glossed over, so there is some value in describing what each of these terms means.
Point cloud: we store individual points in 3D space, and collectively they describe the entire geometry (naturally, a high-definition point cloud can require a lot of memory).
Polygon mesh: what we have mostly been focusing on in class, where triangles (or other polygons) describe the entire geometry.
Subdivision surfaces/NURBS: mathematical functions/algorithms that describe the geometry; this is what the rest of the lecture focuses on.
In terms of implicit representations: a level set is the set of points where a function takes a fixed value (e.g. f(x, y, z) = 0); in practice the function's values are stored on a grid of slices, and stacking those slices gives the entire geometry.
Honestly, I am not too sure what distance functions and algebraic surfaces mean and could not find much online.
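After a bit more digging, my best guess at those two terms (not the lecture's definitions, so take with a grain of salt): a distance function stores, for each point in space, the distance to the nearest point on the surface (signed variants are negative inside), and an algebraic surface is the zero set of a polynomial. A toy example of each:

```python
import math

def torus_poly(x, y, z, R=2.0, a=0.5):
    """Algebraic surface: a torus with major radius R and tube radius a
    is the zero set of the quartic polynomial
    (x^2 + y^2 + z^2 + R^2 - a^2)^2 - 4 R^2 (x^2 + y^2)."""
    s = x * x + y * y + z * z
    return (s + R * R - a * a) ** 2 - 4.0 * R * R * (x * x + y * y)

def sphere_signed_distance(x, y, z, r=1.0):
    """Distance function: exact signed distance to a sphere of radius r."""
    return math.sqrt(x * x + y * y + z * z) - r
```

For example, (R + a, 0, 0) = (2.5, 0, 0) sits on the torus's outer equator, so the polynomial evaluates to zero there.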
Hsong159
How does the advancement of modern real-time rendering techniques influence our choice between explicit and implicit geometric representations?