In lecture, Professor O'Brien mentioned that by moving the camera to different positions, you get the "grid of cameras" effect that allows for 4D rendering of images. Is it possible to take a smaller number of these images and artificially interpolate the perspectives between the actual images to create a similarly realistic 4D rendering? This reminds me a little of sampling and how more samples = better rendering.
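A rough sketch of that interpolation idea (the array layout, function names, and toy numbers are assumptions for illustration, not anything from lecture): if the light field is stored as one image per camera in the grid, a view between cameras can be approximated by bilinearly blending the nearest captured views.

```python
import numpy as np

def interpolate_view(light_field, s, t):
    """Approximate the image at fractional camera position (s, t) by
    bilinearly blending the four nearest captured cameras.

    light_field: array of shape (S, T, H, W) -- one HxW image per (s, t) camera.
    s, t: fractional camera coordinates, e.g. s = 2.3 lies between cameras 2 and 3.
    """
    S, T = light_field.shape[:2]
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1, t1 = min(s0 + 1, S - 1), min(t0 + 1, T - 1)
    ds, dt = s - s0, t - t0

    # Weighted blend of the four surrounding camera images.
    return ((1 - ds) * (1 - dt) * light_field[s0, t0]
            + ds * (1 - dt) * light_field[s1, t0]
            + (1 - ds) * dt * light_field[s0, t1]
            + ds * dt * light_field[s1, t1])

# Example: a toy 4x4 grid of 8x8 grayscale "cameras".
lf = np.random.rand(4, 4, 8, 8)
novel_view = interpolate_view(lf, 1.5, 2.25)   # view between captured cameras
```

Real light field rendering resamples individual rays rather than blending whole images, so parallax is reproduced correctly, but the sampling intuition is the same: the fewer cameras you capture, the more the interpolation has to guess.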
JefferyYC
I would like to confirm my understanding of the 4 parameters. x and y refer to the camera's position in space (no z value since radiance is constant along the ray). Do theta and phi refer to the angles of the ray with respect to the x axis and y axis? Also, on the other slide the other two parameters are u and v. Are they the same as theta and phi?
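For what it's worth, one common convention (the two-plane "light slab" parametrization) records where each ray crosses two parallel planes: (u, v) on one plane and (s, t) on the other, which is equivalent to storing a position plus two angles. A sketch of that conversion (the plane spacing, axis names, and angle convention are assumptions, not from the slides):

```python
import numpy as np

def ray_from_two_planes(u, v, s, t, plane_gap=1.0):
    """Convert a two-plane light field sample into a position + direction ray.

    (u, v): where the ray crosses the first plane at z = 0.
    (s, t): where it crosses the second plane at z = plane_gap.
    Returns the ray origin and the angles it makes toward +x and +y.
    """
    origin = np.array([u, v, 0.0])
    direction = np.array([s - u, t - v, plane_gap])
    direction /= np.linalg.norm(direction)
    theta = np.arctan2(direction[0], direction[2])  # tilt toward +x
    phi = np.arctan2(direction[1], direction[2])    # tilt toward +y
    return origin, theta, phi

origin, theta, phi = ray_from_two_planes(u=0.2, v=0.1, s=0.5, t=0.4)
```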
ryurtyn
How do you capture all the images with a constant light source without having a shadow get in the way?
imjal
@JefferyYC I think you're correct about x and y. My guess is that theta and phi are spherical coordinates with the radius held constant, so you would be sampling the sphere enclosing the object.
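If theta and phi really are spherical angles at a fixed radius, sampling cameras on the enclosing sphere might look like this (the radius, angle ranges, and inward-looking convention are assumptions, not from the slides):

```python
import numpy as np

def camera_on_sphere(theta, phi, radius=1.0):
    """Place a camera on the sphere enclosing the object.

    theta: polar angle from the +z axis, phi: azimuth around z.
    Returns the camera position and a view direction pointing back at the
    center, so every sample looks inward at the object.
    """
    position = radius * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])
    look_dir = -position / np.linalg.norm(position)
    return position, look_dir

# Example: 8 azimuth steps at 3 latitudes gives a coarse sampling of the sphere.
samples = [camera_on_sphere(th, ph)
           for th in np.linspace(0.3, np.pi - 0.3, 3)
           for ph in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
```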
NKJEW
This process of using multiple camera angles to generate a single "volume" seems quite similar to how photoscans work in terms of overall procedure, and yet they seem to operate on a different conceptual basis. For light fields, it seems to be about just capturing the light, and the fact that an "object" becomes realizable is simply because we can interpret that light as such. But for a photoscan, an actual 3D mesh is generated (along with texture maps), resulting in a "physical" object being created.
Is there any connection at all between these two processes, or are they only connected by the high-level need to capture images from many different angles?
nobugnohair
I am thinking that this spherical model could also be used to construct a VR/AR world when used inversely (light hitting the surface from the outside). So we could sample pixels of light on the outer surface and interpolate them to display on the viewer's headset...
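A rough sketch of that inverse lookup (the unit-sphere radius and all names are assumptions): a view ray from the headset wearer's eye is intersected with the enclosing sphere, and the exit point gives the (theta, phi) sample to read, or to interpolate between neighboring samples of.

```python
import numpy as np

def sphere_sample_for_ray(eye, ray_dir, radius=1.0):
    """Find which (theta, phi) sample on the enclosing sphere a headset
    view ray would read, assuming the viewer sits inside the sphere.

    Solves the ray-sphere intersection |eye + t * ray_dir| = radius for the
    exit point, then converts that point to spherical angles.
    """
    d = ray_dir / np.linalg.norm(ray_dir)
    b = np.dot(eye, d)
    c = np.dot(eye, eye) - radius**2
    t_exit = -b + np.sqrt(b * b - c)      # farther root = exit point
    p = eye + t_exit * d
    theta = np.arccos(np.clip(p[2] / radius, -1.0, 1.0))
    phi = np.arctan2(p[1], p[0])
    return theta, phi

# Example: a viewer slightly off-center looking along +x.
theta, phi = sphere_sample_for_ray(np.array([0.1, 0.0, 0.0]),
                                   np.array([1.0, 0.0, 0.0]))
```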