Lecture 23: Special Topics I (5)
adityaramkumar

It's interesting to understand how depth sensors work. Our eyes are depth sensors in and of themselves. If we had only one camera, how much depth could we reconstruct using just software and inference?

zekailin00

Interesting to see the depth sensor data from the iPhone. Compared to the cameras on the iPhone, does the depth sensor have a lower resolution? How high would the depth sensor's resolution need to be for the data to be useful?

showyouramen

In response to reconstructing depth from just one camera (or one photo): I recently took a class where we read a paper on this. The idea was to classify objects in the image so that we would have some baseline for estimating depth, because otherwise depth is pretty difficult to define from a single view. The intuition mirrors how we think: given an image, we imagine its depth using prior knowledge of the objects in it. Pretty interesting stuff, and it's cool that it sort of came up in this class as well.
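The core of that prior-knowledge idea can be sketched with simple pinhole geometry: if we recognize an object and know its typical real-world size, similar triangles give depth ≈ focal length × real height / pixel height. A toy sketch (the class heights and focal length below are illustrative assumptions, not values from the paper):

```python
# Toy sketch: monocular depth from object-class size priors (pinhole model).
# All heights and camera parameters are made-up illustrative numbers.

TYPICAL_HEIGHT_M = {   # rough real-world heights per recognized class
    "person": 1.7,
    "car": 1.5,
    "chair": 0.9,
}

def estimate_depth(cls: str, pixel_height: float, focal_px: float = 1000.0) -> float:
    """Depth in meters via similar triangles: depth = f * H_real / h_pixels."""
    return focal_px * TYPICAL_HEIGHT_M[cls] / pixel_height

# A detected "person" spanning 500 px with f = 1000 px comes out around 3.4 m.
print(round(estimate_depth("person", 500.0), 2))
```

Real systems learn these priors implicitly with a neural network rather than using a lookup table, but the geometric intuition is the same.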
