DOF is a very important concept to understand in photography and all general camera work. It lets you control which parts of an image are in focus, for example keeping your subject sharp while the background blurs. I'm curious to learn how we are going to apply this in our assignments, as I had never heard of this concept outside of general photography.
orenazad
This got me thinking of what I believe(?) is a simple way to implement this in our projects! I'm sure there are much better mathematical ways to get the desired result, but I think this would be simple and would work. Plus, I'm not sure if this type of stuff is covered in project 3-2 or beyond.
In CS194, one of the final projects is the light field project. Light fields are of course covered later in lecture and are super cool. In that project, we used a 17x17 grid of rectified and cropped images to simulate depth refocus and aperture adjustment! I wonder if this could be done with project 3-1 simply by writing some code to move the camera around in a similar 17x17 grid and combining all the images after the fact. Might try this if I find the time! (and if it would work, I'm not entirely sure)
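For anyone curious, the shift-and-add idea behind that refocusing could look something like this. This is a rough sketch, not the CS194 starter code: the array layout and the `refocus` function are made up for illustration, and `np.roll` is a crude stand-in for a proper sub-pixel shift.

```python
import numpy as np

def refocus(grid, alpha):
    """Average all sub-aperture views, shifting each one toward the grid
    center by alpha pixels per unit of grid offset. Varying alpha moves
    the virtual focal plane; alpha = 0 keeps the original focus.
    grid has shape (rows, cols, H, W, 3)."""
    rows, cols = grid.shape[:2]
    cy, cx = rows // 2, cols // 2
    acc = np.zeros_like(grid[0, 0], dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            dy = int(round(alpha * (r - cy)))
            dx = int(round(alpha * (c - cx)))
            # np.roll wraps at the edges; a real implementation would
            # crop or pad instead, but this shows the idea
            acc += np.roll(grid[r, c], shift=(dy, dx), axis=(0, 1))
    return acc / (rows * cols)
```

Aperture adjustment falls out of the same picture: averaging only the views near the grid center simulates a small aperture (deep DOF), while averaging all 17x17 views simulates a wide one.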
tnthi115
In photography lingo, you might hear depth of field associated or even conflated with aperture/f-stop. The lower your f-stop (the wider your aperture), the shallower your depth of field and the blurrier your background will be. Oftentimes the blurred background/out-of-focus elements and little blurred circles from light sources are called "bokeh," which comes from the Japanese word boke meaning "blur" or "haze." Bokeh is often very desirable for background-subject separation and general aesthetics, and it is quite difficult to find photographs (especially things like portraits) without it these days. To maximize bokeh, increase your focal length, use the smallest f-stop (widest aperture), and focus at the minimum focusing distance of your lens.
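The focal length/f-stop/distance relationship can be made concrete with the standard thin-lens DOF approximation. A small sketch (the 0.03 mm circle-of-confusion value is just a common full-frame assumption, not a universal constant):

```python
def dof_limits(f, N, s, c=0.03):
    """Near and far limits of acceptable sharpness.
    f: focal length (mm), N: f-stop, s: focus distance (mm),
    c: acceptable circle of confusion (mm)."""
    H = f * f / (N * c) + f  # hyperfocal distance
    near = H * s / (H + (s - f))
    # Beyond the hyperfocal distance everything to infinity is sharp
    far = H * s / (H - (s - f)) if s < H else float("inf")
    return near, far
```

For example, an 85 mm lens focused at 1 m gives only about a centimeter of sharpness on either side at f/1.8, but roughly ten times that at f/16, which is exactly why wide apertures produce strong subject separation.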
rsha256
Is this used for speedups in rendering, like not rendering parts of the image sharply that are outside the DOF?
countermoe
@rsha256 I have no idea if there's a standard optimization for this, but it's extremely common to use shortcuts for things the viewer can't see directly, and it's easy to imagine that a lower poly-count model would be less noticeable while out of focus. I think this just comes down to how far a developer is willing to optimize and take shortcuts.
sZwX74
@rsha256 This kind of reminds me of what we did with mipmaps: render less detail when we don't need it. I imagine a similar idea could be applied here if we mark something as being in the background/foreground relative to the DOF.
Staffjamesfong1
@rsha256 Ironically, accurately simulating lens blur typically increases render times. This is because we are now computing an integral over the aperture of the camera. You will get to explore this more in Project 3-2.
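That aperture integral is typically estimated by Monte Carlo: instead of shooting every camera ray from a single pinhole, each ray's origin is jittered across a thin-lens disk and aimed through the plane of focus, so averaging many samples per pixel produces the blur. A minimal sketch in camera space (the function name and signature are made up, not Project 3-2's actual API):

```python
import math, random

def sample_thin_lens_ray(pinhole_dir, lens_radius, focal_distance):
    """Given a pinhole camera ray direction (camera looks down +z),
    return a (origin, direction) ray jittered over the lens disk.
    All such rays pass through the same point on the plane of focus,
    so only geometry at that depth stays sharp."""
    # Point where the pinhole ray hits the plane of focus
    t = focal_distance / pinhole_dir[2]
    focus = (pinhole_dir[0] * t, pinhole_dir[1] * t, focal_distance)
    # Uniform sample on the lens disk
    r = lens_radius * math.sqrt(random.random())
    theta = 2 * math.pi * random.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)
    # Direction from the lens sample toward the focus point
    d = tuple(f - o for f, o in zip(focus, origin))
    norm = math.sqrt(sum(x * x for x in d))
    return origin, tuple(x / norm for x in d)
```

With `lens_radius = 0` this degenerates to the ordinary pinhole ray, which is why adding DOF costs extra samples: one ray per pixel is no longer enough to resolve the average over the aperture.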
https://inst.eecs.berkeley.edu/~cs194-26/fa17/hw/proj5/