Lecture 14: Material Modeling (9)
hershg

On an image like this, I believe we basically have a mesh corresponding to the surface of our shape, and we project a texture image onto that mesh. How do we avoid seeing weird projection warps where texture projections meet (such as contradictory wood grains running into each other when projected onto this sphere)? And how do we account for the interaction of multiple texture regions of the mesh so we don't render anything we don't need to (e.g., don't render the checkerboard pattern behind/under our sphere object)?

Most importantly, what's the relationship between texture projection (like in project 1) and the techniques we've recently been covering for understanding the lighting of various shapes? Are these two processes done in conjunction, or are they independent of each other?

sunsarah

In lecture, Prof. JRK mentioned the term "albedo", which I had never heard before, so I googled it and found the results pretty interesting. "Albedo" refers to the proportion of incident light reflected by a surface, which is apparently not the same as the "diffuse coefficient", the proportion of incident light that is reflected diffusely. Since albedo can also include specular reflection and other types of reflection, it is not always the same thing as the diffuse coefficient. (Source: https://computergraphics.stackexchange.com/questions/350/albedo-vs-diffuse)
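To make the distinction concrete, here is a tiny sketch with made-up illustrative numbers (not fit to any real material): the albedo is the total fraction of incident light reflected by any mechanism, while the diffuse coefficient counts only the diffusely reflected part.

```python
# Illustrative energy bookkeeping for one surface interaction.
incident = 1.0          # normalized incident irradiance
k_diffuse = 0.55        # fraction reflected diffusely (Lambertian lobe)
k_specular = 0.15       # fraction reflected in the specular lobe
absorbed = incident - (k_diffuse + k_specular)   # 0.30 absorbed

# Albedo counts *all* reflected light, diffuse and specular alike.
albedo = k_diffuse + k_specular

print(f"albedo = {albedo:.2f}, diffuse coefficient = {k_diffuse:.2f}")
# -> albedo = 0.70, diffuse coefficient = 0.55
```

For a purely Lambertian surface (k_specular == 0) the two quantities coincide, which is probably why they often get conflated.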

rahulmalayappan

@hershg I think we avoid texture discontinuities by cleverly choosing the texture coordinates on the sphere; this corresponds to a "UV unwrapping" process in which we place texture seams on the mesh, choosing seams that are not visually distracting.
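A minimal sketch of where a seam comes from: the standard lat-long (equirectangular) parameterization of the sphere maps longitude to u, so two points just either side of the atan2 branch cut get u values near 1 and near 0, respectively. That meridian is exactly the seam a good UV unwrap tries to hide.

```python
import math

def sphere_uv(x, y, z):
    """Map a point on the unit sphere to (u, v) texture coordinates
    using the standard lat-long (equirectangular) parameterization."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)   # longitude -> u in [0, 1)
    v = 0.5 - math.asin(y) / math.pi               # latitude  -> v in [0, 1]
    return u, v

# Two points straddling the atan2 branch cut (the meridian at angle pi):
eps = 1e-4
u_left, _  = sphere_uv(math.cos(math.pi - eps), 0.0, math.sin(math.pi - eps))
u_right, _ = sphere_uv(math.cos(math.pi + eps), 0.0, math.sin(math.pi + eps))
print(u_left, u_right)   # u jumps from ~1.0 to ~0.0 across the seam
```

Unless the texture tiles seamlessly in u, this jump produces exactly the kind of "contradicting grain" artifact described above, which is why artists place the seam somewhere unobtrusive.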

There is an interesting chapter in PBRT (on which Mitsuba is based) that describes texturing and texture antialiasing; similarly to how we calculated screen-space gradients in project 1 based on shifts of one pixel in the x and y directions, we can antialias textures in the path-tracing model by sending out rays that are shifted by one sample in the x and y directions. The chapter is at http://www.pbr-book.org/.
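The screen-space-gradient idea from project 1 can be sketched as follows; this is a simplified level-of-detail computation (a hypothetical helper, not PBRT's actual API), where the UV coordinates of the pixel's x- and y-neighbors give a texel-space footprint whose log2 picks the mip level:

```python
import math

def mip_level(uv_center, uv_dx, uv_dy, tex_size):
    """Estimate a mip level from screen-space UV differences.
    uv_dx / uv_dy are the UVs of the samples one pixel over in x and y;
    tex_size is the texture resolution in texels."""
    du_dx = (uv_dx[0] - uv_center[0]) * tex_size
    dv_dx = (uv_dx[1] - uv_center[1]) * tex_size
    du_dy = (uv_dy[0] - uv_center[0]) * tex_size
    dv_dy = (uv_dy[1] - uv_center[1]) * tex_size
    # Footprint = length of the larger texel-space gradient.
    footprint = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    return max(0.0, math.log2(footprint)) if footprint > 0 else 0.0

# Neighbors landing 4 texels apart should sample mip level 2,
# since each mip level halves the resolution.
lod = mip_level((0.5, 0.5), (0.5 + 4/256, 0.5), (0.5, 0.5 + 4/256), 256)
print(lod)  # -> 2.0
```

In a path tracer the analogous trick is ray differentials: alongside the primary ray, track where rays offset by one sample in x and y would hit, and use the resulting UV spread to filter the texture lookup.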
