Lecture 6: The Rasterization Pipeline (13)
jbf11

It seems to me that this algorithm requires going through the entire rasterization pipeline for each object in the world, and only deciding after the computations whether the object will actually be visible. I imagine this can get very inefficient if many objects are hidden from the current perspective. Is there an efficient way for algorithms to quickly skip objects, or parts of surfaces, that cannot be visible from the current perspective within the Z-Buffer framework? (I'm thinking of some kind of overall depth data associated with each object.) Also, how can the Z-Buffer algorithm be adapted for partially transparent triangles?

Boomaa23

@jbf11 Interesting thoughts! I also thought along the same lines that this seems very inefficient. I was thinking (though I don't know this for sure) that you could store the minimum z-value for each triangle; that way, you can skip the whole triangle if it is entirely behind what is already drawn into the frame buffer. You could also maybe sort the triangles beforehand by z-value (I'm not sure how, but it might be more efficient in amortized time).
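As a rough illustration of that idea, here is a toy rasterizer with a conservative per-triangle early-out (a minimal sketch under my own assumptions, not the course's implementation; all names are made up). It assumes screen-space vertices (x, y, z) where smaller z means closer to the camera:

```python
# Toy z-buffer rasterizer with a conservative per-triangle early-out.
# Assumption: vertices are already in screen space, smaller z = closer.

def edge(ax, ay, bx, by, px, py):
    # Signed area (x2) of triangle (a, b, p); the sign tells which side p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(triangles, width, height):
    INF = float("inf")
    zbuf = [[INF] * width for _ in range(height)]   # nearest depth so far
    fbuf = [[None] * width for _ in range(height)]  # drawn colors
    for color, (v0, v1, v2) in triangles:
        xs = (v0[0], v1[0], v2[0])
        ys = (v0[1], v1[1], v2[1])
        zs = (v0[2], v1[2], v2[2])
        x0, x1 = max(0, int(min(xs))), min(width - 1, int(max(xs)))
        y0, y1 = max(0, int(min(ys))), min(height - 1, int(max(ys)))
        if x0 > x1 or y0 > y1:
            continue
        # Early-out: if the triangle's nearest vertex is still farther than
        # the farthest depth already stored in its bounding box, every
        # per-sample depth test would fail, so skip the whole triangle.
        bbox_far = max(zbuf[y][x]
                       for y in range(y0, y1 + 1)
                       for x in range(x0, x1 + 1))
        if min(zs) > bbox_far:
            continue
        area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
        if area == 0:
            continue  # degenerate triangle
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                px, py = x + 0.5, y + 0.5  # sample at pixel center
                w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py)
                w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py)
                w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py)
                inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                         (w0 <= 0 and w1 <= 0 and w2 <= 0)
                if inside:
                    # Barycentric interpolation of depth.
                    z = (w0 * zs[0] + w1 * zs[1] + w2 * zs[2]) / area
                    if z < zbuf[y][x]:  # standard z-buffer test
                        zbuf[y][x] = z
                        fbuf[y][x] = color
    return fbuf, zbuf
```

Of course, scanning the whole bounding box for `bbox_far` costs as much as rasterizing it, so the early-out only pays off if that query is cheap; that's essentially why real systems keep a coarse pyramid of farthest depths (the hierarchical z-buffer idea) instead of scanning per pixel.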

jefforee

In the previous lecture, we learned that standard camera coordinates have the z-axis pointing away from the scene and towards the camera (https://cs184.eecs.berkeley.edu/sp24/lecture/4-64/transforms). In this implementation, why are we storing the minimum z, then? Shouldn't we store the maximum z, since z increases going towards the camera?
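My understanding (an assumption on my part, not something stated in lecture) is that the "min z" refers to a stored depth value, not the raw camera-space z: since the camera looks down the -z axis, pipelines typically negate z (or use a projection that grows with distance) before the depth test, so "keep the minimum" still means "keep the closest". A tiny sketch of the sign flip, with a made-up function name:

```python
# Hypothetical helper: convert right-handed camera-space z (camera looks
# down -z, so more-negative z = farther away) into a stored depth value
# that increases with distance from the camera.
def depth_from_camera_z(z_camera):
    return -z_camera

# A point at z = -1 (near) gets a smaller stored depth than one at z = -10
# (far), so the usual "keep the minimum depth" z-buffer test picks the
# closer surface.
near = depth_from_camera_z(-1.0)
far = depth_from_camera_z(-10.0)
assert near < far
```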
