The problem may be that lighting calculations require information from occluded fragments. However, I've also seen this "deferred rendering" pipeline that draws everything to different frame buffers first and then does the fragment calculations, so I'm not too sure either.
emarkley
It seems like you could calculate things like surface normals for each pixel prior to z-testing but figure out shading values afterward, saving a few calculations. I wonder if, in practice, the implementation of this pipeline is a lot more nuanced.
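That's roughly how deferred shading is usually described: a geometry pass does the z-test and writes per-pixel attributes (normals, depth, material data) into a G-buffer, and a separate lighting pass then shades each visible pixel exactly once from those stored attributes. Here's a tiny CPU-side sketch of that two-pass control flow; all the names and the toy scene are invented for illustration, and real pipelines do this on the GPU with render targets rather than plain arrays:

```cpp
// Minimal sketch of the "deferred" idea: geometry pass resolves visibility
// and stores attributes, lighting pass shades only the surviving pixels.
#include <array>
#include <cstdio>
#include <limits>
#include <vector>

constexpr int kWidth  = 4;   // tiny "screen" so the demo prints easily
constexpr int kHeight = 4;

// Per-pixel attributes written by the geometry pass (a toy G-buffer entry).
struct GBufferPixel {
    float nx = 0.f, ny = 0.f, nz = 0.f;                    // surface normal
    float depth = std::numeric_limits<float>::infinity();  // nearest z so far
    bool  covered = false;                                  // any geometry here?
};

// A toy "fragment": what rasterizing one triangle might produce at one pixel.
struct Fragment {
    int   x, y;
    float depth;
    float nx, ny, nz;
};

// Pass 1: depth-test every fragment and keep only the attributes of the
// closest one per pixel. No lighting math happens here.
void geometryPass(const std::vector<Fragment>& frags,
                  std::array<GBufferPixel, kWidth * kHeight>& gbuffer) {
    for (const Fragment& f : frags) {
        GBufferPixel& px = gbuffer[f.y * kWidth + f.x];
        if (f.depth < px.depth) {   // z-test: closer fragment wins
            px.depth = f.depth;
            px.nx = f.nx; px.ny = f.ny; px.nz = f.nz;
            px.covered = true;
        }
    }
}

// Pass 2: run the (potentially expensive) shading exactly once per visible
// pixel, using only the stored attributes. Occluded fragments were already
// discarded, so their shading cost is never paid.
void lightingPass(const std::array<GBufferPixel, kWidth * kHeight>& gbuffer) {
    const float lightDir[3] = {0.f, 0.f, 1.f};   // toy directional light
    for (int y = 0; y < kHeight; ++y) {
        for (int x = 0; x < kWidth; ++x) {
            const GBufferPixel& px = gbuffer[y * kWidth + x];
            if (!px.covered) { std::printf("  .  "); continue; }
            float ndotl = px.nx * lightDir[0] + px.ny * lightDir[1]
                        + px.nz * lightDir[2];
            std::printf("%.2f ", ndotl > 0.f ? ndotl : 0.f);
        }
        std::printf("\n");
    }
}

int main() {
    std::array<GBufferPixel, kWidth * kHeight> gbuffer{};

    // Two overlapping fragments at pixel (1,1): the nearer one (depth 0.3)
    // survives the z-test; the farther one (depth 0.7) is never shaded.
    std::vector<Fragment> frags = {
        {1, 1, 0.7f, 1.f, 0.f, 0.f},   // occluded, faces sideways
        {1, 1, 0.3f, 0.f, 0.f, 1.f},   // visible, faces the light
        {2, 2, 0.5f, 0.f, 0.f, 1.f},
    };

    geometryPass(frags, gbuffer);
    lightingPass(gbuffer);   // prints shaded intensity for visible pixels only
    return 0;
}
```

One caveat I've read about is that this only works cleanly for opaque geometry, since the G-buffer keeps a single surface per pixel; transparent surfaces still need the fragments behind them, which might be what the point above about lighting needing occluded fragments is getting at.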