I wonder how transparent surfaces would be handled (or whether they can be handled at all by this algorithm). Additionally, this method does not seem very memory efficient; have any successful optimizations been implemented?
I think in the last comment "simple" -> "sample".
This algorithm still has to process each sample for each triangle. The extra z-buffer (adding space complexity) lets us save on writes to the framebuffer, but we still run depth checks on every triangle regardless of whether it lies behind triangles already committed to the framebuffer. So how does this really help efficiency? Also, in the worst case, if triangles happen to be processed in the worst possible order (farthest depth to closest), we end up rendering every triangle and painting over old pixels anyway, reducing to the Painter's Algorithm. How does the z-buffer avoid these pitfalls in practice?
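A minimal sketch of the depth test may make the trade-off above concrete (names like `render` and the fragment tuple format are hypothetical, not from the lecture). The final image is the same for any processing order, but a bad order (far-to-near) does more framebuffer writes, which is exactly the Painter's-Algorithm-like worst case described above:

```python
INF = float("inf")

def render(fragments, width, height):
    """Hypothetical rasterizer back end.
    fragments: iterable of (x, y, depth, color), smaller depth = closer."""
    zbuf = [[INF] * width for _ in range(height)]      # one depth per sample
    framebuf = [[None] * width for _ in range(height)]
    writes = 0
    for x, y, depth, color in fragments:
        if depth < zbuf[y][x]:        # closer than anything seen so far?
            zbuf[y][x] = depth
            framebuf[y][x] = color    # only then touch the framebuffer
            writes += 1
    return framebuf, writes

# Same three fragments covering one pixel, in two orders:
frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "green"), (0, 0, 1.0, "blue")]
img_a, w_a = render(frags, 1, 1)                   # far-to-near: 3 writes
img_b, w_b = render(list(reversed(frags)), 1, 1)   # near-to-far: 1 write
# Both images show "blue"; only the write count differs.
```

This illustrates the partial answer to the question: correctness never depends on order, and in practice triangle order is rarely adversarial, so many depth tests fail early and skip the (more expensive) shading and framebuffer write.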
Here's a detailed explanation (with diagrams) of the Z-Buffer algorithm on GeeksforGeeks: https://www.geeksforgeeks.org/z-buffer-depth-buffer-method/. Interesting points to note: z values (depth values) are usually normalized to [0, 1]. The article uses the convention that z = 0 is the Back Clipping Plane and z = 1 is the Front Clipping Plane. Also, polygons don't have to be pre-sorted for the Z-buffer.