Lecture 27: Inverse Graphics/Research In Angjoo's Group (76)
s-tian

If you're curious about the deep-learning specifics, one contribution the authors found important for getting high-quality results was applying a positional encoding to each of the five input coordinates of the MLP (the 3D position and the 2D viewing direction), very similar to the positional encoding used to inject location information into Transformer networks. I wonder whether this trend of using such encodings to better fit high-frequency features in the output can be replicated across a variety of other domains.
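For concreteness, here's a minimal NumPy sketch of the kind of encoding described above: each scalar coordinate p is mapped to (sin(2^k πp), cos(2^k πp)) for k = 0, ..., L-1 before being fed to the MLP. The function name and the default of L = 10 frequency bands are illustrative choices (the NeRF paper uses different band counts for position and direction), not the authors' exact implementation.

```python
import numpy as np

def positional_encoding(coords, num_freqs=10):
    """Encode each coordinate p as [sin(2^k * pi * p), cos(2^k * pi * p)]
    for k = 0 .. num_freqs - 1 (a NeRF-style gamma(p) sketch).

    coords: array of shape (..., D), e.g. D = 5 for (x, y, z, theta, phi).
    Returns an array of shape (..., D * 2 * num_freqs).
    """
    coords = np.asarray(coords, dtype=np.float64)
    freqs = np.pi * 2.0 ** np.arange(num_freqs)   # [pi, 2*pi, 4*pi, ...]
    angles = coords[..., None] * freqs            # (..., D, num_freqs)
    # Per coordinate: num_freqs sines followed by num_freqs cosines.
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)    # flatten to (..., D*2*num_freqs)

# A 5D input with L = 10 bands expands to 100 features.
gamma = positional_encoding(np.zeros(5))
print(gamma.shape)  # (100,)
```

The intuition, following the paper, is that a plain MLP is biased toward low-frequency functions, so lifting the inputs into this higher-dimensional sinusoidal basis lets it fit sharp, high-frequency variation in color and geometry.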
