Lecture 1: Introduction
NicholasJJ

Looking at all these examples, it strikes me just how interrelated computer graphics is. A few slides back there was an example of inverse graphics being used to draw a virtual face from a video feed, and I'm guessing the tools used to solve that problem aren't that different from how the Campanile texture scans worked. Recovering a VR headset's position from the cameras on the headset also seems like a related problem.

yirenng (Staff)

@NicholasJJ -- yes, I agree there are striking common foundations across much of visual computing. The examples you cited lean more toward computer vision techniques, and as I described in class there is a virtuous cycle of research and development between graphics and vision, especially today.

Perhaps the biggest change in computer vision between the time when Paul Debevec worked on the Campanile project and the other example you cited -- recent, real-time inverse graphics that recovers the expression of a human face from video -- is the emergence of big-data machine-learning techniques. The Campanile project relied much more on classical vision techniques, such as stereo correspondence, whereas the face-from-video example relies on machine learning.
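
To make "stereo correspondence" a bit more concrete, here is a minimal block-matching sketch in Python (illustrative only -- not the Campanile pipeline; the window size and disparity search range are arbitrary assumptions). For each pixel in the left image it slides a small window along the same scanline of the right image and keeps the offset (disparity) with the lowest sum-of-squared-differences; depth is inversely proportional to that disparity.

```python
import numpy as np

def block_match_disparity(left, right, window=7, max_disp=64):
    """Toy stereo correspondence via block matching on rectified grayscale images.

    left, right: 2D uint8/float arrays of the same shape (rectified stereo pair).
    Returns a per-pixel disparity map (0 where no match was attempted).
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Search along the same scanline in the right image.
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

This brute-force version is slow and noisy; real systems add image rectification, smarter matching costs, cost aggregation, and sub-pixel refinement, but the core idea of finding corresponding patches along scanlines is the same.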

A leader on inverse graphics at Berkeley is Prof. Angjoo Kanazawa.
