Comments

sphindle1 in Lecture 19: Introduction to Color Science (9)

It's super trippy to think that the way you see a color may not be the same way someone else sees that color. Really makes you wonder how other people see the world...

sphindle1 in Lecture 15: Cameras and Lenses (43)

One really cool effect I learned in my photography class that can be created with long-exposure photography is the ghost effect. It is created by having a person stand in an area for a while and then quickly move out of the camera's view while the camera takes the picture. This leaves a transparent image of the person, making them look like a ghost.

jeshlee121 in Lecture 7: Geometry And Splines (22)

Do people still use this in the modern day?

jeshlee121 in Lecture 26: VR (Cont) (74)

Tilt Brush is one of the most fun apps I've tried in VR.

jeshlee121 in Lecture 26: VR (Cont) (5)

From my experience, these can be a pain to set up (with getting the right distance between the sensors and between you and the sensors, etc.)

jeshlee121 in Lecture 26: VR (Cont) (3)

I bought myself a Google Cardboard, but after the initial novelty wore off, it just doesn't seem compelling enough that I want to pick it up and play with it again.

jeshlee121 in Lecture 26: VR (Cont) (21)

@bchee, what did he say is the cause of motion sickness?

jeshlee121 in Lecture 26: VR (Cont) (59)

@dtseng, on the Oculus Rift, there is a feature where you can adjust the distance between the lenses so that you can find the sweet spot where it's focused for you.

jeshlee121 in Lecture 26: VR (Cont) (36)

More information about challenges in virtual reality like motion sickness can be found in this article: https://en.wikipedia.org/wiki/Virtual_reality_sickness

jeshlee121 in Lecture 19: Introduction to Color Science (17)

Is it ever possible to see the "invisible" colors?

jeshlee121 in Lecture 19: Introduction to Color Science (41)

Ren talks about this during the 3rd Annual Berkeley AR/VR Symposium re. Oz Vision!

jeshlee121 in Lecture 19: Introduction to Color Science (8)

https://www.youtube.com/watch?v=pJ6YV9-qxmg

Pretty awesome!

xiaoyankang in Lecture 15: Cameras and Lenses (44)

Here's an interesting journal article describing an exposure renderer: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0038586

xiaoyankang in Lecture 15: Cameras and Lenses (42)

Here are some amazing long-exposure photographs; the article also explains how to achieve such effects. https://create.adobe.com/2013/10/1/long_exposure_photography_of_toby_harriman.html

mnicoletti15 in Lecture 18: Physical Simulation (63)

A crazy alternative to numerical integration for simulating a physical system could be to simulate a probabilistic model from statistical mechanics that exhibits the correct dynamics in its scaling limit, and then perturb this system slightly to match a real physical scenario.

xiaoyankang in Lecture 15: Cameras and Lenses (38)

Here's an interesting guide to high-speed photography. "Points to remember: shoot in dark room; small aperture; manually focus; flashes" https://digital-photography-school.com/high-speed-photography-fundamentals/

mnicoletti15 in Lecture 18: Physical Simulation (59)

One interesting way to simulate such traffic scenarios might be to simulate random particle systems that generate similar dynamics in certain limits. e.g. see http://simulations.lpsm.paris/asep/ and http://www.math.columbia.edu/department/thera/slides/Corwin.pdf

Fascinatingly, these models exhibit beautiful and complicated mathematical and physical phenomena. Some of their properties could be used to simulate real systems.
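
For a concrete feel for what these particle systems look like, here is a minimal toy sketch in Python (my own illustration, not code from the linked pages) of the TASEP-style dynamics mentioned above: cars hop one site to the right on a ring road only when the next site is free, the exclusion rule that produces jam-like shocks in the scaling limit.

```python
import random

def tasep_step(road):
    """One sweep of a totally asymmetric simple exclusion process (TASEP):
    each car tries to hop one site to the right and succeeds only if the
    target site is empty."""
    n = len(road)
    for i in random.sample(range(n), n):   # random sequential update
        j = (i + 1) % n                    # periodic boundary: a ring road
        if road[i] == 1 and road[j] == 0:
            road[i], road[j] = 0, 1
    return road

# A ring of 30 sites, roughly half occupied by cars.
road = [1 if random.random() < 0.5 else 0 for _ in range(30)]
for _ in range(10):
    road = tasep_step(road)
    print("".join("#" if s else "." for s in road))
```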

xiaoyankang in Lecture 12: Integration (62)

"Pure diffuse surfaces are only theoretical, but they makes a good approximations of what we can find in the real world." This blog offers a detailed explanation of hemisphere uniform sampling, the part discussing rejection sampling is particularly interesting. https://blog.thomaspoulet.fr/uniform-sampling-on-unit-hemisphere/

xiaoyankang in Lecture 15: Cameras and Lenses (10)

"In practice, the vertex of this triangle is rarely located at the mechanical front of the lens, from which working distance is measured, and should only be used as an approximation unless the entrance pupil location is known." https://www.edmundoptics.com/resources/application-notes/imaging/understanding-focal-length-and-field-of-view/

kingdish in Lecture 25: Virtual & Augmented Reality (27)

I'm wondering what the differences are between headsets from different companies.

xiaoyankang in Lecture 16: Light Field Cameras (8)

https://www.quora.com/Why-is-light-field-4d This discussion offers a detailed explanation of why the light field is 4D: the full plenoptic function is 5D (3D position plus 2D direction), but in free space radiance is constant along each ray, so one dimension is redundant and a 4D light field describes our visual perception.

xiaoyankang in Lecture 26: VR (Cont) (92)

I found another interesting article (https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) which explains CNNs comprehensively with detailed diagrams.

arjunsrinivasan1997 in Lecture 26: VR (Cont) (30)

How do you build sensors that can accurately track the gaze of a person?

arjunsrinivasan1997 in Lecture 26: VR (Cont) (20)

Do current display technologies have a high enough pixel density such that VR is not limited by them?

arjunsrinivasan1997 in Lecture 26: VR (Cont) (9)

How is the lag from acquiring marker IDs accounted for?

henryzxu in Lecture 23: How GPUs work (7)

In the early days of video games and computer graphics, much of the assembly optimization was done by hand--Michael Abrash's Graphics Programming Black Book provides some incredible insight into the topic, and contains lessons about the benefits and pitfalls of optimizations that are still quite relevant today.


ayushsm in Lecture 25: Virtual & Augmented Reality (1)

I was curious why this is the definition we use for reality. It seems to me like our reality is defined by a whole host of senses, but we like to limit VR to being defined only by our sight. Is it because sight is the easiest sense by which we're able to "switch" in and out of virtual and physical reality, or is it some other reason?

ayushsm in Lecture 25: Virtual & Augmented Reality (12)

For head-mounted displays, does it only matter that we engage our sight, or is it that sight immerses us best? I'd imagine that including our other senses would result in more immersion, but what about something like taste or touch? Has there been any research or technology that tries to engage those senses? It seems like that would be the next huge step in VR.

henryzxu in Lecture 23: How GPUs work (5)

I'm not confident, but the "Tex" blocks could be texture mapping units if it's just a representation of the processor itself. VRAM is definitely an important consideration when purchasing a GPU, however. On a related note, is there a reason why VRAM is GDDR5 versus the standard DDR4 for regular RAM?

ayushsm in Lecture 26: VR (Cont) (40)

Is there ever a point at which including more and more frames results in a loss of efficiency or some tradeoff elsewhere? I understand why we would get closer to the actual value with higher frame rates, but I'd imagine we'd need either extremely expensive hardware or have to give something else up to compensate.

henryzxu in Lecture 23: How GPUs work (1)

The Unity demo from @fywu85 is incredibly impressive. I'm still in awe of the real-time aspect, although real-time when you know all the inputs beforehand seems a step removed from real-time in a true video game sense, where you can interact with the environment.

jchen12197 in Lecture 25: Virtual & Augmented Reality (58)

For accommodation, I get that the eye is focusing at objects at different distances, but what does "optical power" mean?

shivamparikh in Lecture 26: VR (Cont) (18)

I remember going to a Maker Faire in 2012 and wearing a massive backpack, putting on some crude goggles, and holding modified Wii remotes to participate in a makeshift VR experience. Just 3 years later at SIGGRAPH, I got to try the Oculus Crescent Bay headset, and it's clear that progress is moving faster than ever.

shivamparikh in Lecture 26: VR (Cont) (8)

While passive markers definitely make for a less complicated motion-capture setup compared to active ones, I think the retroreflective markers present obstacles in clearly distinguishing which marker is which, and noise in the collection could lead to motion disruptions.

shivamparikh in Lecture 26: VR (Cont) (7)

It seems that blocking the visible light spectrum reduces noise when trying to identify the tracking dots for motion capture. My main question is whether it is possible to build a sensor that excludes the visible light spectrum without the need for a filter.

randyfan in Lecture 4: Transforms (77)

Even though the study of photography and cameras is relatively new (around 200 years old), the study of optics is a very old field. In fact, an Arab scholar named Ibn al-Haytham (c. 965-1040) is credited with inventing the camera obscura. This was a precursor to the pinhole camera and demonstrates how light can project an image onto a flat surface.

randyfan in Lecture 4: Transforms (25)

Why is an affine transformation called “affine”? Affine means preserving parallel relationships. After an affine transformation, points, straight lines, and planes are preserved, and sets of parallel lines remain parallel. This kind of geometric warp is useful in correcting non-ideal camera angles.
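
As a quick illustration (a hypothetical transform of my own choosing, not one from the lecture), the following sketch checks the parallelism claim numerically: the translation cancels when you take differences of points, so parallel directions map through the same linear part and stay parallel.

```python
import numpy as np

# A 2D affine map in homogeneous coordinates: linear part plus translation.
A = np.array([[1.0, 0.5,  2.0],   # x' = x + 0.5*y + 2
              [0.0, 2.0, -1.0],   # y' = 2*y - 1
              [0.0, 0.0,  1.0]])  # last row (0 0 1) -> affine, not projective

def apply(M, p):
    q = M @ np.array([p[0], p[1], 1.0])
    return q[:2]

# Two parallel lines through different points, sharing direction d.
d = np.array([1.0, 1.0])
p0, p1 = np.array([0.0, 0.0]), np.array([3.0, -2.0])
dir0 = apply(A, p0 + d) - apply(A, p0)
dir1 = apply(A, p1 + d) - apply(A, p1)
print(dir0, dir1)  # same vector [1.5 2.] twice: the lines remain parallel
```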

randyfan in Lecture 14: Material Modeling (57)

Subsurface scattering describes the light-scattering effect that may occur as a light ray passes through a translucent or semi-translucent surface. This scattering effect is often used in video games to create realistic materials. In fact, Unreal Engine 4 offers a special subsurface-scattering shading model, specifically designed for materials like skin or wax.

michellebrier in Lecture 13: Global Illumination and Path Tracing (58)

Why is the light suddenly dark in this image, whereas it was illuminated in the direct illumination image? Is this direct + one bounce, or just the one-bounce illumination isolated?

michellebrier in Lecture 8: Meshes and Geometry Processing (45)

I wonder whether, for particular applications, it's standard to use one mesh type over the other (i.e. gaming, movies). It'd be interesting to learn more about the specific advantages/disadvantages of triangular mesh vs quad mesh.

michellebrier in Lecture 5: Texture Mapping (63)

I found this video to be a good visual introduction to mipmapping, magnification, and minification: https://www.youtube.com/watch?v=8OtOFN17jxM

michellebrier in Lecture 9: Raytracing (73)

I thought this Medium post about a custom acceleration structure project was pretty cool and gave me some insight about tackling this in projects: https://medium.com/@bromanz/how-to-create-awesome-accelerators-the-surface-area-heuristic-e14b5dec6160

michellebrier in Lecture 14: Material Modeling (14)

What's the distinction between BRDF and BSDF?

qqqube in Lecture 8: Meshes and Geometry Processing (75)

One approach is a surface-oriented one, which operates directly on the surface and treats the surface as a set of points/polygons in space. Another is parameterization-based, which is computationally more expensive but works for coarse resolutions.

qqqube in Lecture 18: Physical Simulation (35)

This article talks about the math behind the forward and backward Euler methods.

http://web.mit.edu/10.001/Web/Course_Notes/Differential_Equations_Notes/node3.html
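
As a tiny companion sketch (mine, using the standard test equation rather than anything from the linked notes), here is forward versus backward Euler on y' = -k*y; for this linear ODE the implicit update can be solved in closed form, so no root-finder is needed.

```python
import math

# y' = -k*y with y(0) = 1; exact solution y(t) = exp(-k*t).
# Backward Euler: y_{n+1} = y_n + h*(-k*y_{n+1})  =>  y_{n+1} = y_n/(1 + h*k).
k, h, steps = 10.0, 0.25, 10

y_fwd = y_bwd = 1.0
for _ in range(steps):
    y_fwd += h * (-k * y_fwd)   # explicit: slope at the current state
    y_bwd /= 1.0 + h * k        # implicit: slope at the next state

t = h * steps
print(f"exact    {math.exp(-k * t):.3e}")
print(f"forward  {y_fwd:.3e}")  # blows up here because |1 - h*k| > 1
print(f"backward {y_bwd:.3e}")  # stable for any step size on this problem
```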

qqqube in Lecture 17: Intro to Animation, Kinematics, Motion Capture (26)

A principle that wasn't discussed here is "straight-ahead versus pose-to-pose animation." Straight-ahead animation begins with the first drawing and works drawing by drawing to the end of the scene, potentially losing size, volume, and proportions. However, this loss of definite characteristics is compensated for by spontaneity, which is preferable for fast action scenes. Pose-to-pose animation, by contrast, plans out key drawings at various intervals in the scene.

qqqube in Lecture 20: Introduction to Color Science (Cont) (65)

A similar color space is HSL, which stands for hue, saturation, and lightness. The main difference between the two is that HSL is symmetric between lightness and darkness, making it arguably a bit more intuitive for color approximations than HSV.
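
Python's standard colorsys module supports both models (it calls HSL "HLS"), which makes the difference easy to poke at; for the same red hue, "half value" in HSV is a dark red while "half lightness" in HSL is the fully saturated pure red:

```python
import colorsys

# HSV: value 0.5 scales the color toward black.
print(colorsys.hsv_to_rgb(0.0, 1.0, 0.5))  # (0.5, 0.0, 0.0): dark red
# HSL: lightness 0.5 is the midpoint between black and white,
# where the hue is at its most saturated.
print(colorsys.hls_to_rgb(0.0, 0.5, 1.0))  # (1.0, 0.0, 0.0): pure red
```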

qqqube in Lecture 22: Image Processing (40)

Another way to enlarge a convolution's receptive field without extra runtime cost is to use dilated convolutions, which add an additional parameter to the 2D kernel called the dilation rate (the spacing between the pixels the filter samples).

https://www.saama.com/blog/different-kinds-convolutional-filters/
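
Here is a minimal NumPy sketch of the idea (an illustrative toy, not the article's code): the dilation rate simply strides the kernel taps apart, so a 3x3 kernel with rate 2 covers a 5x5 receptive field at the same per-output cost.

```python
import numpy as np

def dilated_conv2d(img, kernel, rate=1):
    """'Valid' 2D convolution with (rate - 1) skipped pixels between taps."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective extent
    H, W = img.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + eh:rate, j:j + ew:rate]  # strided kernel taps
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3)) / 9.0
print(dilated_conv2d(img, k, rate=1).shape)  # (5, 5): ordinary convolution
print(dilated_conv2d(img, k, rate=2).shape)  # (3, 3): 5x5 receptive field
```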

qqqube in Lecture 25: Virtual & Augmented Reality (3)

I found an interesting research paper that discusses a new image-based rendering technique called "concentric mosaic" for VR apps; the concentric mosaic is basically a 3-D plenoptic function with viewpoints constrained on a plane, allowing users to move freely in a circular region and observe parallax and lighting changes without recovering photometric/geometric scene models:

https://ieeexplore.ieee.org/document/1386244

x-fa19 in Lecture 25: Virtual & Augmented Reality (20)

I'm also curious about what the social impact of VR chat might be, assuming that it gains popularity and becomes a main method of communication. There are several cases of people leaving rude messages and such anonymously online (which they would not necessarily verbalize or say in real life), and I wonder if that might change if there exists the virtual equivalent of "face-to-face" in these interactions instead, or whether it might actually end up being worse.

x-fa19 in Lecture 25: Virtual & Augmented Reality (17)

VR painting definitely appears to be something that's an in-between for painting and sculpting; as noted in previous comments above, it seems to be very versatile, and incredibly useful for designing 3D objects efficiently, given the volumetric painting capabilities.

hannahmcneil in Lecture 25: Virtual & Augmented Reality (55)

If we're talking about actual human eyes, then yes, mis-calibration between the eyes has long-term effects. Strabismus is the condition in which the eyes do not properly align when looking at an object. It can lead to a condition in which the input from one eye is essentially ignored, meaning you can lose depth perception and other binocular cues to some extent (but perhaps it is worth it to keep consistency in visual input?) For more info: https://en.wikipedia.org/wiki/Strabismus

x-fa19 in Lecture 25: Virtual & Augmented Reality (16)

On the mention of the possibility of the virtual camera's movements being uncomfortable: "virtual reality motion sickness" is something that definitely exists and has been observed. More info here: https://www.wareable.com/vr/vr-headset-motion-sickness-solution-777

hannahmcneil in Lecture 25: Virtual & Augmented Reality (17)

To add something else to the comment above, it would be really cool to combine this sort of interactive tool with stuff like 3D models and simulation; for example, you design 3D clothes which can then be placed onto 3D models and simulated with different kinds of material to see what works best. Probably extremely complicated, but intriguing nonetheless!

hannahmcneil in Lecture 26: VR (Cont) (4)

I was reading about these markers, and part of the algorithm involves detecting candidate markers in the environment and then accepting or rejecting them. So it seems the speed/lag of such a system would rely heavily on the specifics of the environment; for example, a lot of square shapes around (like a particular wallpaper pattern) could cause much more lag, or even corruption if the patterns were too similar.

michellebrier in Lecture 21: Image Sensors (86)

@orkun1675 I think this is because a Poisson photon count with mean lambda has standard deviation sqrt(lambda). When lambda is small (i.e. the photo is darker), sqrt(lambda) is large relative to lambda, causing significant noise. Increasing exposure increases lambda faster than sqrt(lambda), so the relative noisiness decreases.
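
A quick simulation makes this concrete (a sketch of my own, not from the lecture): photon counts are Poisson, so the signal-to-noise ratio is lambda/sqrt(lambda) = sqrt(lambda), which grows as the exposure increases.

```python
import numpy as np

rng = np.random.default_rng(0)
for lam in (4, 16, 64, 256):
    counts = rng.poisson(lam, size=100_000)  # simulated photon counts
    snr = counts.mean() / counts.std()       # mean / stddev
    print(f"lam={lam:4d}  measured SNR={snr:6.2f}  sqrt(lam)={np.sqrt(lam):6.2f}")
```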

cedricnixon in Lecture 26: VR (Cont) (48)

I wonder what additional kinds of noise get introduced when you get a multiple camera array like this. Counteracting that would be an interesting problem.

cedricnixon in Lecture 26: VR (Cont) (42)

It is certainly feasible to see within the next few years. Perhaps an innovative use case is needed to capture the public interest.

cedricnixon in Lecture 26: VR (Cont) (32)

I wonder if we can still think of these as thin lenses.

michellebrier in Lecture 14: Material Modeling (36)

To follow up on the point about the surface normals, anisotropic surfaces have unequal physical properties along different axes. This helped me understand anisotropic vs isotropic a bit more: http://www.neilblevins.com/cg_education/aniso_ref_real_world/aniso_ref_real_world.htm

michellebrier in Lecture 17: Intro to Animation, Kinematics, Motion Capture (29)

I thought this article by Adobe on the different ways to implement keyframe interpolation with Adobe After Effects (https://helpx.adobe.com/after-effects/using/keyframe-interpolation.html) was a pretty cool example of how this is actually done in practice.

michellebrier in Lecture 17: Intro to Animation, Kinematics, Motion Capture (48)

Pretty cool article on this history of motion-capture movies: https://screencrush.com/motion-capture-movies/

horrorsheep in Lecture 16: Light Field Cameras (47)

Since a light field camera is effectively a multi-camera array, there may be a tradeoff between the cost of adding lenses to the array and getting a more accurate light field.

horrorsheep in Lecture 17: Intro to Animation, Kinematics, Motion Capture (48)

Motion capture is good at capturing faces, fingers, and subtle expressions.

horrorsheep in Lecture 26: VR (Cont) (21)

I remember that PlayStation requires its VR software to run at at least 60 fps to ensure a good VR experience.

hershg in Lecture 26: VR (Cont) (15)

Slide 22 of this lecture actually raises and answers questions relevant to ^

hershg in Lecture 26: VR (Cont) (17)

Note that Google Tango has been retired in favor of ARCore.

hershg in Lecture 26: VR (Cont) (11)

How can we add stability and reduce errors in our sensor readings that all these calculations are based on? What additional sensors might we use to improve the quality of our pose + roll/pitch/yaw movement readings?

hershg in Lecture 26: VR (Cont) (15)

It would be interesting to see a study of the "rolling" effects/inaccuracies in these sensor measurements due to spatial movements by the wearer. Just like when taking a photo from a camera that sequentially captures from a left to right pattern gives us a "slanted" sensor image, would we see analogous effects here?

hershg in Lecture 26: VR (Cont) (4)

Markers like these are frequently used for sensor calibration and orientation as well, since they present easy-to-recognize patterns for cameras.

hershg in Lecture 26: VR (Cont) (3)

One interesting side effect of Google Cardboard/Daydream-type technologies is that you can directly see on the smartphone what your 2D screen is rendering in order to give your eyes the immersive VR perception. It basically looks like the smartphone screen is split in half, with each half looking like a "fish eye" view of the scene in front of the virtual camera, with a minor difference in position for each eye (giving depth perception).
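
A minimal sketch of the split-screen part (my own toy, with a typical IPD value assumed; the barrel pre-distortion that produces the fish-eye look is omitted): the scene is rendered twice with the virtual camera shifted half the interpupillary distance to each side, and each result is drawn into one half of the screen.

```python
import numpy as np

IPD = 0.064  # meters; a typical interpupillary distance (assumed value)

def eye_view(world_from_head, eye_sign):
    """Shift the head pose sideways by +/- IPD/2 to get each eye's view
    matrix (view = inverse of the eye's world-space pose)."""
    offset = np.eye(4)
    offset[0, 3] = eye_sign * IPD / 2.0
    return np.linalg.inv(world_from_head @ offset)

head_pose = np.eye(4)  # head at the origin, looking down -z
left_view = eye_view(head_pose, -1.0)
right_view = eye_view(head_pose, +1.0)
print(left_view[0, 3], right_view[0, 3])  # +/-0.032: the per-eye parallax
```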

Caozongkai in Lecture 26: VR (Cont) (28)

It does not always have to be the center; the new HTC Vive will actually track your eye movement and focus rendering on the part you are looking at.

Caozongkai in Lecture 26: VR (Cont) (20)

In VR, perceiving the same resolution as a 4K screen actually requires a much higher display resolution. I think the forum below discusses it really well: https://forums.oculusvr.com/community/discussion/61106/what-resolution-per-eye-in-vr-to-make-it-look-like-a-1080p-monitor

Caozongkai in Lecture 26: VR (Cont) (14)

You probably won't notice it in a demo, but when it is actually running in a quiet room, the spinning wheels make some noise, and the shaking amplifies the sound.

Caozongkai in Lecture 26: VR (Cont) (12)

From my personal experience, this setup provides the best tracking quality, especially because the IR signal is not blocked by the human body since it tracks from two sides. A single lighthouse would also work, but with lower quality.

Caozongkai in Lecture 26: VR (Cont) (7)

If it only detects IR, does that mean it is also capable of capturing motion, not just tracking the headset?

Caozongkai in Lecture 26: VR (Cont) (3)

I have a Google Cardboard, an HTC Vive, and a Samsung Galaxy, and I am pretty sure that Cardboard definitely has the worst experience. But its cost is so low that I think people should really try it before considering buying a real headset.

zehric in Lecture 26: VR (Cont) (73)

I feel like the answer is yes, and the more cameras we have, the better job we can do.

zehric in Lecture 26: VR (Cont) (21)

@tyleryath Maybe someday when we can directly connect our brains to computers, all our senses can be fully immersed in VR. I think for now, though, even if latency and resolution are taken care of, we need a more intuitive input method for experiences to be better.

zehric in Lecture 25: Virtual & Augmented Reality (14)

It's interesting to remember that AR glasses have largely failed in the market thus far; I wonder if maybe Apple can change that!

AnastasiaMegabit in Lecture 25: Virtual & Augmented Reality (20)

This is along the lines of what Facebook has been talking about doing. https://www.wired.com/story/facebook-oculus-codec-avatars-vr/ I am also very curious about the transmission of facial expressions and body language in these more serious situations as the person above mentioned. The video of the face chat in that link is interesting and made me think about the difficulty of the problem of getting the microexpressions right.

AnastasiaMegabit in Lecture 25: Virtual & Augmented Reality (13)

So would Google Glass have been like augmented reality then? I'm curious why they got so much pushback, to the point that they stopped selling their product, when there is clearly still interest in AR. This picture specifically would hit some of the same markers the public complained about with Google Glass.

tyleryath in Lecture 26: VR (Cont) (42)

How long will it take for these changes to get implemented? I hear conflicting opinions on whether or not VR will be the next "platform" similar to what mobile phones did in the early 2000s. I would imagine these improvements would need to be implemented in order for that to happen. Is it feasible within the next few years?

tyleryath in Lecture 26: VR (Cont) (92)

If anyone is interested in getting a clear visual understanding of what's going on in a CNN, check out this video: https://www.youtube.com/watch?v=BFdMrDOx_CM

tyleryath in Lecture 26: VR (Cont) (21)

What are some other requirements for a truly immersive VR experience?

serser11 in Lecture 25: Virtual & Augmented Reality (23)

One eye captures a roughly circular area. Because we have two overlapping fields of vision for depth perception, between our two eyes we capture a roughly elliptical shape.

serser11 in Lecture 25: Virtual & Augmented Reality (23)

In computer games and modern game consoles the FOV normally increases with a wider aspect ratio of the rendering resolution.

serser11 in Lecture 25: Virtual & Augmented Reality (23)

The visual field of the human eye spans approximately 120 degrees of arc. However, most of that arc is peripheral vision.

serser11 in Lecture 25: Virtual & Augmented Reality (23)

Our actual binocular FOV is 114 degrees: binocular vision covers 114 degrees of the visual field horizontally in humans, while the remaining peripheral 40 degrees on each side have no binocular vision because only one eye can see those parts of the visual field.

eliot1019 in Lecture 26: VR (Cont) (20)

This isn't really scientific, but I thought it was funny: https://edgy.app/how-lab-animals-are-using-vr

eliot1019 in Lecture 26: VR (Cont) (21)

Interesting article that says adding a virtual nose reduces motion sickness: https://www.wired.com/2015/04/reduce-vr-sickness-just-add-virtual-nose/

ellenluo in Lecture 25: Virtual & Augmented Reality (46)

I'm curious if it's possible to wear glasses with VR headsets since most people don't have 20/20 vision. Would this affect how the headset focuses/computes what images to show?

ellenluo in Lecture 26: VR (Cont) (30)

Do any VR headsets today incorporate foveated rendering? Or is this a new concept in VR research? I'm curious as to the side-effects of this if the eye moves very fast (perhaps too fast for the VR headset to respond).

ellenluo in Lecture 26: VR (Cont) (41)

I'm also confused by this. This almost seems worse as it would appear like the image is flashing. Is there a certain frame rate that prevents this from being noticeable?

ellenluo in Lecture 26: VR (Cont) (39)

Is this effect noticeable enough on modern VR headsets to cause discomfort? I'm curious about what refresh rates would cause this to be distracting/uncomfortable for the user.

aparikh98 in Lecture 25: Virtual & Augmented Reality (9)

Is this function just used to theorize about "the set of everything we see"? It seems too data- and computation-intensive to compute or even approximate.

aparikh98 in Lecture 26: VR (Cont) (21)

VR systems range dramatically in hardware quality, from full gaming computers with dedicated graphics (HTC Vive) to just your phone (Google Cardboard). Perhaps, where latency is not as critical, this allows for smaller systems without dedicated hardware.

aparikh98 in Lecture 25: Virtual & Augmented Reality (26)

This is a really cool experience when used with a phone that has a nice display (newer Galaxies work well), especially for a short period of time. It's amazing how much a few lenses and some software can do!

quetzacal in Lecture 26: VR (Cont) (18)

It's crazy to see how far VR has come from the earliest immersive technology, such as the Sensorama in the '60s, which attempted to give users the experience of riding a motorcycle by simulating noise, wind, smell, and view. There is some pretty interesting current research being done in this kind of immersive technology, called Augmented Reality Flavors, where people can change the taste of something they're eating. Here's a demo from the University of Tokyo: https://www.youtube.com/watch?v=qMyhtrejct8&feature=youtu.be

sandykzhang in Lecture 26: VR (Cont) (82)

I helped work on the YI Halo (https://vr.google.com/jump/) a couple of years ago. The cameras were basically just (cheap knockoff) GoPros in a ring, with the video capture tightly synchronized; not that much fancy stuff going on hardware-wise!