Comments

jananisriram in Lecture 11: Radiometry and Photometry (57)

How do we model the directions in which light is emitted from surfaces?

jananisriram in Lecture 11: Radiometry and Photometry (50)

Are there shapes other than spheres or hemispheres upon which light can be projected? For example, how do we think about irregularly shaped light shades?

angelinelykk in Lecture 23: Virtual Reality (53)

It operates passively, meaning it doesn't require active monitoring of the eyes' movements. Each eye receives a slightly different view, tailored to its unique perspective. The eyes naturally rotate towards each other (converge) or away from each other (diverge) to adjust for objects that are near or far, aligning the images on the retina to create a sense of depth.

angelinelykk in Lecture 23: Virtual Reality (49)

Stereo vergence involves the coordination of both eyes to focus on an object and align their visual axes toward the same point in space. It is a form of visual processing. This involves eye movement, depth perception, brain processing and focus adjustment.

colinsteidtmann in Lecture 19: Intro To Color Science (141)

I was surprised to learn that average displays only cover 35% of visible colors according to this article, https://www.asus.com/content/understanding-color-gamut-specs-on-laptop-displays/. But you can definitely do better than that. This site, https://www.rtings.com/monitor/tests/picture-quality/color-gamut, lets you see the displays with the largest coverage of visible colors, and they're all 98% and above.

colinsteidtmann in Lecture 18: Intro to Animation (36)

Animation seems so labor intensive that I decided to do a little more reading. According to this article, https://sites.psu.edu/thebeautyofanimation/2018/03/20/keys-and-in-betweens-the-traditional-animation-process/, animations typically had as many drawings as the frame rate they ran at, which was typically 24 drawings and frames per second. That's about 1,440 drawings per minute! I thought things would be much faster by now, but I looked up how long animation takes these days, and according to replies in this reddit post, https://www.reddit.com/r/animationcareer/comments/hounxm/how_long_does_it_take_to_animate_one_minute_of/, a good rule of thumb is "a month per minute." This bewilders me; am I missing something, or does it actually take that long to make animations in 2024?

colinsteidtmann in Lecture 23: Virtual Reality (61)

I was curious how VR headsets can track the headset's position in a room, so I read this article: https://en.wikipedia.org/wiki/Pose_tracking. I learned that having two views, i.e. from two different cameras, can replicate stereoscopic human vision in the sense that the two perspectives give us a clue about how far away objects are. I also learned a little more about inside-out tracking, which not only uses cameras around the outside of the VR device but also lasers, letting it send signals and measure distances based on how long the laser light takes to reach an object and be reflected back. This method seems a lot simpler to implement than just using cameras, which I'd imagine have pretty complex algorithms behind them.

colinsteidtmann in Lecture 23: Virtual Reality (109)

I was interested in the stitching software behind 360-degree and 180-degree cameras, so I read this article from Meta: https://creator.oculus.com/getting-started/getting-started-stitching/. I learned that most manufacturers will provide some limited software, but of course this won't always be the best, and there are third-party tools like Mistika that you can use. Most interestingly, cameras that shoot 180 degrees with fisheye lenses don't need to be stitched at all; they just need to be de-warped.

colinsteidtmann in Lecture 22: Image Processing (8)

@weinatalie, today I learned the reason why we're less sensitive to chromaticity differences than luminance differences. According to https://faculty.washington.edu/chudler/retina.html, our rods (in the eye) detect luminance while our cones (in the eye) detect chromaticity, and more importantly, we have 120 million rods while only 6 million cones!

Liaminamerica2 in Lecture 22: Image Processing (23)

Edge detection is useful as it can detect objects and extract text and features. Luckily, if anyone needs the Canny, Sobel, Laplacian, or Scharr edge-detection implementations in Python, they are all available as part of the cv2 package. Each implementation has its strengths and weaknesses, so you must consider the task you're trying to accomplish before picking one.
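For reference, a minimal sketch of what those cv2 calls might look like (assuming a grayscale image named input.png; the thresholds and kernel sizes here are arbitrary starting points, not recommendations):

```python
import cv2

# Edge detectors operate on single-channel input, so load in grayscale.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Canny: gradient magnitude + hysteresis thresholding (lower, upper).
edges = cv2.Canny(img, 100, 200)

# Sobel: first derivative in x (dx=1, dy=0) and y (dx=0, dy=1).
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Scharr: a 3x3 derivative kernel with better rotational symmetry than Sobel.
scharr_x = cv2.Scharr(img, cv2.CV_64F, 1, 0)

# Laplacian: second derivative, sensitive to noise, so blur lightly first.
lap = cv2.Laplacian(cv2.GaussianBlur(img, (3, 3), 0), cv2.CV_64F)
```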

JunoLee128 in Lecture 22: Image Processing (56)

Does the PatchMatch algorithm not need to learn about other similar pictures first? (Or are certain parameters essentially trained through data?) I'm interested in the differences between data-driven and non-data-driven approaches.

JunoLee128 in Lecture 22: Image Processing (54)

It's interesting how some of these "synthetic" textures rely on a very simple algorithm (compared to other DL approaches) but seem very organic and pleasant to the human eye. Even though the algorithm doesn't "logic" about the image as a whole, convolution can get you very far. Nice!

JunoLee128 in Lecture 22: Image Processing (30)

It's interesting how we abstract away this filter (and all filters, including box, Sobel, etc.) into a simple numerical matrix (no need to calculate the PDF or exponentiate on every operation). Very efficient!
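As a concrete sketch of that idea (using OpenCV; the kernel size and sigma are arbitrary), the Gaussian PDF is evaluated once into a small matrix, and every subsequent pixel is just multiply-adds against those stored numbers:

```python
import cv2

# Evaluate the Gaussian once into a 5x1 kernel (the only exponentiation), then
# build the 2D kernel as an outer product, since the Gaussian is separable.
k1d = cv2.getGaussianKernel(ksize=5, sigma=1.0)
kernel = k1d @ k1d.T

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# From here on, filtering is pure multiply-adds against the precomputed matrix.
blurred = cv2.filter2D(img, -1, kernel)
```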

JunoLee128 in Lecture 22: Image Processing (15)

I think the blockiness is (said to be) "typical" of JPEGs (at least in the past, when bandwidth was more limited). It's harder for me to notice the color-gradient (chroma squashing?) effects on the non-luma channels, though.

JunoLee128 in Lecture 22: Image Processing (3)

It's interesting how this combines numerical compression theory (Shannon's theorems, etc.) with human perception science (which seems more "subjective" but offers better results). I like how we can combine these concepts into such an oft-used algorithm.

myxamediyar in Lecture 22: Image Processing (46)

Responding to GarciaEricS, I think that your filter would need to be quite complex to avoid damaging the image. I imagine that hair is rather high frequency, because of how thin it is. You would need to apply a low-pass filter, but also preserve the sharper edges of the face! I am not sure how you would do that...
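One standard tool for exactly this "smooth, but keep the sharp edges" goal is the bilateral filter (not necessarily what GarciaEricS had in mind; the file name and parameter values below are arbitrary):

```python
import cv2

img = cv2.imread("portrait.png")  # hypothetical input image

# The bilateral filter averages spatially nearby pixels (sigmaSpace) but only
# those with similar color (sigmaColor), so strong edges like the outline of a
# face survive while fine high-frequency detail such as hair gets smoothed.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
```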

myxamediyar in Lecture 20: Intro to Color Science II (45)

I think the coloring scheme in shadows is very fascinating, particularly when you realize how often artists do these 'tricks'. In my small graphic design gig, I remember coloring shadows with rather similar colors and just surrounding them with brighter colors. This worked perfectly and saved ink!

myxamediyar in Lecture 20: Intro to Color Science II (38)

The fact that the same color appears so different when surrounded by other colors is very interesting! I think that this is why logos and stickers tend to have a thick outline - to have a uniformity of presentation

myxamediyar in Lecture 28: Conclusion (10)

Wow, this looks really cool! I wonder if the difference in the images is due to how Blender handles the BTDF - the bidirectional transmittance distribution function - which is primarily concerned with transparent materials. However, AFAIK, BSDF = BTDF + BRDF, which it seems like you also implemented.

StefanPham17 in Lecture 21: Image Sensors (53)

Something I was curious about was the need for these filters. Since the photoelectric effect is the main driver of electrons being jettisoned off the metal as a signal, wouldn't the number of electrons ejected as a function of the light's frequency directly inform the camera of the exact color frequency hitting the sensor? Is this possibly due to the rate of electron ejection being too variable or difficult to record?

StefanPham17 in Lecture 23: Virtual Reality (5)

I've seen mixed usages of the words Augmented Reality and Virtual Reality with respect to passthrough imaging, particularly when the Apple Vision Pro was unveiled. It'd be nice to pin down an exact definition, but intuitively, I feel what Apple has with this passthrough imaging should be classified as purely VR. The important difference is that for AR, the light that's being received is directly from the surroundings, rather than a recreation of it. I remember when the Google Glass was released and found it fascinating as the first mainstream example of AR, but it didn't really pick up then.

henrykhaung in Lecture 20: Intro to Color Science II (166)

CIELAB basically takes the measurements from CIEXYZ and turns them into numbers that make sense to humans, i.e. numbers that represent colors the human eye can see. This makes CIELAB a perceptually uniform color space.
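For the curious, a sketch of the standard XYZ-to-CIELAB conversion (assuming a D65 white point; this is the textbook formula, not something from the slides):

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):  # assumed D65 white
    """Convert CIEXYZ to CIELAB, whose axes are roughly perceptually uniform."""
    def f(t):
        d = 6 / 29
        # Cube root above a small cutoff, a linear segment below it.
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    L = 116 * fy - 16       # lightness: 0 (black) to 100 (white)
    a = 500 * (fx - fy)     # green (-) to red (+)
    b = 200 * (fy - fz)     # blue (-) to yellow (+)
    return L, a, b

print(xyz_to_lab(41.24, 21.26, 1.93))  # sRGB pure red: L* is about 53
```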

henrykhaung in Lecture 23: Virtual Reality (99)

This is such a genius solution. If the VR headset cannot generate new frames fast enough, instead of rendering them, we simply take the last frame and adjust it a bit to match your current head position

jefforee in Lecture 20: Intro to Color Science II (121)

Metamers are very impressive and important, as they serve as a solution to color reproduction. They're probably used in so many digital aspects of our lives to "trick" us into seeing a certain color.

jefforee in Lecture 20: Intro to Color Science II (141)

This illustrates the impressive capabilities of various technologies and their color rendering abilities. Equally noteworthy is the fact that the sRGB standard can accurately reproduce many real-life colors without us questioning its ability to do so.

jefforee in Lecture 20: Intro to Color Science II (99)

Negative values might be the solution to this issue on paper, but in real life this means our displays are not able to achieve that particular color; it falls outside our color gamut.

jefforee in Lecture 23: Virtual Reality (118)

It's really impressive how we are able to combine two omni-directional stereo approximations into one cohesive approximation. This allows us to view in all directions without gaps.

KevinXu02 in Lecture 14: Material Modeling (44)

I think there are different algorithms to handle hair under different situations, and most of them are costly. For instance https://www.cs.columbia.edu/cg/liquidhair/

KevinXu02 in Lecture 14: Material Modeling (39)

One of the simplest ways is to layer textures to create the illusion of fur. Since fur strands are thin and transparent, they are always hard to render correctly.

KevinXu02 in Lecture 23: Virtual Reality (66)

There's research on using fewer cameras, no markers, or acceleration sensors alone to capture human motion, and some of it has achieved good results.

pranavkolluri in Lecture 14: Material Modeling (41)

With hair, would you be using tessellated strands, or an entirely different modeling method? I remember that for a long time, hair ended up being a ton of sheets and the like stacked on top of each other as opposed to individual strands.

KevinXu02 in Lecture 23: Virtual Reality (51)

Some VR devices can adjust the distance between the lenses to match your eyes. And I think one of the most important parts is that our brain has the ability to adapt to what we see in these devices, even though it's slightly different.

pranavkolluri in Lecture 14: Material Modeling (17)

I think in general the idea is that you do use microfacet models, since they give you a lot more room to tune the shader.

Rogeryu1234 in Lecture 17: Physical Simulation (73)

We can show this using \frac{d\rho}{dt} = 0: there is no change in the mass density following the flow. But the mass density is a function of space and time, so by the chain rule we obtain:

\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \vec{u} \cdot \nabla \rho = 0

However, from the continuity equation we also have:

\frac{\partial \rho}{\partial t} = - \nabla \cdot (\rho \vec{u}) = - \vec{u} \cdot \nabla \rho - \rho \nabla \cdot \vec{u}

Combining the two, we conclude that for an incompressible fluid the divergence of the velocity field is zero: \nabla \cdot \vec{u} = 0.

Reference from a question in Physics 105 HW13 Sp24
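A quick numerical sanity check of that conclusion (my own numpy sketch with a made-up vortex field, not from the homework): for a divergence-free field, the finite-difference divergence should vanish everywhere.

```python
import numpy as np

# Sample the 2D vortex field u = (-y, x), which is divergence-free.
ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
ux, uy = -ys, xs

# Finite-difference divergence: d(u_x)/dx + d(u_y)/dy.
h = xs[0, 1] - xs[0, 0]
div = np.gradient(ux, h, axis=1) + np.gradient(uy, h, axis=0)
print(np.abs(div).max())  # ~0, up to floating-point error
```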

Rogeryu1234 in Lecture 19: Intro To Color Science (78)

There is a perennial question: why is the sky blue while clouds are white? This slide answers it from the perspective of color science! It is all illustrated by the spectrum: for the sky, the spectral power distribution has more power around the blue wavelengths, while the sun's light is white, containing all the visible colors.

Rogeryu1234 in Lecture 19: Intro To Color Science (77)

It is interesting that in Physics 112 we learned about SPDs as well when we talked about blackbody radiation. This is very important in color science and color reproduction!

Rogeryu1234 in Lecture 19: Intro To Color Science (58)

This is a very interesting example that illustrates how the appearance of a color depends on the colors around it. If we stare at the black dot, we see the castle better!

wilrothman in Lecture 17: Physical Simulation (2)

It was interesting to see in this class how basic physics and approximation can be used to model very complicated physical problems like cloth and hair. I especially found self-collision quite interesting.

wilrothman in Lecture 2: Drawing Triangles (65)

It was very interesting to see in this class how a seemingly solution-less problem (aliasing) can be solved using physics and psychology (anti-aliasing). I really enjoyed this class, thank you Prof Ren :)

wilrothman in Lecture 19: Intro To Color Science (138)

It is interesting to see such a natural thing be quantified with mathematics using very standard linear algebra techniques. I found the exam questions about this fun.

wilrothman in Lecture 8: Mesh Processing & Geometry Processing (33)

I seriously found this the most interesting topic in all of CS 184. To start, I find math and data structures interesting in general, and it's really cool to see a simple data structure for an artistic technique that can get very complicated for both the programmer and artist. I did find mesh subdivision in Homework 3 really hard.

jinweiwong in Lecture 23: Virtual Reality (110)

@kalebdawit I think the idea is to take a single image and only during post processing do we artificially alter the stereo baseline.

jinweiwong in Lecture 23: Virtual Reality (96)

@helenawsu Maybe when the pixels do emit light, they have to emit brighter light to compensate for all the time they're off.

noah-ku in Lecture 6: The Rasterization Pipeline (20)

Here we see an explanation of the concept of diffuse reflection, a fundamental aspect of how light interacts with surfaces. Diffuse reflection occurs when light hits a surface and scatters uniformly in all directions, which means the color of the surface appears the same from any viewing angle. This property is described by Lambert's cosine law, indicating that the intensity of light reflected is proportional to the cosine of the angle (θ) between the incident light and the surface normal (n). The illustrations demonstrate this law by showing how a cube's top face receives a certain amount of light directly, while a rotated cube intercepts less light due to the angle, illustrating the cosine relationship between the angle of incidence and the intensity of the reflected light. This principle is vital for creating realistic renderings of objects in computer graphics.

noah-ku in Lecture 6: The Rasterization Pipeline (26)

Here it explains how shiny surfaces produce highlights by reflecting light toward the camera when the half vector (h), which bisects the angle between the viewer direction (v) and light direction (l), is close to the surface normal (n). This proximity is quantified using the dot product, yielding the intensity of specular reflection based on the light's distance and the surface's shininess. The formula given includes a specular coefficient and the angle of incidence to the power of 'p', which controls the shininess, with larger values creating tighter, more concentrated highlights, simulating a glossier surface. This model is a cornerstone in computer graphics, used to add realism to rendered images by simulating the way light interacts with different materials.
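Putting the diffuse and specular slides together, a minimal sketch of the full Blinn-Phong shading computation at a single point (numpy; the vectors and coefficient values are arbitrary examples):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(n, v, l, r, I, ka, kd, ks, p):
    """Local shading: ambient + Lambertian diffuse + Blinn-Phong specular.
    n, v, l are unit vectors (normal, view, light); r is distance to the light;
    I is light intensity; p is the shininess exponent."""
    falloff = I / (r * r)                     # inverse-square light falloff
    diffuse = kd * falloff * max(0.0, n @ l)  # Lambert's cosine law
    h = normalize(v + l)                      # half vector between view/light
    specular = ks * falloff * max(0.0, n @ h) ** p
    return ka + diffuse + specular            # ka: constant ambient term

n = np.array([0.0, 0.0, 1.0])
v = normalize(np.array([0.0, -1.0, 1.0]))
l = normalize(np.array([1.0, 1.0, 1.0]))
print(blinn_phong(n, v, l, r=2.0, I=10.0, ka=0.05, kd=0.6, ks=0.3, p=64))
```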

noah-ku in Lecture 6: The Rasterization Pipeline (21)

The slide presents the concept of light falloff, illustrating how the intensity of light diminishes with distance. According to the inverse square law, the intensity at a distance 'r' is proportional to 1/r^2, indicating that light spreads out as it travels away from the source, reducing its intensity on surfaces.

noah-ku in Lecture 6: The Rasterization Pipeline (19)

This slide introduces the concept of local shading in computer graphics, which is a technique for calculating the light that is reflected from a surface towards the camera. The inputs for this computation include the viewer's direction (v), the surface normal (n), the direction of the light source (l), and the surface parameters, such as color and shininess. These factors are essential to determine how light interacts with objects in a scene to produce realistic visuals. The diagram illustrates how the angle of light and view direction relative to the surface normal affects the perceived brightness and shading of the surface. This process is fundamental for rendering images that have depth and convey the texture of materials.

rishiskhare in Lecture 23: Virtual Reality (21)

It seems the reason VR headsets have a fan is that the unit heats up quickly. It makes me concerned whether having that much heat and radiation so close to the face and eyes is safe for prolonged periods of use. If VR headsets could be made to run more heat-efficiently, perhaps the fan could be removed from the unit and the headset made lighter.

rishiskhare in Lecture 12: Monte Carlo Integration (47)

It seems that light source area sampling considerably improves the lighting effect with the same number of random points as solid angle sampling. Is light source area sampling also a cheaper operation/comparable in complexity as sampling the solid angle?

rishiskhare in Lecture 23: Virtual Reality (4)

Is there any way to combine AR and VR capabilities in one headset? I.e. is there a way to make AR more immersive by mirroring the surroundings in a VR headset? It seems like this might make AR more immersive, though it might also lead to accidents if the VR display doesn't accurately depict the surroundings or depth isn't accurate and people try to move around.

rishiskhare in Lecture 23: Virtual Reality (4)

The AR headsets seem so much smaller and less cumbersome than the VR headsets. VR seems to require a more immersive experience, though I'd imagine that VR technology might pick up more relevance and popularity if they were made lighter. From checking on online sources, AR glasses are typically lighter, so I wonder which additional technologies in VR result in more of a heavier headset compared to AR, besides the immersion of a headset that fully covers the eyes.

RyanAlameddine in Lecture 23: Virtual Reality (41)

It seems that the sum of cones + rods is lowest a few mm to the left and right of the fovea. Does this mean we have a ring of weaker eyesight around the fovea?

RyanAlameddine in Lecture 20: Intro to Color Science II (122)

I'm curious as to how we could characterize the set of all metamers of a particular distribution. If we were to correctly arrange the complete set of metamers along some axis, would the constructed surface be continuous?

helenawsu in Lecture 23: Virtual Reality (124)

I understand that light is constant along the ray so we potentially eliminate the z direction. But what about blocking / glass?

helenawsu in Lecture 23: Virtual Reality (96)

I wonder whether the low-persistence would cause the overall image to look darker to the human eye, since the total emitted power is decreased.

jananisriram in Lecture 9: Ray Tracing & Acceleration Structures (12)

Recursive ray tracing seems to help us with recursive bounces of light, which better render an image by adding the reflections of light between different objects. It's interesting that we can use this technique to even model the reflection of different wall colors, for example, on an object, like we implemented in the homework.

ninjab3381 in Lecture 21: Image Sensors (82)

https://hst-docs.stsci.edu/acsdhb/chapter-4-acs-data-processing-considerations/4-3-dark-current-hot-pixels-and-cosmic-rays#id-4.3DarkCurrent,HotPixels,andCosmicRays-4.3.24.3.2CCDHotPixels

I found a cool article that describes how hot pixels can arise not just because of manufacturing defects but also due to radiation in space hitting the HST! It describes a super-dark subtraction method which is similar to Solution #2 in the slideshow. They also try to classify pixels as warm or hot depending on the level of dark current. The sensors undergo an annealing process once a month, which reduces the population of hot pixels - pretty interesting!

ninjab3381 in Lecture 21: Image Sensors (81)

https://www.photometrics.com/learn/advanced-imaging/dark-current#:~:text=Hot%20pixels%20have%20a%20higher,backgrounds%20values%20than%20other%20pixels.

Just wanted to make some clarifications after reading this article. Dark current is basically caused by thermally excited electrons generated in the pixels. Hot pixels are just pixels with higher-than-average dark current; I think this is because, when there is a manufacturing defect, the electrons generated get stuck and leak into the well. The article also describes how dark-current noise is just the square root of the dark current, similar to shot noise.

ninjab3381 in Lecture 23: Virtual Reality (36)

https://www.sas.upenn.edu/~scottds/vision/colorvis.htm From this article I read that the overlap of two spectral response curves is what lets us differentiate wavelengths based on color. If only one type of cone cell is activated, it's difficult for the eye to tell the wavelength; but two different cone cells being activated lets us distinguish a larger spectrum of wavelengths, because their combined responses form different colors!

andrewn3672 in Lecture 13: Global Illumination & Path Tracing (84)

Russian roulette is a simple and easy-to-understand termination scheme for global illumination. Because it's impossible to know when we should stop bouncing light, the easiest way is just to have a random probability of terminating the bounce recursion.
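A sketch of how that might look inside a path tracer's bounce loop (intersect_scene and sample_bsdf are hypothetical helpers, and 0.7 is an arbitrary continuation probability):

```python
import random

def trace_radiance(ray, depth, continue_prob=0.7):
    hit = intersect_scene(ray)        # hypothetical scene-intersection routine
    if hit is None:
        return 0.0
    radiance = hit.emission

    # Flip a weighted coin: terminate recursion with probability 1 - continue_prob.
    if random.random() > continue_prob:
        return radiance

    bounce_ray, weight = sample_bsdf(hit)  # hypothetical BSDF sampling
    # Divide by the continuation probability so the estimator stays unbiased.
    radiance += weight * trace_radiance(bounce_ray, depth + 1) / continue_prob
    return radiance
```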

andrewn3672 in Lecture 20: Intro to Color Science II (141)

Each device having a different color gamut makes a lot of sense, since the hardware determines what colors can be produced and displayed. It is always a pain to calibrate colors to my liking when I get a new monitor.

andrewn3672 in Lecture 9: Ray Tracing & Acceleration Structures (28)

Bounding volumes is a super simple and intuitive way to understand how to speed up ray tracing. Instead of checking a bunch of empty space, we can easily narrow down on what we actually need to intersect with.

andrewn3672 in Lecture 5: Texture (64)

I originally didn't understand the purpose of mip-mapping, but after looking more into it and completing the assignments, I find it to be very smart and intuitive. We essentially just make sure we are mapping the texture at the correct ratio.

brandogn in Lecture 22: Image Processing (54)

These ideas remind me a lot of another algorithm called "wave function collapse" which is inspired by quantum mechanics. I'm not entirely sure how it works, but I think it's often used to procedurally generate worlds based on an image and/or rules for what pixels/blocks can be adjacent. Here's a blog post I found on it: https://robertheaton.com/2018/12/17/wavefunction-collapse-algorithm/

brandogn in Lecture 23: Virtual Reality (124)

I never quite understood the part about how everything simplifies to 4D, but I had an aha moment during the exam when I realized that a light ray can be defined by the two points where it passes through two planes. I think it's pretty cool to realize how useful/powerful it is to capture a 4D light field.

nickjiang2378 in Lecture 23: Virtual Reality (84)

Interesting idea. It'd give the appearance of depth and would probably have to factor in peripheral vision, because a person can see outside of where they're focusing; it should just be more blurry.

nickjiang2378 in Lecture 23: Virtual Reality (37)

One thing I wonder is how the specific color pixels used in screens (ex. RGB) are chosen. Metamerism allows these pixels to represent the same color with a simpler SPD. But is there some specific benefit to red, green, and blue?

DTanxxx in Lecture 22: Image Processing (54)

Found this article outlining an "example-based synthesis" technique for texture synthesis: https://medium.com/embarkstudios/texture-synthesis-and-remixing-from-a-single-example-faf5f4e8a5b8. This can drastically reduce the repetitive work of creating hand-drawn textures that look alike but still need some variety.

jonnypei in Lecture 23: Virtual Reality (79)

What is the degree range of a human's "focused" FOV? For example, I can really notice stuff in my peripherals but when looking at a computer screen I can see maybe like half of it relatively well.

Similarly, what does it mean for the eyes to only perceive detail in a 5-degree region? That seems really small.

jonnypei in Lecture 23: Virtual Reality (19)

Does anyone know how much compute is required to run most VR applications/games? If some applications require a ton of compute and a GPU, how would that be incorporated into a sleek headset?

jonnypei in Lecture 21: Image Sensors (16)

Is it possible to develop a device to achieve a QE of approximately 1? What makes sCMOS so much better than CCDs or phone cameras? Also, does the low QE affect the color gamut of our human eyes?

jonnypei in Lecture 21: Image Sensors (2)

Something kind of random but relevant to robotics/ML research is that it takes a lot of $$$ to do manipulation/grasping experiments due to how expensive these arms are. The only places that can make big leaps in this stuff are places like Google/Meta etc. or super rich labs (e.g. Goldberg, Malik, etc.).

DTanxxx in Lecture 23: Virtual Reality (77)

Taking into consideration both rendering quality and user comfort/battery life, perhaps such tradeoffs can be mitigated (or made less severe) via completely different hardware designs? Currently most VR headsets exist as a single "headset" - one piece of hardware that needs to account for both user comfort and technical capacity (excluding controllers for now). What if the headset were split into two hardware components, one specialized for user comfort and another specialized for rendering? I wonder if something like specialized glasses that are light and easy to wear could be connected wirelessly to a "power bank" equivalent of a rendering device that takes input from the glasses and sends output back. Then the power bank could be stored somewhere else (e.g. in pockets or backpacks) without weighing down the user's head, and in turn could be made technically more robust without worrying about weight. Just some food for thought...

DTanxxx in Lecture 22: Image Processing (18)

With lossy compression, I wonder if there's a way to decompress such that it restores the information that is lost during compression. Perhaps with the power of generative AI it can be made possible (or such technology may already be mature in a different form)?

brandonlouie in Lecture 17: Physical Simulation (26)

We can see that decreasing our time step results in a trajectory that is more stable and more closely resembles the solution trajectory. I'm interested to know in what cases a smaller time step is sufficient compared to using implicit or modified Euler methods.
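A tiny experiment (my own sketch, not from the lecture) makes the trade-off concrete: forward Euler on a unit spring-mass system adds a little energy every step, so a large time step blows up while a small one merely drifts.

```python
def simulate(dt, steps, k=1.0, m=1.0):
    """Explicit (forward) Euler on x'' = -(k/m) x, starting at x=1, v=0."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -k / m * x                     # acceleration at the current state
        x, v = x + dt * v, v + dt * a      # step position and velocity
    return x

# The true solution is cos(t), which stays within [-1, 1] forever.
print(simulate(dt=0.5, steps=200))       # diverges to a huge value
print(simulate(dt=0.001, steps=100000))  # stays bounded, close to cos(100)
```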

DTanxxx in Lecture 23: Virtual Reality (40)

On top of accommodating the human visual field of view, I think it could be interesting to leverage the simulation capabilities of VR to present what visual fields of view look like from other species' perspectives (e.g. species that have eyes on the sides of their head, more than two eyes, etc.).

brandonlouie in Lecture 17: Physical Simulation (15)

The spring equation in this slide is useful for computing the force applied to a spring in 2 or more dimensions! For one dimension, though, I find that it is easier to use Hooke's law (F = -kx) so you don't have to deal with the velocity terms (and I believe this should produce the same result).
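For concreteness, here's a sketch of a damped spring force in the style of this slide (numpy; my own formulation of the usual mass-spring form, so treat the exact expression as an assumption):

```python
import numpy as np

def spring_force(pa, pb, va, vb, ks, kd, rest_len):
    """Force on endpoint a from a damped spring connecting points a and b."""
    d = pb - pa
    length = np.linalg.norm(d)
    u = d / length                           # unit vector along the spring
    f_spring = ks * (length - rest_len) * u  # Hooke's law along the spring axis
    # Damping resists only the relative velocity *along* the spring direction.
    f_damp = kd * np.dot(vb - va, u) * u
    return f_spring + f_damp
```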

brandonlouie in Lecture 2: Drawing Triangles (51)

During the implementation of homework 1, I learned that it is also sufficient to check that each of the line tests has the same sign (that is, either they are all > 0 or they are all < 0).
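A sketch of that same-sign check (my own minimal version, with 2D points as tuples):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed test: which side of the directed edge a->b the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside_triangle(p, a, b, c):
    e0, e1, e2 = edge(*a, *b, *p), edge(*b, *c, *p), edge(*c, *a, *p)
    # All three tests sharing a sign covers both triangle winding orders.
    return (e0 > 0 and e1 > 0 and e2 > 0) or (e0 < 0 and e1 < 0 and e2 < 0)

print(inside_triangle((1, 1), (0, 0), (4, 0), (0, 4)))  # True
```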

brandonlouie in Lecture 23: Virtual Reality (37)

As a result of metamerism, it is possible for two colors that appear the same to have different spectral power distributions. The difference in SPDs is invisible to us because our cone cells may not respond differently to these differences in wavelength.

brandonlouie in Lecture 23: Virtual Reality (66)

One really common technique in the VR gaming community to achieve "full-body" tracking at home is to connect an Xbox Kinect to your computer and VR system. I'm not 100% sure what tracking techniques the Kinect uses, but from experience I've seen that it is good at identifying a skeleton/bones for a human body to be used in VR programs.

vivek3141 in Lecture 6: The Rasterization Pipeline (15)

@SKwon1220 I think the general idea is that the Z-Buffer algorithm is NOT sorting. It's not possible to use the Z-Buffer algorithm as a subroutine for sorting. An example is if you have a bunch of triangles that are on top of each other, running the Z-Buffer algorithm does not give you the triangles in sorted order. We're essentially getting rid of unnecessary information, which is what gives it the speedup.
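A minimal sketch of the per-sample depth test makes the point: each sample only compares against the value already stored at its pixel, so no global ordering of triangles is ever computed.

```python
import math

W, H = 640, 480
zbuffer = [[math.inf] * W for _ in range(H)]       # depth of closest sample
framebuffer = [[(0, 0, 0)] * W for _ in range(H)]  # its color

def shade_sample(x, y, z, color):
    # Keep the sample only if it is closer than what is already stored there.
    # Triangles can arrive in any order; nothing is ever sorted.
    if z < zbuffer[y][x]:
        zbuffer[y][x] = z
        framebuffer[y][x] = color
```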

vivek3141 in Lecture 7: Bezier Curves & Surfaces (37)

Is there any intuition as to why these curves arise? There's some layer of symmetry here between H_0/H_1 and H_2/H_3.

In addition, I wonder if there's a computational speedup from utilizing the basis functions over computing a closed-form solution.
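Assuming H_0 through H_3 here are the standard cubic Hermite basis polynomials, the symmetry is that each pair mirrors the other under t -> 1-t (a quick sketch to check):

```python
def hermite_basis(t):
    """Standard cubic Hermite basis: H0/H2 weight the endpoint positions,
    H1/H3 weight the endpoint tangents."""
    h0 = 2 * t**3 - 3 * t**2 + 1
    h1 = t**3 - 2 * t**2 + t
    h2 = -2 * t**3 + 3 * t**2
    h3 = t**3 - t**2
    return h0, h1, h2, h3

# Symmetry: H2(t) == H0(1-t) and H3(t) == -H1(1-t); the basis treats the two
# endpoints identically up to reversing the direction of the curve.
t = 0.3
h, g = hermite_basis(t), hermite_basis(1 - t)
print(abs(h[2] - g[0]) < 1e-12, abs(h[3] + g[1]) < 1e-12)  # True True
```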

snowshoes7 in Lecture 17: Physical Simulation (74)

I think it's super interesting that this problem (the Navier-Stokes equations) has such a meaningful set of applications despite lacking a closed-form solution; it makes you more appreciative of the underlying mathematics at work in graphical simulations.

snowshoes7 in Lecture 14: Material Modeling (10)

Pretty interesting, illustrative example of this concept. I have to wonder what the microfacet-like representation of Earth's continents and atmosphere would actually look like? How much would it take to simulate a view like this accurately?

rohan19a in Lecture 21: Image Sensors (81)

Noise here seems to be the effect of a preventable issue in the manufacturing process. Something to explore might be how technological advancements in manufacturing, rather than in lens or sensor science, have advanced camera technology.

rohan19a in Lecture 20: Intro to Color Science II (109)

This is interesting, because even if you are colorblind in only one detector, it affects all colors: even when what you are viewing contains little of your colorblind color, you still see something different from a person with normal color vision.

rohan19a in Lecture 18: Intro to Animation (65)

I wonder if we could create some new never before seen emotions by combining faces together!

rohan19a in Lecture 6: The Rasterization Pipeline (6)

Interesting. It's striking that these three elements alone are enough to capture images.

aravmisra in Lecture 18: Intro to Animation (19)

It's extremely interesting how mathematical some of animation is. For example, the ease-in ease-out function is mathematically defined and can be rigorously applied rather than eyeballed while animating movement. It's pretty cool that we've translated observational details about how life moves/acts into actionable math to emulate it!
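One classic example of such a mathematically defined curve is smoothstep, whose derivative is zero at both endpoints (my own sketch; the 10-frame animation is just an illustration):

```python
def ease_in_out(t):
    """Smoothstep: starts and ends with zero velocity, so motion ramps up
    gently and settles gently instead of jumping to full speed."""
    return t * t * (3 - 2 * t)

# Ease a property from 0 to 100 over 10 frames.
frames = [100 * ease_in_out(i / 9) for i in range(10)]
print([round(f, 1) for f in frames])  # small steps at the ends, big in the middle
```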

aravmisra in Lecture 18: Intro to Animation (16)

Fun fact- part of the reason that Pixar decided to have their first feature film be Toy Story was because the realism aspect of humans (hair, eyes) were not up to par, but fit "toys", so it was achievable.

aravmisra in Lecture 18: Intro to Animation (10)

Something that particularly strikes me is the crazy level of improvement in shutter speed. I mean, wow - even 1/1000th of a second is quick, and now it's exponentially faster. Is there something like Moore's law for shutter speed, does anyone know? Will we keep seeing insane shutter-speed improvements for years to come?

aravmisra in Lecture 18: Intro to Animation (8)

One thing that strikes me as interesting about this is that it's almost identical to frame-by-frame drawings that animators did (and still do to some extent) for things like movies or cartoons. For example, I watched a video of a Disney animator (in the old days of 2d animated disney movies) where the process was essentially this, except with different tools. Crazy!

zeddybot in Lecture 23: Virtual Reality (16)

Do VR headsets actually provide 360 degree FOV? I imagine that since the human eye cannot even see the full 360 degrees, the headset only needs to provide enough visual information for us to think that the image is occupying our entire visual field. I do wonder if there is any experimental data to suggest what the human eye's FOV actually is.

S-Muddana in Lecture 15: Cameras & Lenses (42)

Here's a really nice playlist of short videos explaining camera basics: https://www.youtube.com/playlist?list=PLBWs5dCYykYXo6VmL9EuetvoLSH8bEHlt

S-Muddana in Lecture 15: Cameras & Lenses (48)

I am a little confused about what it means for the lens aperture to be "stopped down" to a smaller size. Is this slide essentially saying that although a lens can open up to a maximum aperture of f/1.4, if it is stopped down to f/4, then the resulting photo's f-number is f/4? Seems logical.

TiaJain in Lecture 23: Virtual Reality (145)

The slide mentions the use of inward-facing cameras for detecting facial expressions for telepresence applications in VR. How do these cameras differentiate between involuntary facial movements and intentional expressions, and how might this technology be calibrated to ensure accurate and reliable input for user interfaces within the VR environment?

GarciaEricS in Lecture 23: Virtual Reality (143)

It's very cool that there is no machine learning involved with this - it is pure optimization. However, it seems to me that this may actually hurt the system's practicality for real-time use. Running an optimization algorithm takes a good bit of time, especially with feasibility constraints, so I wonder whether a machine learning model, which already has precomputed weights, would perform faster if trained properly. Perhaps the model would need to be so large to be useful that it ends up being slower overall.

TiaJain in Lecture 23: Virtual Reality (113)

I'm curious how these computational techniques affect the computational load and what the trade-offs might be.

GarciaEricS in Lecture 23: Virtual Reality (96)

Professor Ng discussed in class that some people are more susceptible to the flicker sensation than others, but overall, you don't notice flicker if it's fast enough. For example, light bulbs often have some flicker - they change their brightness rapidly over the course of a second - but the human eye just isn't built to pick up on changes like that, and the light appears normal. The same thing is happening here: we just don't notice the flicker if it's fast enough.

GarciaEricS in Lecture 23: Virtual Reality (73)

This duality idea that Professor Ng was talking about is very interesting to me. It makes sense that you would only need the same number of cameras as lights in the two systems, and vice versa, because we are still fundamentally trying to solve for the same 6 degrees of freedom. I wonder if we could view the scenario through the perspective of linear algebra; if we could, I bet the duality would come through as a change of basis.