
Comments

bufudash in Lecture 10: Ray Tracing (28)

I feel like better ray tracers should use different strategies to accelerate ray-scene intersection. Wouldn't it be more beneficial to classify the scene first and then tailor the acceleration strategy to it?

mmatiss in Lecture 26: Intro to Virtual Reality 1 (61)

I'm a bit confused on this: is this light field already present in VR, or is it just a potential improvement on current technology/realism?

mmatiss in Lecture 26: Intro to Virtual Reality 1 (67)

Never realized what the little posts were for; it's cool to see how the tech lines up with what we are learning. Fun fact: the little sensor bar on the Wii was just two different IR LEDs, so if it broke, you could replace it with two candles and the controller would work basically the same.

mmatiss in Lecture 26: Intro to Virtual Reality 1 (65)

It seems that environment-based support like this could take a long time. Detecting each marker and matching it to its corresponding one in a room with so many markers on the walls could have a really long runtime... Are there any algorithms suited to a space like this?

ariel-hi in Lecture 22: Color 2 (28)

Plus I had no idea the right image was showing a logo until I viewed the source images.

ariel-hi in Lecture 22: Color 2 (28)

Interesting... I can see the differences in the posted lecture video. However, when I view this slide directly using my Dell XPS 15, the flower looks only slightly different and the red images look the same!

lxjhk in Lecture 16: Cameras and Lenses (79)

A good article explaining this: https://www.picturecorrect.com/tips/the-circle-of-confusion-and-its-impact-on-photography/

Staffirisli in Lecture 16: Cameras and Lenses (76)

To use the app in Chrome, you have to go to chrome://settings/content/flash and click the toggle on the right so that it says "Ask first". That way, Chrome won't auto block flash applets.

Link to the app: https://graphics.stanford.edu/courses/cs178-10/applets/gaussian.html

Chengjie-Z in Lecture 5: Texture Mapping (34)

Texture minification is harder to solve than texture magnification.

Staffjessicajyeh in Lecture 16: Cameras and Lenses (49)

Yes, if it's too dark, we want to keep the shutter open for longer so more light can come in. This means we want a slower shutter speed, which corresponds to a larger number (e.g. 1/30 s instead of 1/500 s), since shutter speed is the duration that the shutter is open.

dangeng184 in Lecture 23: Image Sensors (68)

Ok, I realize nobody is checking these comments anymore, but I just want to make a note of this somewhere...

Birefringence works because there are two different indices of refraction depending on the polarization of light. When light reflects off a surface it becomes polarized. This means that this setup should fail in certain cases, like taking a picture of a sunset on a beach: the sunlight bounces off the waves, becomes polarized, and the OLPF doesn't work. And because there are lots of small waves, probably very far away (all the way to the horizon), you might get some aliasing because of this. I wonder if you can find this artifact in images taken with this setup.

f16falcona46 in Lecture 16: Cameras and Lenses (128)

On DSLRs, you can only autofocus while your mirror is down. When you're actually capturing the image, your mirror is up, so it doesn't take any light away (but also can't focus).

f16falcona46 in Lecture 16: Cameras and Lenses (110)

@leoadberg Cutting circular silicon dies would waste a lot of space, compared to rectangular ones.

f16falcona46 in Lecture 16: Cameras and Lenses (51)

@c-chang When you have no choice. Indoors at night, with a handheld camera (especially a phone), you can't really increase shutter speed or aperture that much.

f16falcona46 in Lecture 16: Cameras and Lenses (42)

@krentschler When light falls on a pixel on a CMOS sensor, it causes charge to accumulate in a capacitor. This total charge is what is interpreted as the brightness of the pixel. So, for most SLRs, the sensor collects light continuously as long as the shutter is open.

f16falcona46 in Lecture 16: Cameras and Lenses (17)

@YoungNathan in most situations you'll lose quality at longer distances because (1) the turbulence of the air (seeing) blurs the image, and (2) because longer lenses tend to let in less light.

f16falcona46 in Lecture 16: Cameras and Lenses (15)

@Debbieliang9 A single eye can see about 120 degrees FoV, and we unconsciously move our eyes all the time which gives us the illusion of a bigger FoV.

f16falcona46 in Lecture 16: Cameras and Lenses (13)

@frgalvan IMO no, since mass-market cameras will always be limited by the human eye in the end, so any non-perceptible improvement is not going to matter. For telescopes, etc, it's a different story.

f16falcona46 in Lecture 21: Color 1 (38)

@ethanbuttimer, yes, sRGB (the usual color space on computers) cannot replicate all physical colors because they would require a negative color coordinate from R, G, or B.

f16falcona46 in Lecture 6: Rasterization Pipeline (36)

@ImRichardLiu for sure, because sometimes your vertex shading hardware is weak (or you want to have a low-poly model) while your fragment shading hardware is strong. For example, some systems only support 16-bit vertex indices, so you can only have ~64k vertices.

f16falcona46 in Lecture 6: Rasterization Pipeline (35)

@caokevinc

The power of shaders (e.g., in OpenGL ES 2.0) means you can interpolate anything you want. OpenGL does not care if it's a color, a normal, a texture coordinate, a position, etc.
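
A minimal sketch of that idea in Python rather than GLSL (the attribute names here are made up for illustration): the rasterizer just blends whatever per-vertex numbers you hand it with the fragment's barycentric weights, without caring what they mean. Real hardware also does perspective-correct interpolation, which this ignores, and normals would typically be renormalized afterwards.

```python
import numpy as np

def interpolate_varyings(bary, v0, v1, v2):
    """Barycentric interpolation of arbitrary per-vertex attributes.

    `bary` holds the fragment's barycentric weights (alpha, beta, gamma);
    v0/v1/v2 are dicts of per-vertex attributes (color, normal, UV, ...).
    """
    a, b, g = bary
    return {name: a * np.asarray(v0[name])
                  + b * np.asarray(v1[name])
                  + g * np.asarray(v2[name])
            for name in v0}

# Example: interpolate color, normal, and UV at the triangle's centroid.
v0 = {"color": [1, 0, 0], "normal": [0, 0, 1], "uv": [0.0, 0.0]}
v1 = {"color": [0, 1, 0], "normal": [0, 0, 1], "uv": [1.0, 0.0]}
v2 = {"color": [0, 0, 1], "normal": [0, 1, 0], "uv": [0.0, 1.0]}
print(interpolate_varyings((1/3, 1/3, 1/3), v0, v1, v2))
```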

f16falcona46 in Lecture 6: Rasterization Pipeline (32)

@JiaweiChenKodomo

Phong shading usually refers to interpolating normals, then doing per-pixel shading. You can, however, apply the Phong reflection model to per-vertex shading.

v-wangg in Lecture 16: Cameras and Lenses (107)

Does this happen because the lens is curved?

JiaweiChenKodomo in Lecture 23: Image Sensors (68)

  1. A pixel is now able to sample four rays.
  2. A ray will be sampled by all RGGB pixels in this way.

ethanbuttimer in Lecture 21: Color 1 (38)

Does this imply that we cannot actually replicate our perception of wavelengths ~430 - 540 nm using these three RGB wavelengths? "Negative" amounts of light don't actually exist, they would just be changing the color trying to be replicated. Can our computer screens reliably reproduce our perception of these wavelengths?

f16falcona46 in Lecture 14: Intro to Material Modeling (12)

@yanda-li we just assume this by convention because it makes our calculations easier

YoungNathan in Lecture 16: Cameras and Lenses (59)

What is the error tolerance for these types of mechanical shutters, such that tiny errors in the interval still give a well-exposed picture? For example, deviations in the speed of the top and bottom curtains, or in the window of exposure.

SourMongoose in Lecture 14: Intro to Material Modeling (5)

Do perfectly or near-perfectly diffuse materials actually exist in real life? It seems like any material should at least somewhat reflect light back in the original direction.

SourMongoose in Lecture 14: Intro to Material Modeling (1)

Prof. Ng mentioned atmospheric refraction for the first image; this could be an interesting challenge for a project: calculating refraction through moving air.

SourMongoose in Lecture 13: Global Illumination & Path Tracing (103)

For multiple light sources, it may be useful to sample the lights based on distance from the point or solid angle, so that closer lights tend to have a higher chance of being sampled.
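
A minimal sketch of that idea in Python, under the assumption that each light can give a rough solid-angle estimate from the shading point (the `Light` class and its fields are hypothetical, not from the assignment code). The returned pdf must divide the sampled light's contribution to keep the estimator unbiased.

```python
import random

class Light:
    def __init__(self, position, area):
        self.position = position  # (x, y, z) of the emitter
        self.area = area          # surface area of the emitter

def solid_angle_estimate(light, point):
    """Crude solid-angle proxy: emitter area over squared distance to the point."""
    d2 = sum((lp - p) ** 2 for lp, p in zip(light.position, point))
    return light.area / max(d2, 1e-8)

def pick_light(lights, point):
    """Sample one light with probability proportional to its estimated solid angle."""
    weights = [solid_angle_estimate(l, point) for l in lights]
    total = sum(weights)
    pdfs = [w / total for w in weights]
    idx = random.choices(range(len(lights)), weights=weights, k=1)[0]
    return lights[idx], pdfs[idx]   # divide the light's contribution by this pdf
```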

YoungNathan in Lecture 16: Cameras and Lenses (17)

So then for optical zoom, are there quality differences in resolution when we take a shot of the same subject at the same size in the frame, one taken close up with a shorter focal length and one taken farther away with a longer focal length?

Yanda-Li in Lecture 14: Intro to Material Modeling (12)

Can someone please explain why wi is pointing outward rather than inward?

Yanda-Li in Lecture 16: Cameras and Lenses (84)

I don't really understand the part of zs, where does that equation come from?

Debbieliang9 in Lecture 15: Advanced Topics in Material Modeling (43)

We can see clearly that the fur of this cat lights up from the left to the right. The color of its fur changes and the bright part becomes brighter. However, I don't understand why the cat's eyes don't change. Is it because the change of medulla size impacts different materials differently?

Debbieliang9 in Lecture 15: Advanced Topics in Material Modeling (30)

Looking at this image on my own computer, I am wondering why we perceive the diffusion of light as these little green and purple dots. If the surface is receiving light from the same light source, how can it reflect different points on the spectrum?

Debbieliang9 in Lecture 16: Cameras and Lenses (127)

I think the slide explains the definition of "being in focus," but I still don't understand how this is implemented automatically. Do the cameras just detect whether the object that maps to the middle of the sensor is "in focus"?

Debbieliang9 in Lecture 16: Cameras and Lenses (114)

After going over all the slides and now coming back to this one, I still don't understand how multiple layers of lenses can fix the issue of non-convergent rays. Do the lenses get averaged, or do the rays somehow get corrected so that we get an approximation of the focal point?

evan1997123 in Lecture 16: Cameras and Lenses (82)

I hope this isn't super dumb, but why are they circles? If we were using pixels, wouldn't they be squares?

leekaling in Lecture 14: Intro to Material Modeling (10)

Recently I have been learning how to do basic shading in Maya, and I figured out that glassy textures like these take a long time to render since there are a lot of bounces for each ray.

leekaling in Lecture 16: Cameras and Lenses (67)

I believe so. The ray doesn't change direction if it passes through the midpoint of the lens. That said, I think it would be better to trace both the parallel and focal ray then use the chief ray to make sure all 3 points intersect at the same point.

leekaling in Lecture 16: Cameras and Lenses (41)

I am guessing the light is blinking at a certain rate, so only points of light were captured along its path.

leekaling in Lecture 16: Cameras and Lenses (19)

I believe we would change the focal length to blur the front and focus at the back.

samuelchien8979 in Lecture 14: Intro to Material Modeling (12)

Can someone explain the relationship between phi and theta in the left and right diagrams? I understand them by themselves but can't piece them together.

michaelzhiluo in Lecture 16: Cameras and Lenses (55)

We can tie this concept back to the beginning of the course, to the Nyquist frequency and antialiasing. If one has a high shutter speed, but samples at a frequency that is lower than or equal to the frequency of the motion, then we will capture aliasing artifacts (e.g. helicopter blades that appear stopped in motion). I believe most cameras' shutter speeds only work for ordinary scenes; otherwise we would need crazy shutter speeds to capture glass breaking, photons moving, etc.
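
A tiny numerical illustration of that temporal-aliasing point (the numbers are hypothetical): sampling a blade spinning at 24 rev/s with a 24 fps camera catches it at the same phase every frame, so it appears frozen.

```python
blade_rate = 24.0      # revolutions per second (assumed)
frame_rate = 24.0      # frames per second -- well below the 2 * 24 Hz Nyquist rate

for frame in range(5):
    t = frame / frame_rate
    angle = (360.0 * blade_rate * t) % 360.0
    print(f"frame {frame}: blade at {angle:.1f} degrees")  # always 0.0 -> blade looks frozen
```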

michaelzhiluo in Lecture 16: Cameras and Lenses (44)

If we look at the units, exposure is a bit more intuitive. Exposure time is in seconds (s) and irradiance is in W/m^2. If we multiply the SI units together, we get (W/m^2)·s = J/m^2. It is the total energy falling per unit area! The more energy per area, the higher the exposure!
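
Writing that out (with H the exposure, E the irradiance, and t the exposure time):

```latex
H = E \cdot t, \qquad
[H] = \frac{\mathrm{W}}{\mathrm{m}^2}\cdot\mathrm{s} = \frac{\mathrm{J}}{\mathrm{m}^2}
```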

michaelzhiluo in Lecture 16: Cameras and Lenses (33)

I would like to add that the rule of thirds is pretty important for photography (https://digital-photography-school.com/rule-of-thirds/). It's not only that your subject should take up about 1/3 of the image, but also that you should place your subjects at the 1/3 and 2/3 positions of the frame.

WolfLink in Lecture 10: Ray Tracing (33)

In that case, t does not have a well-defined value, because any value of t would result in a point on the plane. If you really needed to handle a case like this, you would probably have to detect the situation and use another algorithm, e.g. a ray-line intersection against the bounding edges of the face contained in the plane. However, you generally would not need to worry about this, because with randomly generated rays such extremely specific cases will statistically never happen.
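
A minimal sketch of detecting that case in Python (the names are my own, not from the assignment code): when the ray direction is perpendicular to the plane normal, the denominator of the usual t formula vanishes, so we either report "no hit" (ray parallel but off the plane) or flag the coplanar case for special handling.

```python
import numpy as np

def ray_plane_intersect(o, d, p0, n, eps=1e-9):
    """Intersect the ray r(t) = o + t*d with the plane (x - p0) . n = 0.

    Returns t, or None if the ray is parallel to (and off) the plane,
    or 'coplanar' if the ray lies entirely in the plane (t is undefined).
    """
    o, d, p0, n = map(np.asarray, (o, d, p0, n))
    denom = np.dot(d, n)
    if abs(denom) < eps:                    # ray perpendicular to the normal
        if abs(np.dot(p0 - o, n)) < eps:    # origin also lies on the plane
            return "coplanar"               # every t satisfies the plane equation
        return None                         # parallel, never hits
    t = np.dot(p0 - o, n) / denom
    return t if t >= 0 else None            # only accept hits in front of the origin
```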

Debbieliang9 in Lecture 16: Cameras and Lenses (15)

Do our eyes generate an image in our brain the way a camera generates an image with a normal focal length? Why do we sometimes feel that our eyes can see a broader view than the camera we are holding? I assume that we can see an angle of about 180 degrees with our eyes?

dannychuy in Lecture 10: Ray Tracing (75)

@evan199713 If you imagine the ray shining from the bottom up (a ray starting between the "child1" and "child2" text and going up), the ray can hit the blue box first, but hit the yellow triangle before it hits the blue triangle.

DaddyGang in Lecture 16: Cameras and Lenses (127)

Retail digital cameras actually employ this technique! https://www.digitaltrends.com/photography/camera-autofocus-guide/

DaddyGang in Lecture 16: Cameras and Lenses (102)

So what we implemented in assignment 3 was essentially a pinhole camera. But this can be easily extended with the same physically-based rendering framework. This is really powerful: as long as you know the physics, you know how to code it. Here is a reading that I found particularly helpful in understanding how to trace a ray through a camera lens. https://computergraphics.stackexchange.com/questions/2042/ray-tracing-with-thin-lens-camera

DaddyGang in Lecture 16: Cameras and Lenses (93)

It's interesting to draw the connection here between real-world photography and computer graphics. Photographers use depth of field to create focus and motion blur to tell stories. So do animated films, games, and CGI trailers! In modern games, depth of field is also used as a way to back off from rendering large numbers of objects: we can render just the main character who is close, while all the background elements are blurred out to direct the player's attention and also save on texture loading.

DaddyGang in Lecture 16: Cameras and Lenses (34)

I took a film course at a UCLA summer session. They also introduced this dolly-zoom technique. Surprisingly, it is applied in more films than we think; we just barely notice it. But it does help create a smooth transition of the audience's focal point and achieve the desired artistic goals.

wleeym08 in Lecture 16: Cameras and Lenses (24)

@CelticsPwn Amazing work! I really like these monotone photos especially the pikachu (is it?) which looks creepy to me.

wleeym08 in Lecture 16: Cameras and Lenses (82)

@frgalvan Interesting. I guess it's not totally unrelated. But obviously they tweak a lot of stuff during production. The lens flare effect can be generated quite easily. (You can try making one using software such as Adobe After Effects.)

wleeym08 in Lecture 15: Advanced Topics in Material Modeling (14)

So far we have used naive sampling for path tracing to render objects in the assignments. But it's nearly impossible to render realistic materials that way, as they have more complex surface properties. Therefore we have to reduce the complexity by using normal mapping together with other techniques/math models for rendering.

AlexTLuo in Lecture 15: Advanced Topics in Material Modeling (59)

I'm curious how much of an effect participating media have on the computational complexity of rendering. Intuitively, it feels like taking into account scattering from water or clouds requires a lot of effort.

AlexTLuo in Lecture 16: Cameras and Lenses (65)

How close to this ideal setting do real lenses usually get? Also, is it more problematic if the focal point is too far forward/backward or if the focal point is not centered behind the middle of the lens?

AlexTLuo in Lecture 16: Cameras and Lenses (55)

That's an interesting idea. To me, intuitively it feels difficult, since high quality images contain a lot of details that might not be captured in any of the low quality images. Although I'm guessing that the level of blurriness is a factor.

AlexTLuo in Lecture 16: Cameras and Lenses (18)

What advantages are there to using a fisheye lens to get a wide angle picture compared to how phones take panoramic pictures? I was under the impression that phones stitch together normal images in order to create the wide angle effect.

mylesdomingo in Lecture 15: Advanced Topics in Material Modeling (91)

This reminded me of some examples from codepen using three.js to apply perlin noise to objects.

https://codepen.io/ya7gisa0/pen/vGJvWw

mylesdomingo in Lecture 16: Cameras and Lenses (29)

Is there any particular reason to use shorter focal lengths in an image like this? I'm struggling to see the purpose of short lenses for human subjects.

mylesdomingo in Lecture 15: Advanced Topics in Material Modeling (49)

For fur models, aren't the physical calculations GPU-intensive even at modest samples per pixel per fiber? I remember that AI-rendered scenes were becoming more common --

https://www.youtube.com/watch?v=7wt-9fjPDjQ

reid69 in Lecture 16: Cameras and Lenses (67)

Is this always the case for the chief ray? Meaning, does any ray passing through the center of the lens continue outward in the same direction? If so, it seems like that would be an easier ray to trace, conceptually, than the parallel rays (although of course you would still have to trace at least one of them to compute an intersection). Or does this just happen in this particular case? Are there cases when the chief ray wouldn't behave like this?

reid69 in Lecture 16: Cameras and Lenses (11)

I think it works pretty much the same way, if you just imagine turning the image 90 degrees clockwise. I'm pretty sure this implies that the ratio of deltaWidth/width is just exactly the same as the ratio of deltaHeight/height, meaning that the deltas are not exactly the same, but are in the same proportion to their respective measurements.

reid69 in Lecture 16: Cameras and Lenses (41)

I'm confused about why there are so many points of light on the path. It seems like they correspond to light sources on the tips of the wings of the plane and one in the middle, but if this is a long exposure photo, shouldn't they just be blurred out into lines? They really look like artifacts of a camera shutter, as if the camera is taking a shot at an even spacing as the plane moves on the path, or at least somehow putting more emphasis on these particular snapshots. You can even see that the spacing is wider as the plane is moving vertically, as if it's moving faster during that section before leveling out. But this doesn't make any sense to me if the aperture is open for the entirety of the shot.

ravenseattuna in Lecture 15: Advanced Topics in Material Modeling (52)

I was wondering if there are any other cases where something difficult to render is approximated by something easier. Could something that's typically difficult to render, like water, be approximated by something else?

ravenseattuna in Lecture 15: Advanced Topics in Material Modeling (92)

Lots of procedurally generated terrain such as in No Man's Sky and Minecraft use Perlin Noise in order to create their varied landscapes: https://en.wikipedia.org/wiki/Perlin_noise

ravenseattuna in Lecture 15: Advanced Topics in Material Modeling (22)

Water simulation is a constantly developing field. Disney had to develop new techniques just to better simulate the water in Moana: https://phys.org/news/2017-01-mathematicians-ocean-life-disney-moana.html

GregoryD2017 in Lecture 16: Cameras and Lenses (74)

This was one of the hardest concepts for me to grasp, and this video did a great job separating the concepts to make it simpler. https://www.youtube.com/watch?v=4CoEsqePADw The following article also did a great job contrasting the lenses that photographers actually have to purchase (zoom lens, prime lens): https://www.nikonusa.com/en/learn-and-explore/a/tips-and-techniques/understanding-focal-length.html

susan-lin in Lecture 15: Advanced Topics in Material Modeling (70)

I think you're right -- I feel like the BSSRDF is "blurring / smoothing the rougher details" because of how it changes the movement of light, hence reducing the harsh shadows (that would typically define the "rougher details" more)

susan-lin in Lecture 14: Intro to Material Modeling (30)

While I understand that isotropic materials diffuse light uniformly and anisotropic materials have an orientation (causing them to scatter light in an oriented way), I'm a little confused about what the equation in this slide is trying to convey.

denniscfeng in Lecture 16: Cameras and Lenses (8)

On modern photosensor arrays, there are actually twice as many green-filtered sensors as blue or red ones (as visible in the first picture). This is known as a Bayer filter: https://en.wikipedia.org/wiki/Bayer_filter. The main reason for this is to better mimic the human eye's sensitivity to the color green.
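
A quick way to see the 2-to-1 green ratio is to tile the 2x2 RGGB cell over a patch of the sensor (just an illustration, not any particular sensor's layout):

```python
import numpy as np

# One 2x2 Bayer cell: R G / G B.  Tiling it over the sensor gives
# 50% green, 25% red, and 25% blue photosites.
cell = np.array([["R", "G"],
                 ["G", "B"]])
mosaic = np.tile(cell, (4, 4))          # an 8x8 patch of the color filter array
values, counts = np.unique(mosaic, return_counts=True)
print(dict(zip(values, counts)))        # {'B': 16, 'G': 32, 'R': 16}
```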

denniscfeng in Lecture 16: Cameras and Lenses (33)

I'm not sure about the professor's suggestion to use digital zoom, since it is effectively the same as taking a full-resolution image and then cropping it later. In fact, you might want to do just that instead of using digital zoom, so you capture a wider view and can later, in post, play around with framing/cropping your subject. Plus, the crop can be of arbitrary size and shape.

jessicamindel in Lecture 15: Advanced Topics in Material Modeling (85)

This is fascinating! In music, there's a type of sound design called granular synthesis which splits an audio sample into 10s to 1000s of tiny sub-samples called grains, and then manipulates the playbacks and individual effects chains of these grains to yield a unique new sound. I wonder what a similar pipeline might look like in generating a granular material from a more macro-scale material--perhaps it would act as an incinerator or shredder of sorts. As @clarkd2017 asked, I also wonder if this would make the resulting grains shinier--or whether the size, sampling method, etc., of each grain would make a drastic difference.

denniscfeng in Lecture 16: Cameras and Lenses (17)

I think only digital zoom loses resolution/image quality when zooming in, since it is equivalent to simply taking the full-resolution "unzoomed" image and cropping in the part that you want zoomed. However, with optical zoom, the full resolution of the sensor is used to capture a small field of view due to the manipulation of light by the lens.
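
A rough sketch of why digital zoom costs resolution (the array sizes and the 2x factor are made up for illustration): a 2x digital zoom keeps only a quarter of the captured pixels and then upsamples them back to full size, whereas optical zoom would have used every photosite on the narrower field of view.

```python
import numpy as np

zoom = 2                                       # hypothetical 2x digital zoom
sensor = np.random.rand(1000, 1500, 3)         # stand-in for a full-res capture
h, w, _ = sensor.shape

# Digital zoom step 1: crop the central region (only h*w/zoom^2 real pixels survive).
crop = sensor[h//2 - h//(2*zoom): h//2 + h//(2*zoom),
              w//2 - w//(2*zoom): w//2 + w//(2*zoom)]

# Step 2: blow it back up to full size (nearest-neighbor here for simplicity).
# No new detail is created; we just interpolate the missing pixels.
digital_zoom = crop.repeat(zoom, axis=0).repeat(zoom, axis=1)
print(crop.shape, digital_zoom.shape)          # (500, 750, 3) (1000, 1500, 3)
```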

jessicamindel in Lecture 15: Advanced Topics in Material Modeling (79)

@jadesingh I think what's difficult about the image on the right, if rendered via other methods, is that the buildup of fibers along narrow folds in the fabric is what creates higher opacity, which is also highly dependent on the viewing angle. If a material could be modeled which maps perhaps the number of overlapping fibers in one location to the opacity of the surface, this might work well--but I'm not sure how projections on the material and view angle dependence would change, or whether it would be an accurate approximation. Nonetheless, an interesting challenge!

jessicamindel in Lecture 15: Advanced Topics in Material Modeling (22)

A few interesting related materials that connect this to human perception of a render this detailed:

  • This incredible data visualization by Refik Anadol emulates waves, but uses a collection of voxels instead of continuous, precise noise to build up something undeniably reminiscent of the ocean. I'm inclined to say that it's the motion in this piece that lends it life--on its own, the model might look less like waves, though the colors align well. I wanted to offer this as an interesting comparison point: the human mind can lend connection and narrative to something so deeply with so much less context than a realistic image offers, and it almost develops more character.

  • This HCI paper describes what it calls "experiential fidelity," wherein what makes a display high-resolution is not the literal quality of the image, but the way in which anticipation, expectations, and delight build up for the user before, during, and after an immersive sensory experience (the paper focuses on VR). This is another interesting counterpoint to such precise detail: the mind builds an unseen world with such a rich perspective almost immediately, where rendering a rich image takes so much fine tuning.

jessicamindel in Lecture 15: Advanced Topics in Material Modeling (19)

I'm suddenly reminded of k-means clustering, as described in EE 16B: the direction of the cluster tells us a lot about how clean or streamlined a texture or dataset is. It would be interesting to then reverse-engineer this--to create something that takes a k-means cluster, interprets it with opacity data based on the density of points in the point cloud to mimic the fabric-like forms of the p-NDF, and then produces a normal map to yield a material representative of data sampled from an entirely different dimension (audio, text, pressure, weather data, strange modes of human input, etc.). Would such an inversion be possible with a probabilistic model like this one?

akyang in Lecture 15: Advanced Topics in Material Modeling (71)

How much more compute is needed for a BSSRDF compared to a BRDF? Is it always a lot more, or does it really depend on the objects being rendered?

akyang in Lecture 16: Cameras and Lenses (14)

A larger focal length narrows the field of view. So objects that appear small (far-away objects) in an image shot with a smaller focal length are magnified when shot with a larger focal length, because the same object now occupies a larger fraction of the field of view.
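
The usual relation, with h the sensor height (or width) and f the focal length:

```latex
\text{FOV} = 2\arctan\!\left(\frac{h}{2f}\right)
```

So increasing f shrinks the FOV, and the size of the subject's image on the sensor grows roughly in proportion to f (exactly so in the small-angle regime).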

akyang in Lecture 16: Cameras and Lenses (33)

It's cool that different camera settings can have really different results and make whatever you're shooting look better. Like how longer focal lengths can make people with narrower faces look wider and flatter, and vice versa.

leekaling in Lecture 10: Ray Tracing (33)

What if the ray is perpendicular to the normal vector and the line is ON the plane? What would the value of t be in this case?

GregoryD2017 in Lecture 15: Advanced Topics in Material Modeling (56)

Unreal Engine 4 uses volumetric fog to create natural god rays. It looks similar to the image described above, but artists are able to mimic the sharp cut of light through the sky by using the same concept of participating media. Here's a link to a tutorial: https://www.youtube.com/watch?v=Akb4P71KL0s

GregoryD2017 in Lecture 15: Advanced Topics in Material Modeling (90)

Usually, you can tweak the results based on what the artist is looking for. Unreal Engine 4 supports procedural materials, and you can see from the screen cap of the UI that there are numerous fields the artist or game maker can tweak to their liking. https://wiki.unrealengine.com/Procedural_Materials. What's even more impressive is that UE4 does this in real time: unlike our offline renders, where we look at the rendered still afterwards, game engines do this on the fly.

buzzonetwo in Lecture 16: Cameras and Lenses (105)

@jinwoopark1673 Usually that happens to people who have astigmatism (and are frequently also nearsighted, like me). There does seem to be some connection between astigmatic lenses and bokeh with regard to how lenses focus light - https://www.bhphotovideo.com/explora/photography/tips-and-solutions/understanding-bokeh

buzzonetwo in Lecture 16: Cameras and Lenses (51)

@c-chang I think if you are capturing a scene with a high probability of motion blur, like a fast-paced sport or birds flying, you would need a higher ISO, since the exposure time has to be really short for a sharp image. The tradeoff would probably be grainy noise.

buzzonetwo in Lecture 15: Advanced Topics in Material Modeling (59)

I'm kinda confused about how this ray is being traced through the medium. From our perspective, are we tracing to an intersection with the medium, then doing the random walk with intermittent points connecting to a light source inside the medium? Where is the light source established, since I thought the phase function isn't constant and the scattering inside won't be uniform?

evan1997123 in Lecture 16: Cameras and Lenses (19)

I'm a little bit confused. I thought that with this specific setup, we would have blur in the front and no blur in the back. However, even though it's different from the other pointy ones, it is still a similar blur? How would we get blur in the front but not blur in the back?

evan1997123 in Lecture 15: Advanced Topics in Material Modeling (51)

Could we possibly use a system similar to how we stop and avoid ray tracing to infinity? Like, could we just have some probability of stopping the trace, driven by a random variable?
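
That is essentially Russian roulette termination, the trick used to stop path tracing after finitely many bounces without biasing the estimate. A minimal Python sketch of the reweighting, on a toy "path" whose shading is stubbed out (the albedo and continuation probability are made-up numbers):

```python
import random

def path_radiance(depth=0, continue_prob=0.7, albedo=0.5):
    """Toy random walk with Russian-roulette termination.

    The 'true' answer is sum_k albedo**k = 1 / (1 - albedo) = 2.  Each level
    continues only with probability continue_prob, and the surviving
    continuation is divided by continue_prob so the expected value is unchanged.
    """
    this_bounce = albedo ** depth
    if random.random() > continue_prob:
        return this_bounce                   # terminate the random walk here
    return this_bounce + path_radiance(depth + 1, continue_prob, albedo) / continue_prob

# Averaging many stochastic paths converges to the analytic value of 2.
print(sum(path_radiance() for _ in range(100_000)) / 100_000)
```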

Chengjie-Z in Lecture 10: Ray Tracing (20)

I think the formula in this slide can be used to compute t, b_1, and b_2. Then we need to determine whether they are within the proper range.
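
A minimal sketch of that range check in Python, written in the standard Möller-Trumbore formulation (which may use different letters than the slide): a hit counts only if t >= 0, b_1 >= 0, b_2 >= 0, and b_1 + b_2 <= 1.

```python
import numpy as np

def ray_triangle(o, d, p0, p1, p2, eps=1e-9):
    """Solve for (t, b1, b2) and check that they are in range."""
    o, d, p0, p1, p2 = map(np.asarray, (o, d, p0, p1, p2))
    e1, e2 = p1 - p0, p2 - p0
    s = o - p0
    s1 = np.cross(d, e2)
    s2 = np.cross(s, e1)
    denom = np.dot(s1, e1)
    if abs(denom) < eps:
        return None                      # ray (nearly) parallel to the triangle
    t  = np.dot(s2, e2) / denom
    b1 = np.dot(s1, s)  / denom
    b2 = np.dot(s2, d)  / denom
    # "Proper scope": hit in front of the ray origin and inside the triangle.
    if t >= 0 and b1 >= 0 and b2 >= 0 and (b1 + b2) <= 1:
        return t, b1, b2
    return None
```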

evan1997123 in Lecture 14: Intro to Material Modeling (23)

I thought the main reason Ren would bring up this image is that the Earth is somewhat rough, even though it looks smooth from space. We know there are mountains and buildings and small surfaces that make it look different. What's this got to do with the ocean?

JiaweiChenKodomo in Lecture 16: Cameras and Lenses (110)

Even if the lens is circular, it is able to project a scene of any shape. Theoretically, if you have an infinite scene, a lens is still able to image it onto an infinite film. Therefore, the shape of the lens is not what determines the shape of the sensor.

c-chang in Lecture 16: Cameras and Lenses (51)

What would be the point of really ever going past ISO 400 or 800, let's say? I've heard the general rule of thumb (could be totally wrong) when shooting photos to stay within ISO 800, and to adjust all the other controls at that point.

c-chang in Lecture 15: Advanced Topics in Material Modeling (41)

@chen-eric Really interesting point! It makes me think of the new movie The Call of the Wild where the dog is completely CGI, and clearly possesses human traits. The dog looks realistic enough... but something just feels off about it, and there's quite a bit of backlash from viewers about that. I wonder if they used the human hair rendering?

c-chang in Lecture 15: Advanced Topics in Material Modeling (12)

This kind of stuff really makes me think about material design and designing as a whole, and the inverse situation where we can render something graphically that is actually really hard to replicate in real life.

JiaweiChenKodomo in Lecture 16: Cameras and Lenses (81)

Mathematically speaking, a point light is a Dirac Delta function. The convolution of the Delta function with a kernel function will return the kernel function, centered at the location of the point light.
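
In symbols, with the point light at x_0 and k the blur kernel (the bokeh shape):

```latex
(\delta_{x_0} * k)(x) \;=\; \int \delta(u - x_0)\, k(x - u)\, du \;=\; k(x - x_0)
```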

kwsong in Lecture 16: Cameras and Lenses (13)

Chiming in, I think the always-improving quality on cell phone cameras is quite impressive given their size limitations. My partial understanding of this is that phones compensate with more sophisticated image processing that brings the quality up sufficiently for the casual photo-snapper. For instance, this article (old) cites how phones use accelerometer/gyroscope data to more accurately provide image stabilization: https://www.wired.com/2015/12/smartphone-camera-sensors/ . Blur is a pretty dominant issue among point-and-shooters, and if you have data about how a camera was moving to create a certain blur, you can more accurately/easily remove that blur!

kwsong in Lecture 15: Advanced Topics in Material Modeling (41)

maybe it's just the shading, but the human hair rendering (both here and on a previous slide) looks almost too glossy to me. It'd be interesting to see how many people would actually accept such renderings as-is as "realistic enough" to pass for a photo, vs. how many people would realize that they are computer-generated.

kwsong in Lecture 15: Advanced Topics in Material Modeling (28)

Agreed that that would be nice for clarity! I would say that I think the macbook surface would still benefit noticeably from wave optics, though. The human eye can pick up a lot of visual deltas that arise from the small microsurface variations on a surface, the slight imperfections of a material (no material is perfectly diffuse or glossy), etc. Without these, I'm guessing we might feel that the image is "missing" something