cezhang commented on slide_020 of Intro to Color Science 2 ()

@James Monitors are not necessarily designed to meet sRGB for display. However, in order to have fair comparisons across different monitors, as you mentioned, we have an 'agreed' version of what RGB should look like. Different monitors can then be calibrated to that RGB standard, though that may not be their default.

Logikable commented on slide_032 of Intro to Color Science 2 ()

There is something known as the McCollough effect. It is a (rather seriously dangerous) aftereffect that causes colourless lines to appear to have colour. It doesn't take long to induce, but it can affect vision semi-permanently. It can be especially detrimental for people who work with text (that includes us students). Please be careful.

Tom Scott has a great video [1:42] on the topic.

Logikable commented on slide_022 of Intro to Color Science 1 ()

I think it's interesting to think about why humans (and some other primates) evolved to have 3 types of cone cells. Clearly, one is insufficient, because a single response curve cannot distinguish wavelengths on opposite sides of its peak.

However, why is it that 2 is insufficient? Suppose we only had the S and M cone cells. Note that the S cone cell has no sensitivity to light above a certain wavelength. In that case, all colours with a wavelength above ~520 nm look approximately the same. See the Protanope visible spectrum here. The same is true if we removed the M cone cell instead (deuteranopia).

If we remove the S cone cell, while the M and L response curves span most of the visible spectrum, it becomes increasingly difficult to differentiate between colours at the left and right ends. Note in the image that tritanopes see reddish colours both when the actual colour is red and when it is violet.

So why do we need to differentiate between all of these colours? Research suggests that red, green, and blue are the most useful colours to differentiate for human survival.

Logikable commented on slide_045 of Intro to Color Science 1 ()

I imagine the reason most colour gamuts comprise only 3 primaries is that the benefit of having more is insignificant. As seen on this slide, the further you get from the bottom-left corner of the xy chromaticity diagram, the larger the areas of "identical" colours become.

As such, common colour gamuts like sRGB and Adobe RGB only cover 40-50% of the CIELAB colour space. Do note that there also exist colour gamuts that cover a high percentage of the CIELAB space - see the ProPhoto colour space, whose gamut is so wide that some of its primaries are "imaginary colours".

That being said, you can find higher-dimensional colour models in printing - see the CMYK model, Pantone's discontinued Hexachrome, and their Solid-to-Seven Set, which have 4, 6, and 7 base inks, respectively. Do note that these are subtractive colour models: the base colour is white (the paper), and any ink applied subtracts wavelengths from the reflected light.
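To make "subtractive" concrete, here is a minimal sketch (my own illustration, not from the lecture) of the naive RGB-to-CMY conversion, where each ink value says how much of a channel the ink removes from white paper:

```python
# Naive RGB -> CMY: ink = 1 - light. Purely illustrative; real print
# pipelines use calibrated colour profiles, not this formula.
def rgb_to_cmy(r, g, b):
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(1.0, 0.5, 0.0))  # orange: no cyan, some magenta, full yellow
```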

rnl commented on slide_081 of Smooth Curves and Surfaces ()

@smol_cactus If you look at slide 66: when the middle 2 control points of each cubic Bézier curve are collinear with the same third point, they form an "A frame" shape, and that condition is what gives C2 continuity at the join.
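For anyone who wants to check the algebra rather than the picture, here is a small numeric sketch (my own construction and example points, not from the slides): it builds the next segment's control points so the first and second derivatives match at the join, which is the condition the A-frame picture encodes.

```python
import numpy as np

# Two cubic Beziers P0..P3 and Q0..Q3 sharing the join Q0 = P3.
# C1 at the join: Q1 = 2*P3 - P2 (P2, P3, Q1 collinear, P3 the midpoint).
# C2 at the join: P1 - 2*P2 + P3 = Q0 - 2*Q1 + Q2 (second derivatives match).
P0, P1, P2, P3 = [np.array(p, float) for p in [(0, 0), (1, 2), (3, 3), (4, 1)]]
Q0 = P3
Q1 = 2 * P3 - P2
Q2 = P1 - 4 * P2 + 4 * P3        # solves the C2 condition given Q1 above
assert np.allclose(P1 - 2 * P2 + P3, Q0 - 2 * Q1 + Q2)
print("C2 at the join:", Q1, Q2)
```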

rnl commented on slide_004 of Geometry Processing ()

@Anna My intuition says that since the eighths and sixteenths add up to 1, the update preserves the "power" (for lack of a better term) of the vertex positions. If the weights did not add to 1, the approximation would drift away from the original geometry.
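As a quick sanity check of that intuition (a hedged sketch using the valence-6 old-vertex mask as I remember the Loop rules): if every vertex sits at the same point, weights summing to 1 leave it fixed, while any other sum would scale it.

```python
import numpy as np

# Valence-6 old-vertex mask: 5/8 for the vertex itself, 1/16 for each of
# its 6 neighbors. The weights sum to 5/8 + 6/16 = 1.
weights = [5 / 8] + [1 / 16] * 6
p = np.array([2.0, -1.0, 5.0])   # all 7 vertices at the same point p
new_p = sum(w * p for w in weights)
print(new_p)                      # == p: no drift when the weights sum to 1
```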

@smol_cactus I like your username! Also, the new vertices are the 3 vertices created at the edge midpoints when the original triangle is split into 4 triangles, as seen at the top of the slide. The old vertices are the 3 original vertices of the original triangle.

@tmmydngyn I think it's easier to think of it through the dot product, starting from the $2(\omega_i\cdot n)n$ part. By the definition of the dot product, $\cos\theta=\omega_i\cdot n$, since both $\omega_i$ and $n$ are unit vectors. Also recall that the dot product gives the scalar projection of $\omega_i$ onto $n$, which has length $|\omega_i|\cos\theta$. So the $\cos\theta$ makes the projection the correct length, and that projection $(\omega_i\cdot n)n$ happens to equal $\frac{1}{2}(\omega_o + \omega_i)$ because both omegas have the same length and make the same angle with $n$. (If $\omega_o$ were at a different angle or length than $\omega_i$, you can geometrically reason that $(\omega_i\cdot n)n$ would not be $\frac{1}{2}(\omega_o + \omega_i)$.)
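Here is a tiny numeric check of that identity (my own example vectors, assuming unit directions and the usual mirror formula $\omega_o = 2(\omega_i\cdot n)n - \omega_i$):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

n = np.array([0.0, 0.0, 1.0])               # unit surface normal
wi = normalize(np.array([1.0, 0.0, 1.0]))   # incoming direction, 45 degrees off n

wo = 2.0 * np.dot(wi, n) * n - wi           # mirror reflection of wi about n

# The projection of wi onto n equals half of (wo + wi):
print(np.dot(wi, n) * n)                    # [0, 0, 0.7071]
print(0.5 * (wo + wi))                      # same vector
```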

alpan commented on slide_026 of Intro to Color Science 2 ()

Here is an interesting article comparing different color spaces (Adobe RGB vs sRGB):

https://fstoppers.com/pictures/adobergb-vs-srgb-3167

It seems like one of the main reasons to use sRGB rather than a color space with a wider gamut is simply that most monitors and printers are designed to work with sRGB, so the extra vibrancy of Adobe RGB likely won't even show up on screens or prints. It also seems like Adobe RGB is a bit more complex to use for a very marginal increase in quality.

alpan commented on slide_023 of Intro to Color Science 2 ()

What determines the shape of the chromaticity plot — is there a conceptual reason why it curves so much on the left side and has a straight edge on the right side? Also why doesn't the straight edge also represent pure spectral colors like the rest of the edges?

alpan commented on slide_045 of Intro to Color Science 1 ()

Would we be able to reproduce more colors if we changed the primary color set to something different from RGB (using more than 3 colors, for example)? What would the color set have to be to have a gamut that includes all possible colors, if that is even possible?

James commented on slide_020 of Intro to Color Science 2 ()

What are we referring to when we say "standardized"? Does this mean there's some agreement on what a set of RGB values should look like (across any monitor)? If so, does that mean each monitor is calibrated to try and meet said standard? How?

James commented on slide_026 of Intro to Color Science 2 ()

When described like this, it seems like it's strictly detrimental to have a color space that takes up less area on this graph (so say, NTSC is strictly better than sRGB). What are the cons or costs of having a color space take up as much of the chromaticity diagram as possible?

lingqi commented on slide_014 of Introduction to Material Modeling ()

Yes. In that case, the reflectance is a Spectrum or RGB value less than one.
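For concreteness, a hedged sketch of what that might look like in code (hypothetical names, in a local shading frame with the normal along +z; not the project's actual API):

```python
import numpy as np

def sample_mirror(wo, reflectance):
    """Ideal (possibly tinted) mirror: reflect wo about the +z normal."""
    wi = np.array([-wo[0], -wo[1], wo[2]])
    cos_theta = abs(wi[2])
    f = reflectance / cos_theta   # the "reflectance / cosine" from the project
    pdf = 1.0                     # delta distribution, sampled exactly
    return wi, f, pdf

# A bluish mirror reflects mostly the blue channel:
wi, f, pdf = sample_mirror(np.array([0.3, 0.0, 0.954]), np.array([0.2, 0.3, 0.9]))
print(wi, f)
```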

tmmydngyn commented on slide_013 of Introduction to Material Modeling ()

Can someone elaborate on why the cosine term is necessary? Does this have something to do with Lambert's law? I'm not understanding how $n\cos\theta$ is half of $\omega_o + \omega_i$.

James commented on slide_007 of Intro to Color Science 1 ()

I really like using the classic "why is the sky blue" question as a way to organize some of the concepts introduced in this lecture.

The immediate answer usually given is "because of rayleigh scattering". High energy, short wavelengths like blue scatter more in the atmosphere, giving the sky its overall blue tint. But then the follow-up is, "why isn't the sky violet?", since violet has an even shorter wavelength than blue. There are two main reasons for this:

  1. If you look at the spectral power distribution of sunlight, you can see that it contains more blue than violet. So the sun simply doesn't emit as much violet light as it does blue.
  2. The spectral response curves for all three types of cone cells are relatively low around violet's wavelengths (380-450 nm). So our eyes are less sensitive to violet wavelengths than to blue ones.

whyalex commented on slide_014 of Introduction to Material Modeling ()

In this slide, are we assuming that the surface reflects all wavelengths of light? (In the project, we're told that the BSDF for an ideal mirror is "reflectance / cosine" instead of just 1/cosine.) The reflectance accounts for the fact that, for example, a blue mirror absorbs other wavelengths and only reflects the blue ones, right?

philkuz commented on slide_004 of Geometry Processing ()

^ old position, my apologies for the lack of clarity.

renng commented on slide_026 of Cameras and Lenses 2 ()

@jaymo: It's $A$ shown on the previous slide.

renng commented on slide_044 of Monte Carlo Integration ()

@purrtato: it's not a normalization term, but rather an algebraic term related to change of basis for the integral. We are changing basis here from integration over directions on the hemisphere earlier in lecture, to integration over the surface of all light sources.

renng commented on slide_044 of Monte Carlo Integration ()

@whyalex: thanks for examining the details! The differential solid angle is differential area on a sphere divided by radius squared. The $\cos\theta'$ term projects the surface patch here, onto a sphere centered at $p$. This is needed because the surface patch is, in general, not tangent to that sphere.
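In symbols (my notation, as I read the slide): with $x'$ a point on the light, $dA$ the differential area there, and $\theta'$ measured from the light's surface normal,

$$d\omega = \frac{\cos\theta' \, dA}{\|x' - p\|^2}$$

which is exactly "projected area divided by radius squared".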

renng commented on slide_044 of Monte Carlo Integration ()

@alpan: you can think of the integral domain being the combined surface area of all lights in the scene.

renng commented on slide_056 of Cameras and Lenses 1 ()

No rolling shutter in my eyes, at least that I have noticed! I am not entirely sure about the neuroscience, but there isn't a structured readout pattern as in image sensors; retinal neurons fire independently in response to light.

renng commented on slide_031 of Cameras and Lenses 2 ()

@whyalex. Good question. That bullet is true, approximately, with an assumption that we are at low magnification (small fraction of 1, as in most portrait or landscape photography). Why is it true? Try plugging some realistic example focal lengths and subject distances into that thin lens equation to see why!

It turns out that for these situations, the lens-sensor gap is numerically close to the focal length. So if we double the focal length, we are doubling the magnification (approximately). To decrease the size of the subject we need to move twice as far away.
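For example, here is a quick numeric check (my own example numbers, nothing from the slides) using the thin lens equation $\frac{1}{z_i} + \frac{1}{z_o} = \frac{1}{f}$ and magnification $m = z_i/z_o$:

```python
def image_distance(f, zo):
    # Thin lens equation 1/zi + 1/zo = 1/f, solved for the lens-sensor gap zi.
    return 1.0 / (1.0 / f - 1.0 / zo)

for f, zo in [(50, 3000), (100, 3000), (100, 6000)]:   # millimeters
    zi = image_distance(f, zo)
    print(f"f={f}mm, subject at {zo}mm: gap={zi:.1f}mm, m={zi / zo:.4f}")
```

Doubling f (50 to 100 mm) roughly doubles m; doubling the subject distance as well brings m back to where it started, and in every case the gap $z_i$ stays numerically close to $f$.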

renng commented on slide_034 of Cameras and Lenses 1 ()

Good observation @xxlinnchen! Think of linear perspective; it makes closer things appear larger. So when you are close to a person, that person's nose, which is closer to you, will appear larger. For this reason, it is generally considered more flattering to shoot portraits with a long focal length and from a distance. Also, this will cause a blurrier background, which helps to make the person stand out -- also good for a portrait.

jaymo commented on slide_026 of Cameras and Lenses 2 ()

Is the absolute aperture diameter supposed to be D?

purrtato commented on slide_023 of Global Illumination 1 ()

I think it has to do with the angle that it's coming in from, giving it a weaker appearance around the edges - look at the $\cos\theta$ term.

smol_cactus commented on slide_008 of Geometry Processing ()

A note on notation in the second bullet point: E is indeed $E = \frac{3}{2}T$ (each triangle has 3 edges and each edge is shared by 2 triangles), in case there is any confusion about T being in the denominator...

purrtato commented on slide_044 of Monte Carlo Integration ()

What is the intention behind trying to normalize the light? Is the outcome supposed to be less noisy?

smol_cactus commented on slide_081 of Smooth Curves and Surfaces ()

What does it mean that "A frames ensure C2 continuity"?

smol_cactus commented on slide_004 of Geometry Processing ()

Not entirely sure what "new vertices" and "old vertices" mean here.

philkuz commented on slide_034 of Geometry Processing ()

I wanted to mention that you can think of an individual Q as a dyad of the form $ww^\top$, and that for an individual $Q$ it'd be faster to multiply by the vectors that form the dyad, since evaluating $(w \cdot v)^2$ skips forming the full matrix product. However, as mentioned in Slide 38, we end up summing multiple Qs together, and the sum is not easily represented as a dyad, so it's favorable to represent Q as a matrix.
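A small numeric illustration of that tradeoff (my own example; $w$ stands in for the homogeneous plane coefficients in the quadric-error setting):

```python
import numpy as np

w = np.array([0.0, 0.6, 0.8, -1.0])   # plane (a, b, c, d) with unit normal
v = np.array([1.0, 2.0, 3.0, 1.0])    # homogeneous vertex position

Q = np.outer(w, w)                    # the dyad w w^T as an explicit 4x4 matrix
err_matrix = v @ Q @ v                # full matrix form: v^T Q v
err_dyad = np.dot(w, v) ** 2          # dyad shortcut: (w . v)^2

assert np.isclose(err_matrix, err_dyad)
# But a sum of several Q's is generally not a dyad, so the 4x4 matrix
# representation is what survives the summation on slide 38.
```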

purrtato commented on slide_038 of Smooth Curves and Surfaces ()

@cezhang - oh YES, I didn't make the connection. Thanks!

cezhang commented on slide_038 of Smooth Curves and Surfaces ()

@purrtato Would slide 22 answer your question?

cezhang commented on slide_053 of Smooth Curves and Surfaces ()

@pixelz No, this slide is correct. Why do you think it's the other way?

purrtato commented on slide_038 of Smooth Curves and Surfaces ()

Where does the Hermite matrix derive from? The Catmull-Rom inputs make a lot of sense, but I don't fully understand the Hermite matrix.
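Edit: I think I partially answered my own question. The Hermite matrix seems to fall out of inverting the constraint matrix for $P(t) = a + bt + ct^2 + dt^3$ with $P(0)=P_0$, $P(1)=P_1$, $P'(0)=T_0$, $P'(1)=T_1$. A sketch, with coefficients ordered $a, b, c, d$:

```python
import numpy as np

# Rows evaluate [1, t, t^2, t^3] (or its derivative) at t = 0 and t = 1:
C = np.array([[1, 0, 0, 0],    # P(0)  = a
              [1, 1, 1, 1],    # P(1)  = a + b + c + d
              [0, 1, 0, 0],    # P'(0) = b
              [0, 1, 2, 3]])   # P'(1) = b + 2c + 3d
print(np.linalg.inv(C))        # the Hermite basis matrix
```

The inverse comes out with rows (1,0,0,0), (0,0,1,0), (-3,3,-2,-1), (2,-2,1,1), which is the Hermite matrix for that coefficient ordering.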

pixelz commented on slide_053 of Smooth Curves and Surfaces ()

Would this slide have t and (1-t) flipped as well?

dunkin_donuts commented on slide_018 of Geometry Processing ()

Just answering the questions: (1) there are 4 extraordinary vertices after the first subdivision; (2) two of the extraordinary vertices have valence 5 and the other two have valence 3 (the outer two original vertices have 5, the inner two have 3); (3) there are zero non-quad faces.

hwl commented on slide_006 of Cameras and Lenses 3 ()

So I think C is the cutoff that we choose for the "sharpness" that we allow. Essentially we pick some C value above which the amount of blur is not acceptable. Then we use this C value to calculate the farthest and nearest objects that still appear acceptably sharp, which gives the depth of field. Thus, C is fixed once we pick it.
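To make that concrete, a hedged sketch using the standard thin-lens depth-of-field formulas (my own formulas and numbers, not necessarily the ones on the slide), with all lengths in millimeters:

```python
def dof_limits(f, N, C, s):
    """Near/far limits of acceptable sharpness when focused at distance s."""
    H = f * f / (N * C) + f                        # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

print(dof_limits(f=50.0, N=2.8, C=0.03, s=3000.0))
# Smaller C (a stricter sharpness cutoff) pulls near and far closer together.
```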

purrtato commented on slide_070 of Sampling & Antialiasing ()

Can we get verification on this answer? I don't entirely understand the normalized dimensions aspect.

aalzanki commented on slide_034 of Cameras and Lenses 2 ()

Is this a real photo or a rendering? I ask because the blurred dragon on the right looks a bit off to me. I might just be wrong, but if this is a rendering, isn't the right dragon too sharp? For example, take a look at this photo: https://www.colourbox.com/image/dandelion-on-green-grass-blur-background-image-11503201

It seems like the dragon is not as blurred or blended into the background as those flowers. Am I wrong, or is this an actual effect caused by something I am unaware of?

aalzanki commented on slide_056 of Cameras and Lenses 1 ()

Is this effect present in our eyes as well? As in, do our eyes observe light across the whole scene simultaneously, or do they just read the light at a fast enough rate for us not to notice the lag?

aalzanki commented on slide_088 of Advanced Topics in Material Modeling ()

What does the relationship between the individual grains look like? Is it a bunch of triangles that look like grains when you zoom in? Or is it actual individual grains connected in a linked-list fashion?

dunkin_donuts commented on slide_062 of Texture Mapping ()

Just to answer this question: if we assume we're halving width and height each level, the total storage is $N(1 + \frac{1}{4} + \frac{1}{16} + \cdots) = \frac{4}{3}N = O(N)$, where N is the area of the level-0 mipmap.
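A quick numeric check of that $\frac{4}{3}$ factor (my own example size, assuming a square power-of-two texture):

```python
# A 1024x1024 base texture has 11 mip levels (1024 down to 1 on a side);
# each level has 1/4 the area of the previous one.
base = 1024 * 1024
total = sum(base / 4 ** level for level in range(11))
print(total / base)  # ~1.3333, i.e. the 4/3 factor
```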

richardmeng commented on slide_033 of Introduction to Material Modeling ()

@dunkin_donuts So if you see the light reflected from the center towards the perimeter, that means the pan is brushed in a circular pattern, e.g.: http://www.beka-cookware.com/titan-non-stick-forged-aluminium-fry-pan