
Comments

LucasArmand in Lecture 26: Image Processing (25)

Can we add a denoising process like this into our raytracer to allow us to render at lower samples per pixel/light source while getting better results?

LucasArmand in Lecture 12: Monte Carlo Integration (43)

This is importance sampling like we did in the raytracing project. How can we generate an importance sampling scheme for more complicated scenes, like for points that are strongly illuminated by the reflection of a light source? Would we do a coarse sampling of all directions, and then produce an importance function based on the preliminary results of the impact of each of those sampled directions?

LucasArmand in Lecture 11: Radiometry & Photometry (28)

Since the Earth is a sphere, the total irradiance on the surface should be the same year round, with only the location of highest intensity changing. I wonder: do oceans and land absorb/reflect light differently? And if so, does the world have a summer (Northern Hemisphere summer vs. Southern Hemisphere summer) where more light is absorbed due to geographical differences? My guess would be that oceans absorb more energy, and that the Southern Hemisphere is more ocean by % of surface area, so the Southern Hemisphere summer would absorb more energy than the Northern Hemisphere one.

LucasArmand in Lecture 9: Intro to Ray-Tracing & Accelerating Ray-Scene Intersection (6)

It's interesting how the time-reversibility of physics allows us to cast light backwards and still generate physically accurate results. Is it even possible for this implementation to fail at simulating some feature due to its reversed nature?

LucasArmand in Lecture 9: Intro to Ray-Tracing & Accelerating Ray-Scene Intersection (54)

Since we are organizing objects in 3D space, why do we not use octrees as a data structure for acceleration? My guess would be that we cannot guarantee that a ray stays within an octant of an octree once it enters, so we cannot prune the tree for each ray as effectively as with a KD-tree.

LucasArmand in Lecture 8: Mesh Representations and Geometry Processing (35)

When subdividing, we have the effect of "smoothing out" some coarse mesh, as we see in this slide. What I'm wondering is whether or not the net effect of this "smoothing" can be described in one mathematical expression. For example, if we perform Loop subdivision on a cube, is there some way that we can explicitly define the effect of the "smoothing" (corner radius, spatial curvature, etc.) beyond the description of the algorithmic steps taken to reach that smoothed state? Or is the effect of the smoothing only definable through those algorithmic steps and nothing else?

LucasArmand in Lecture 7: Intro to Geometry, Splines, and Bezier Curves (24)

Do the physical constraints that these hooks have on the spline have any relation to the mathematical constraints that we use to construct Bezier and other curves? It seems like the hooks are just used to constrain different points along the spline to different positions, and the physical properties of the spline naturally allow it to form a curve. Could we simulate a curve that exactly emulates the behavior of a real spline?

LucasArmand in Lecture 4: Transforms (106)

How does the size of the pinhole affect the image produced on the wall? I would imagine that increasing the size of the pinhole would result in a brighter but blurrier image, since more light is allowed in, but from conflicting angles and positions at each point in the wall. How does this relate to modifying the different properties of a lens? Does a larger pinhole = a larger aperture?

LucasArmand in Lecture 3: Sampling and Aliasing (54)

This strikes me as very interesting, since convolutional neural networks, which are traditionally used in image classification problems, use convolution to generate inputs for FFNs. Is there then some connection between multiplication in the frequency domain and the idea of image classification?

sberkun in Lecture 25: Image Sensors (74)

The 10 comes from the decibel scale, which is designed such that a ×10 increase corresponds to +10 dB (hence the name decibel).

However, decibels are defined in terms of power, and the pixel values are a magnitude. Conventionally, power is proportional to magnitude squared (which doesn't make much sense in this context, but it did in EE120 when the magnitude was a voltage), so the SNR is really

10 \log_{10} \left( \left( \frac{\mu}{\sigma} \right)^2 \right)

which turns into the expression on the slide when you pull the 2 out.
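
For a concrete check, here's a minimal numpy sketch (with made-up pixel statistics for a flat gray patch) showing that the two forms agree:

```python
import numpy as np

# Hypothetical flat patch: mean signal 100, noise sigma 5.
pixels = np.random.normal(loc=100.0, scale=5.0, size=10_000)
mu, sigma = pixels.mean(), pixels.std()

# Power form vs. the slide's magnitude form -- identical after pulling the 2 out.
snr_power = 10 * np.log10((mu / sigma) ** 2)
snr_magnitude = 20 * np.log10(mu / sigma)
print(snr_power, snr_magnitude)  # both ~26 dB
```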

Noppapon in Lecture 28: Conclusion (10)

Reminds me of Plants vs. SomeBees in CS 61A!

Noppapon in Lecture 28: Conclusion (8)

The different depths of objects in this image are cool

Noppapon in Lecture 25: Image Sensors (74)

I am wondering the same thing as @patrickrz. After finding information about the conversion online, some websites say the constant is 20 whereas others say 10.

Noppapon in Lecture 26: Image Processing (49)

After looking at how Bilateral Filter works, I believe the technique would be useful for tasks like image smoothing or noise reduction.

patrickrz in Lecture 26: Image Processing (22)

I was curious about why the low-frequency parts of the image get noisier. It makes sense, as the high-pass filter that others have talked about is applied to the entire image; thus, even small bits of noise in the original image will be amplified in the sharpened version.

patrickrz in Lecture 26: Image Processing (25)

Found this slide to be especially insightful just because you can denoise/add noise in photography editing apps like Lightroom. One can check out https://www.youtube.com/watch?v=qaccekBhS6E for more information

abdtyx in Lecture 28: Conclusion (13)

Wow! This image contains elements from the five assignments. What a creative design!

abdtyx in Lecture 28: Conclusion (9)

These cubes look super great! I'm really impressed by the transparent effects and shadows in this image!

joeyhou0804 in Lecture 28: Conclusion (0)

This course is very rewarding and enjoyable. Thank you very much, professors and course staff, for this wonderful semester!

joeyhou0804 in Lecture 28: Conclusion (4)

So there is a part B class for computer graphics? I wonder what the course content will be in that class.

joeyhou0804 in Lecture 26: Image Processing (56)

I wonder how these results will change as the size of the neighborhood search window keeps increasing.

joeyhou0804 in Lecture 26: Image Processing (15)

I remember noticing a similar type of artifact in video streaming, where block-like patterns appear. On the other hand, I also remember that that streaming artifact is different from these compression artifacts.

red-robby in Lecture 28: Conclusion (4)

Which class, if any, explores the implementation of rendering systems in detail? Or, which classes should I take if I wanted further guidance on implementing an efficient, general purpose renderer?

red-robby in Lecture 28: Conclusion (4)

Though not strictly related to graphics, we had briefly discussed simulating flocking birds (and group simulations generally) in relation to physical simulation. Does anybody know of any classes that expand on these ideas? I'm particularly interested in the simulation of traffic.

mignepo in Lecture 28: Conclusion (14)

The microfacet metal effect is really cool on a humanoid character! Reminds me of a game that's being developed right now called Skate Story. The model in the preview is similarly polygonal and faceted, but I think contains more of a transparent and crystalline/iridescent effect.

mignepo in Lecture 26: Image Processing (23)

@nathalyspham I think other applications for edge detection could include object tracking for computer vision. Reducing an image to just edges allows you to see your tracked image a little more clearly!

joeyzhao123 in Lecture 23: Color Science (122)

Speaking of color reproduction, is there some way to easily make our monitors look the same without buying specific tools to measure? I know when moving from my laptop to desktop, there is a visible change in how it looks.

joeyzhao123 in Lecture 26: Image Processing (23)

https://www.youtube.com/watch?v=uihBwtPIBxM Here is a cool video explaining an edge detection method!

andrewhuang56 in Lecture 23: Color Science (162)

Follow-up question, but some ellipses of equal area are still not congruent (with different major-to-minor axis ratios). Does this have any significance?

anzeliu in Lecture 26: Image Processing (14)

We can use zig-zag reordering and run-length encoding to further compress our image representation. Run-length encoding is a lossless compression technique that converts a list of values into a list of runs of values. This way, we can remove more redundancy in the quantized DCT coefficients.
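
As a rough illustration, here's a minimal Python sketch of run-length encoding applied to a made-up, zig-zag-ordered coefficient list:

```python
def run_length_encode(values):
    """Lossless RLE: collapse each run of equal values into a (value, count) pair."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# Zig-zag ordering tends to put the many zero high-frequency DCT
# coefficients at the end, where they collapse into a single run:
coeffs = [31, 7, 7, 2, 0, 0, 0, 0, 0]
print(run_length_encode(coeffs))  # [(31, 1), (7, 2), (2, 1), (0, 5)]
```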

Veriny in Lecture 20: Fluid Simulation (34)

What factors contribute to a specific material's yield point? Is it possible to deform a structure back to its original form?

Veriny in Lecture 18: Intro to Physical Simulation (44)

How do we cope with adding more joints and more degrees of freedom? Doing the math, even just adding a third joint makes the equations much more complicated.

Veriny in Lecture 18: Intro to Physical Simulation (19)

To add on to jacklishufan, the goal of PCA isn't to make a classification of "male" or "female", but rather to find patterns in the structure, in this case a blend shape.

Veriny in Lecture 25: Image Sensors (24)

How are these prisms machined? It almost seems like a case where simulation is easier because it's not as subject to manufacturing issues.

justin-shao in Lecture 28: Conclusion (14)

The image looks so cool! It's quite the combination of good rigging and artistic expression.

daniel-man in Lecture 28: Conclusion (10)

I feel like the lighting on the inside of the peashooter's mouth adds to this effect that the material is very slightly transparent; unsure if this is intentional but I think it's really cool since real-life pea skins are slightly transparent as well.

daniel-man in Lecture 28: Conclusion (8)

The specular highlights on the visors really add to the lighting effect on the characters, and I like how the extra shadow on the wall implies the presence of another character watching the pair.

anzeliu in Lecture 28: Conclusion (5)

I am one of the facilitators for UCBUGG and it is offered every semester! We go through every stage in the animation pipeline: storyboards, modeling, shading, rigging, animation, lighting, and rendering using Autodesk Maya.

camilapicanco in Lecture 28: Conclusion (9)

The black spots are black reflections from the object. If you want to get rid of them, you can increase the glossy rays and transmission rays.

ZiqiShi-HMD in Lecture 20: Fluid Simulation (5)

If we simply remove all the springs in our Project 4 cloth simulation while keeping the self-collision feature, can we end up with something similar to the fluid grid?

camilapicanco in Lecture 28: Conclusion (10)

Great work! Which material did you use for the peashooter leaves?

ZiqiShi-HMD in Lecture 20: Fluid Simulation (48)

Is this video available anywhere? I'd assume you can fracture a crystal while turning it liquid block by block, while for a non-crystal you can tune down the viscosity for the whole object all at once.

ZiqiShi-HMD in Lecture 21: Virtual Reality (133)

To decrease head-tracking latency, we might be able to take advantage of the fact that the human body has internal latencies. Is it possible to win some time through BCI technologies? Recent research with EEG allows crude limb-action reconstruction, but the authors didn't present data about latency, and the accuracy is also pretty bad (errors on the order of 10 cm). I'm wondering, though, whether invasive implants could achieve better results than EEG.

camilapicanco in Lecture 28: Conclusion (8)

I agree about the lighting. I also think that everything is positioned just at the perfect place for the shot.

ZiqiShi-HMD in Lecture 21: Virtual Reality (66)

How do we do eye tracking in practice? I'm thinking about using a calibration system, where we ask the user to fixate on objects displayed at different distances/positions and calibrate their eye position at that moment to be the standard eye position for that distance/position. Then in the actual run, we regress the attended area's distance and position with that as a reference. Is this close to what we do in practice?

camilapicanco in Lecture 28: Conclusion (12)

Congrats! This art made me smile when I first saw it on the voting website. What did you have to do to make the Pokemon's face not look too stretched?

camilapicanco in Lecture 28: Conclusion (13)

I feel that everything in this logo represents this class. How hard was it to create the logo using pixel art?

camilapicanco in Lecture 28: Conclusion (14)

Amazing work! I was really impressed by the richness of the image and your creativity. I actually missed the bird during the voting stage and I'm glad I got to see it now.

sharhar in Lecture 19: Intro to Physical Simulation (25)

@Noppapon, yes, I think your understanding is correct. If we allow the triangles to move through 3D space, then the "sheet" of triangles would definitely bend. It's only when we constrain their movement to 2D that we see the triangles behave as a cohesive mesh.

sharhar in Lecture 19: Intro to Physical Simulation (5)

@geos98, GPUs are definitely used for particle simulations because they can do many things in parallel. However, unlike a CPU, the GPU can really only do things in parallel if the operations are "similar" enough. In practice, this is because the GPU has massive SIMD instructions that process something like 32 numbers at once, meaning that it can do 32 parallel computations on each core as long as all the computations are identical, with the only difference being the input data. This works very well for particle simulations, where at each time-step we need to evaluate the exact same math equations millions of times with different inputs.
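
As a loose illustration of that "same math, different data" pattern (using numpy vectorization as a stand-in for the GPU's SIMD lanes):

```python
import numpy as np

# One explicit Euler time-step applied identically to every particle;
# only the input data differs, which is exactly what SIMD hardware likes.
n = 1_000_000
pos = np.random.rand(n, 3)
vel = np.zeros((n, 3))
gravity = np.array([0.0, -9.8, 0.0])
dt = 1.0 / 60.0

vel += gravity * dt   # same instruction stream for all n particles
pos += vel * dt
```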

sharhar in Lecture 23: Color Science (50)

I am no expert, but from my understanding the idea is that the cones in our eye get used to the intensity of the color, so they "correct" for it by reducing their sensitivity to counteract the signal they are getting. Then, when the white image is shown, you see the opposite color because your eye is "uncalibrated" and the sensitivity of the cones is out of balance.

sharhar in Lecture 23: Color Science (40)

I think a part of it is the gradient, but another part is that by seeing all the slides before this one, our brain was already preconditioned to view those squares as different colors. If we were just shown this image with no other context, I don't think the effectiveness of the illusion here would be as good as it is.

sharhar in Lecture 25: Image Sensors (16)

Does the detector give each electron the same amount of energy as it gets "pushed" by the photon, or is there some distribution? And if so, does this depend on the frequency of the incoming photons?

sharhar in Lecture 25: Image Sensors (10)

To add on to the above comment: quantum mechanics had been discovered by Max Planck five years before Einstein applied it to the photoelectric effect. Furthermore, the effect itself had been known for decades. The big contribution Einstein made was in realizing that Max Planck's quantum theory could describe the phenomenon. Before that point, most physicists (including Planck himself) thought that the math of quantum mechanics was just a made-up formalism with no real physical backing. Einstein was the one who showed that the math of quantum mechanics really did describe reality.

sharhar in Lecture 26: Image Processing (25)

One important aspect of an auto-encoder is that the internal "hidden space" has fewer dimensions than the input data. This means that the model is forced to perform dimensionality reduction on the data, which helps it generalize patterns and remove noise.

sharhar in Lecture 26: Image Processing (22)

Yeah, if you look closely at the sky in the sharpened image, it is very noisy. This is because the high-frequency noise in the image also gets amplified (along with the image details themselves).

starptr in Lecture 28: Conclusion (10)

Love the ceiling light source, it adds a playful effect

starptr in Lecture 28: Conclusion (4)

@Zc0in does CS280 cover CS294-164 or does it focus less on color theory?

longh2000 in Lecture 26: Image Processing (54)

Textures must be overlapping for this synthesis to work realistically.

longh2000 in Lecture 26: Image Processing (46)

When looking at the bilateral filter, it looks just like a convolution, except that the filter weights are based on both the intensity difference and the spatial difference between neighboring pixels. This method seems to output cartoon-like images, which could be quite useful for cartooning characters.
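
A minimal sketch of that idea in Python (assuming a grayscale patch and hand-picked sigmas):

```python
import numpy as np

def bilateral_weights(patch, sigma_s=2.0, sigma_r=0.1):
    """Weights for the center pixel of a (2k+1)x(2k+1) grayscale patch.

    Unlike a fixed convolution kernel, the weights depend on the patch itself:
    a spatial Gaussian multiplied by a range Gaussian on intensity differences.
    """
    k = patch.shape[0] // 2
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    intensity = np.exp(-(patch - patch[k, k])**2 / (2 * sigma_r**2))
    w = spatial * intensity
    return w / w.sum()

patch = np.random.rand(5, 5)
filtered_center = np.sum(bilateral_weights(patch) * patch)
```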

haileyswpa in Lecture 28: Conclusion (8)

The lighting is really cool! It's definitely what sells the image in my opinion, because it sets up the sinister atmosphere so well!

omaryu17 in Lecture 28: Conclusion (5)

UCBUGG sounds like a really cool class to take! From a quick peek at the syllabus, it seems very exciting and a good way to apply skills learned in this course. Hopefully it's offered in the fall.

omaryu17 in Lecture 28: Conclusion (4)

The first month and a half of 194-26 covers similar topics mentioned in 184. However, the projects are super rewarding and you can really make some cool results. The projects I was able to create in that class have been unrivaled so far. I'm also looking forward to taking Computational Color as I found the color science portion of this course to be very interesting.

haileyswpa in Lecture 28: Conclusion (4)

What are the main differences between this class (184/284A) and 284B? Is it more worthwhile to take 284B or to try out the other classes?

waleedlatif1 in Lecture 26: Image Processing (42)

One impressive feature of the human body is that our eyes also compute a form of the Fourier transform. The photoreceptors in our eyes detect light and transmit these signals to the brain, with the cells tuned to high spatial frequencies concentrated in the fovea and the lower-frequency ones in the periphery. Moreover, as we move through the levels of visual processing, some cells are directionally or color selective, which allows them to detect lines and edges.

waleedlatif1 in Lecture 26: Image Processing (18)

One interesting aspect of the DCT, which is computed for each 8x8 block of pixels, is that it works best when adjacent pixels are similar, and within a small block of a photograph this is often the case. Thus, the JPEG compression algorithm overall is best for smooth tonal variations rather than images with sharp edges or high contrast.
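
A quick way to see this, sketched in Python with a made-up smooth 8x8 block:

```python
import numpy as np
from scipy.fft import dctn

# A smooth gradient block: adjacent pixels are similar, so the 2D DCT
# concentrates almost all the energy into a few low-frequency coefficients.
x = np.linspace(0.0, 1.0, 8)
smooth_block = 255.0 * np.outer(x, x)
coeffs = dctn(smooth_block, norm='ortho')
print(np.sum(np.abs(coeffs) > 1.0))  # far fewer significant coefficients than 64
```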

alvin-xu-5745 in Lecture 28: Conclusion (5)

UCBUGG was a great experience that I highly recommend to anyone interested in the mesh and animation side of this class. In hindsight, it would have been very cool to see the application of what we learned with the context from this class, especially as we discussed things like mesh integrity and lighting.

alvin-xu-5745 in Lecture 28: Conclusion (10)

What kinds of materials were used in this scene? The way the lighting is done on top of these materials is rather interesting, especially in the shading of the leaves.

madssnake in Lecture 25: Image Sensors (81)

Increasing the gain for an image also makes any existing hot pixels more obvious.

madssnake in Lecture 25: Image Sensors (32)

We saw earlier how overexposure can mean loss of detail because of oversaturation in the bright parts of the image; here is an example of how underexposure also loses detail, but in the shadows of the image.

LuxuFate in Lecture 28: Conclusion (10)

Did you use the same material for the eyes of the peashooter as the body?

LuxuFate in Lecture 28: Conclusion (5)

UCBUGG works with Maya, while the VR DeCal and GDD work with Unity!

LuxuFate in Lecture 28: Conclusion (8)

The lighting really gives off red Among Us impostor vibes.

LuxuFate in Lecture 28: Conclusion (0)

Thank you professors and course staff for such an amazing and intriguing semester!

abdtyx in Lecture 28: Conclusion (0)

Thank you course staff for an amazing and interesting semester! I've had a really fantastic experience with computer graphics!

abdtyx in Lecture 28: Conclusion (14)

This image is awesome! It combines two techniques of shading in one image! I'm really curious about how to implement this.

egbenedict in Lecture 28: Conclusion (0)

Thank you to course staff for an amazing semester!

egbenedict in Lecture 28: Conclusion (10)

Epic lighting!

mooreyeel in Lecture 28: Conclusion (8)

red makes you do worse on exams??

mooreyeel in Lecture 28: Conclusion (9)

that is so realistic

rsha256 in Lecture 25: Image Sensors (18)

@patrickrz I don't think so; I think that depends on contrast/saturation/etc. Green just helps capture more detail and reduce noise, since human eyes are most sensitive to green.

vhlee7 in Lecture 28: Conclusion (10)

It looks really good! How many triangles are in the mesh?

vhlee7 in Lecture 28: Conclusion (9)

This is really "cool". Are the darker areas a reflection of something?

mcjch in Lecture 26: Image Processing (47)

I think the difference is supposed to be in the textures of the liquids, since anisotropic filtering would be able to better capture the dynamic texture of liquid because it provides different levels of filtering in different directions. The hotpot texture appears more detailed and clear than the tea.

mcjch in Lecture 26: Image Processing (3)

I found this interesting visualization of the YCbCr space online: https://gpsanimator.com/gif1/colorCube50_YCbCr.html. The creator developed multiple models that helped me understand the different values (Y, Cb, Cr) as well as a 3D mapping of the color space.

ld184 in Lecture 28: Conclusion (10)

The pea shooter looks really good! What base mesh did you use?

ld184 in Lecture 28: Conclusion (9)

This ice cube looks really nice, and I was wondering how the subsurface scattering BRDF created the darker contents of the ice cube if the light is simply refracted away from the camera.

ld184 in Lecture 28: Conclusion (4)

What is the relative difficulty of the graduate level courses when compared to this class in terms of workload?

ld184 in Lecture 28: Conclusion (8)

These characters look a little interesting! They seem to be inspired by a certain game...

ld184 in Lecture 28: Conclusion (5)

+1 on taking UCBUGG; the class gives a lot of hands-on experience with modeling, which is a big part of applied graphics.

EthanZyh in Lecture 26: Image Processing (30)

The radial basis function is defined on the whole R^2 plane, while we only take a subset, so in my opinion the truncated kernel should sum to less than 1. I am wondering why the kernel on this slide sums to exactly 1 and how we get that.
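
One common convention that would explain it (a sketch, not necessarily what this slide does) is to renormalize the truncated kernel so the discrete weights sum to exactly 1:

```python
import numpy as np

# Truncate a Gaussian to a (2k+1)x(2k+1) window; the raw weights don't sum to 1.
k, sigma = 2, 1.0
ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
kernel = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
print(kernel.sum())      # not 1 on its own

kernel /= kernel.sum()   # renormalize so the discrete weights sum to exactly 1
print(kernel.sum())      # 1.0
```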

EthanZyh in Lecture 26: Image Processing (25)

CS 182 introduced a neural network architecture called an autoencoder. One type is the denoising autoencoder: it takes a noisy image as input and encodes it into a hidden space; decoding the hidden representation then gives a denoised version of the input image.
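
A minimal PyTorch sketch of that structure (hypothetical layer sizes, flattened 28x28 images):

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Encode into a smaller hidden space, then decode back to image size."""
    def __init__(self, n_in=784, n_hidden=64):  # hypothetical sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
clean = torch.rand(16, 784)                                # stand-in image batch
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
# Trained to reconstruct the clean image from the noisy input:
loss = nn.functional.mse_loss(model(noisy), clean)
```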

Unicorn53547 in Lecture 26: Image Processing (42)

An interesting discovery in deep learning architectures: convolutions act more like high-pass filters, while multi-head self-attention serves more as a low-pass filter (paper details here). This actually matches intuition in some sense, as CNNs normally focus on local high-frequency features, while ViT-style architectures put attention on global low-frequency features. But the actual design is much more complicated than the intuition. Correct me if my intuition is wrong.

Unicorn53547 in Lecture 26: Image Processing (22)

Sharpening is basically done by adding a signal proportional to a high-pass filtered version of the image to the original one.
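
In Python, a minimal unsharp-masking sketch of exactly that recipe (Gaussian blur as the low-pass, made-up parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img, alpha=0.8, sigma=2.0):
    """img + alpha * high_pass, where high_pass = img - low_pass(img)."""
    low_pass = gaussian_filter(img, sigma)
    return img + alpha * (img - low_pass)

img = np.random.rand(64, 64)   # stand-in grayscale image
sharpened = sharpen(img)
```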

wangdotjason in Lecture 26: Image Processing (24)

Is this technique similar to Anisotropic Diffusion (developed by Berkeley's Jitendra Malik!), where an area's color gradient magnitude and direction determine its diffusion?

wangdotjason in Lecture 26: Image Processing (16)

JPEG is a lossy image compression format that is designed to efficiently compress photographic images, which have smooth color gradients and complex patterns. However, text is different from photographic images in that there is typically very high contrast, making the compression look blurry and distorted.

tylerhyang in Lecture 28: Conclusion (13)

I really like this design! I like how many of the project components made it onto the design (the flag itself is from Project 1 texture mapping, Project 2 teapot, Project 3 bunny). Great work!

patrickrz in Lecture 25: Image Sensors (74)

Is there significance to the SNR having a constant of 20?

tylerhyang in Lecture 5: Texture Mapping (114)

My team and I are currently using 3D procedural noise as a way to influence world generation for our final project. It is very interesting how we can take something like Perlin noise and use it as a scaffold for pseudorandom feature generation.