Can we add a denoising process like this into our raytracer to allow us to render at lower samples per pixel/light source while getting better results?
This is importance sampling like we did in the raytracing project. How can we generate an importance sampling scheme for more complicated scenes, like for points that are strongly illuminated by the reflection of a light source? Would we do a coarse sampling of all directions, and then produce an importance function based on the preliminary results of the impact of each of those sampled directions?
Since the Earth is a sphere, the total irradiance on the surface should be the same year round, with only the location of highest intensity changing. I wonder, do oceans and land absorb/reflect light differently? And if so, does the world have a summer (Northern Hemisphere summer vs. Southern Hemisphere summer) where more light is absorbed due to geographical differences? My guess would be that oceans absorb more energy, and that the Southern Hemisphere has more ocean by % of surface area, so the Southern Hemisphere summer absorbs more energy than the Northern Hemisphere summer.
It's interesting how the time-reversibility of physics allows us to cast light backwards and still generate physically accurate results. Is it even possible for this implementation to fail at simulating some feature due to its reversed nature?
Since we are organizing in 3D space, why do we not use octrees as a data structure for acceleration? My guess would be that we cannot guarantee that a ray stays within an octant of an octree once it enters, so we cannot prune the tree for each ray as well as we can with a KD-tree.
When subdividing, we have the effect of "smoothing out" some coarse mesh, as we see in this slide. What I'm wondering is whether or not the net effect of this "smoothing" can be described in one mathematical expression. For example, if we perform Loop subdivision on a cube, is there some way that we can explicitly define the effect of the "smoothing" (corner radius, spatial curvature, etc.) beyond the description of the algorithmic steps taken to reach that smoothed state? Or can the effect of the smoothing only be defined by those algorithmic steps and nothing else?
Do the physical constraints that these hooks have on the spline have any relation to the mathematical constraints that we use to construct Bezier and other curves? It seems like the hooks are just used to constrain different points along the spline to different positions, and the physical properties of the spline naturally allow it to form a curve. Could we simulate a curve that exactly emulates the behavior of a real spline?
How does the size of the pinhole affect the image produced on the wall? I would imagine that increasing the size of the pinhole would result in a brighter but blurrier image, since more light is allowed in, but from conflicting angles and positions at each point in the wall. How does this relate to modifying the different properties of a lens? Does a larger pinhole = a larger aperture?
This strikes me as very interesting, since convolutional neural networks, which are traditionally used in image classification problems, use convolution to generate inputs for FFNs. Is there then some connection between multiplication in the frequency domain and the idea of image classification?
The 10 comes from the decibel scale; it's designed such that a ×10 increase in power corresponds to +10 dB (hence the name decibel).
However, decibels are measured in terms of power, and the pixel values are a magnitude. Conventionally, power is proportional to magnitude squared (which doesn't make much sense in this context, but it did in EE120 when it was in terms of voltage), so the SNR is really
$10\log_{10}\left(\left(\frac{\mu}{\sigma}\right)^2\right)$
which turns into the expression on the slide when you pull the 2 out.
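As a quick sanity check of the two forms (just a sketch with made-up numbers for the mean and noise standard deviation):

```python
import numpy as np

mu, sigma = 100.0, 5.0  # hypothetical mean signal and noise std of the pixel values

snr_power = 10 * np.log10((mu / sigma) ** 2)  # power form
snr_mag   = 20 * np.log10(mu / sigma)         # same thing with the 2 pulled out

print(snr_power, snr_mag)  # both print ~26.02 dB
```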
Reminds me of Ants Vs. SomeBees from CS61A!
The different depths of objects in this image are cool
I am wondering the same thing as @patrickz. After finding information about the conversion online, some websites say the constant is 20 whereas others say 10.
After looking at how the bilateral filter works, I believe the technique would be useful for tasks like image smoothing or noise reduction.
I was curious about why the low-frequency parts of the image get noisier. It makes sense, as the high-pass filter that others have talked about is applied to the entire image. Thus, even small bits of noise in the original image will be amplified in the sharpened version.
Found this slide to be especially insightful just because you can denoise/add noise in photography editing apps like Lightroom. One can check out https://www.youtube.com/watch?v=qaccekBhS6E for more information
Wow! This image contains elements from the five assignments. What a creative design!
These cubes look super great! I'm really impressed by the transparent effects and shadows in this image!
This course is very rewarding and enjoyable. Thank you very much, professors and course staff, for this wonderful semester!
So there is a part B class for computer graphics? I wonder what the course content will be in that class.
I wonder how these results will change as the size of the neighborhood search window keeps increasing.
I remember noticing a similar type of artifact in video streaming, where block-like patterns appear. On the other hand, I recall that that type of artifact is different from these compression artifacts.
Which class, if any, explores the implementation of rendering systems in detail? Or, which classes should I take if I wanted further guidance on implementing an efficient, general purpose renderer?
Though not strictly related to graphics, we had briefly discussed simulating flocking birds (and group simulations generally) in relation to physical simulation. Does anybody know of any classes that expand on these ideas? I'm particularly interested in the simulation of traffic.
The microfacet metal effect is really cool on a humanoid character! Reminds me of a game that's being developed right now called Skate Story. The model in the preview is similarly polygonal and faceted, but I think contains more of a transparent and crystalline/iridescent effect.
@nathalyspham I think other applications for edge detection could include object tracking for computer vision. Reducing an image to just edges allows you to see your tracked object a little more clearly!
Speaking of color reproduction, is there some way to easily make our monitors look the same without buying specific tools to measure? I know when moving from my laptop to desktop, there is a visible change in how it looks.
https://www.youtube.com/watch?v=uihBwtPIBxM Here is a cool video explaining an edge detection method!
Follow-up question, but some ellipses of equal area are still not congruent (with different major-to-minor axis ratios). Does this have any significance?
We can use zig-zag reordering and run-length encoding to further compress our image representation. Run-length encoding is a lossless compression technique that converts a list of values into a list of runs of values. This way, we can remove more redundancy in the quantized DCT coefficients.
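A simplified sketch of the idea (real JPEG encodes runs of zeros preceding each nonzero coefficient and then Huffman-codes the result, but the zig-zag + RLE step looks roughly like this; the example block values are made up):

```python
import numpy as np

def zigzag_order(n=8):
    # Index pairs of an n x n block in JPEG zig-zag order:
    # walk the anti-diagonals, alternating direction on each one.
    idx = [(i, j) for i in range(n) for j in range(n)]
    return sorted(idx, key=lambda p: (p[0] + p[1],
                                      p[0] if (p[0] + p[1]) % 2 else -p[0]))

def run_length_encode(seq):
    # Lossless RLE: collapse repeated values into (value, count) pairs.
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, c) for v, c in runs]

# Hypothetical quantized DCT block: mostly zeros after quantization.
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 50, -3, 2

scanned = [block[i, j] for i, j in zigzag_order()]
print(run_length_encode(scanned))
# -> [(50, 1), (-3, 1), (2, 1), (0, 61)]
```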
What factors contribute to a specific material's yield point? Is it possible to deform a structure back to its original form?
How do we cope with adding more joints and more degrees of freedom? Doing the math, even just adding a third joint makes the equations much more complicated.
To add on to jacklishufan, the goal of PCA isn't to make a classification of "male" or "female", but rather to find patterns in the structure, in this case a blend shape.
How are these prisms machined? It almost seems like a case where simulation is easier because it's not as subject to manufacturing issues.
The image looks so cool! It's quite the combination of good rigging and artistic expression.
I feel like the lighting on the inside of the peashooter's mouth adds to this effect that the material is very slightly transparent; unsure if this is intentional but I think it's really cool since real-life pea skins are slightly transparent as well.
The specular highlights on the visors really add to the lighting effect on the characters, and I like how the extra shadow on the wall implies the presence of another character watching the pair.
I am one of the facilitators for UCBUGG and it is offered every semester! We go through every stage in the animation pipeline: storyboards, modeling, shading, rigging, animation, lighting, and rendering using Autodesk Maya.
The black spots are black reflections from the object. If you want to get rid of them, you can increase the glossy rays and transmission rays.
If we simply remove all the springs in our Project 4 cloth simulation while keeping the self-collision feature, can we end up with something similar to the fluid grid?
Great work! Which material did you use for the peashooter leaves?
Is this video available anywhere? I'd assume you can fracture a crystal while turning it liquid block by block, while for a non-crystal you can tune down the viscosity for the whole object all at once.
For decreasing the head tracking latency, we might be able to take advantage of the human body having internal latencies. Is it possible to win some time through BCI technologies? Recent research with EEG allows crude limb action reconstruction, but they didn't present data about latency, and the accuracy is also pretty horrible (errors on the order of 10 cm). But I'm wondering whether invasive implants can achieve better results than EEG.
I agree about the lighting. I also think that everything is positioned just at the perfect place for the shot.
How do we do eye tracking in practice? I'm thinking about using a calibration system, where we ask the user to fixate on objects displayed at different distances/positions and calibrate their eye position at that moment to be the standard eye position for that distance/position. Then in the actual run, we regress the attended area's distance and position with that as a reference. Is this close to what we do in practice?
Congrats! This art made me smile when I first saw it on the voting website. What did you have to do to make the Pokemon's face not look too stretched?
I feel that everything in this logo represents this class. How hard was it to create the logo using pixel art?
Amazing work! I was really impressed by the richness of the image and your creativity. I actually missed the bird during the voting stage and I'm glad I got to see it now.
@Noppapon, yes, I think your understanding is correct. If we allow the triangles to move through 3D space, then the "sheet" of triangles would definitely bend. It's only when we constrain their movement to 2D that we see the triangles behave as a cohesive mesh.
@geos98, GPUs are definitely used for particle simulations because they can do many things in parallel. However, unlike a CPU, the GPU can really only do things in parallel if the operations are "similar" enough. In practice, this is because the GPU has wide SIMD instructions that process something like 32 numbers at once, meaning that it can do 32 parallel computations on each core as long as all the computations are identical, with the only difference being the input data. This works very well for particle simulations, where at each time step we need to calculate the exact same math equations millions of times with different inputs.
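As a loose CPU-side analogy (NumPy rather than actual GPU code, and the force model is made up): the "same equations, many inputs" pattern is exactly what maps well onto SIMD lanes / GPU threads.

```python
import numpy as np

# Hypothetical particle state: N particles, identical update rule for each one.
N = 1_000_000
pos = np.random.rand(N, 3).astype(np.float32)
vel = np.zeros((N, 3), dtype=np.float32)
gravity = np.array([0.0, -9.8, 0.0], dtype=np.float32)
dt = 1.0 / 60.0

# One simulation step: the exact same arithmetic applied to every particle.
# On a GPU, each thread/SIMD lane would handle one particle's row.
vel += gravity * dt
pos += vel * dt
```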
I am no expert, but from my understanding the idea is that the cones in our eye get used to the intensity of the color, so they "correct" for it by reducing their sensitivity to counteract the signal they are getting. Then, when the white image is shown, you see the opposite color because your eye is "uncalibrated" and the sensitivity of the cones is out of balance.
I think a part of it is the gradient, but another part is that by seeing all the slides before this one, our brain was already preconditioned to view those squares as different colors. If we were just shown this image with no other context, I don't think the effectiveness of the illusion here would be as good as it is.
Does the detector give each electron the same amount of energy as it gets "pushed" by the photon, or is there some distribution? And if so, does this depend on the frequency of the incoming photons?
To add on to the above comment: quantum mechanics had been discovered by Max Planck five years before Einstein applied it to the photoelectric effect. Furthermore, the effect itself had been known about for decades. The big contribution Einstein made was realizing that Planck's quantum mechanics could describe the phenomenon. Before that point, most physicists (including Planck) thought that the math of quantum mechanics was just made-up nonsense with no real physical backing. Einstein was the one who showed that the math of quantum mechanics really did describe reality.
One important aspect of an auto-encoder is that the internal "hidden space" has fewer dimensions than the input data. This means that the model is forced to perform dimensionality reduction on the data, which helps it generalize patterns and remove noise.
Yeah, if you look closely at the sky in the sharpened image, it is very noisy. This is because the high frequency noise in the image also gets amplified (along with the image details themselves)
Love the ceiling light source, it adds a playful effect
@Zc0in does CS280 cover CS294-164 or does it focus less on color theory?
Textures must be overlapping for this synthesis to work realistically.
Looking at the bilateral filter, it's just like a convolution, except that the filter weights depend on both the spatial distance and the intensity difference between neighboring pixels. This method seems to output cartoon-like images, which could be quite useful for cartooning characters.
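A naive, unoptimized sketch of that idea (spatial Gaussian times range/intensity Gaussian; the parameter names and defaults here are just illustrative, and something like OpenCV's cv2.bilateralFilter does this much faster):

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter for a grayscale image with values in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial weights
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: similar *intensities* count more, so edges are preserved.
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```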
The lighting is really cool! It's definitely what sells the image in my opinion, because it sets up the sinister atmosphere so well!
UCBUGG sounds like a really cool class to take! From a quick peek at the syllabus, it seems very exciting and a good way to apply skills learned in this course. Hopefully it's offered in the fall.
The first month and a half of 194-26 covers similar topics mentioned in 184. However, the projects are super rewarding and you can really make some cool results. The projects I was able to create in that class have been unrivaled so far. I'm also looking forward to taking Computational Color as I found the color science portion of this course to be very interesting.
What are the main differences between this class (184/284A) and 284B? Is it more worthwhile to take 284B or to try out the other classes?
One impressive feature of the human body is that our eyes are also able to compute a form of the Fourier transform. The photoreceptors in our eyes detect light and transmit these signals to the brain, with the cells tuned to high spatial frequencies concentrated in the fovea and the lower-frequency ones in the periphery. Moreover, as we move through the levels of visual processing, some cells are directionally or color selective, which allows them to detect lines and edges.
One interesting aspect of the DCT, which is computed for each 8x8 block of pixels, is that the Discrete Cosine Transform works best when adjacent pixels are similar, and within a block of a photograph this is often the case. Thus, the JPEG compression algorithm overall is best for smooth tonal variations rather than sharp edges or high contrast.
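A quick sketch of this using SciPy on a made-up smooth 8x8 block, just to see the energy pile up in the low-frequency corner:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 block with a smooth gradient (adjacent pixels are similar).
block = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8)) * 255 - 128

coeffs = dctn(block, norm='ortho')   # 2D DCT-II of the block
# Most of the energy sits in the low-frequency (top-left) coefficients,
# so aggressively quantizing the rest loses little visible detail.
print(np.round(coeffs[:3, :3], 1))

reconstructed = idctn(coeffs, norm='ortho')   # inverse DCT recovers the block
```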
UCBUGG was a great experience that I highly recommend to anyone interested in the mesh and animation side of this class. In hindsight, it would have been very cool to see the application of what we learned with the context from this class, especially as we discussed things like mesh integrity and lighting.
What kinds of materials were used in this scene? The way the lighting is done on top of these materials is rather interesting, especially in the shading of the leaves.
increasing the gain for an image also makes any existing hot pixels more obvious
we saw earlier how overexposure can mean loss of detail because of the oversaturation in the bright parts of the image, and here is an example of how underexposure also loses detail, but in the shadows of the image
Did you use the same material for the eyes of the peashooter as the body?
UCBUGG works with Maya, while the VR DeCal and GDD work with Unity!
The lighting really gives the red among us imposter vibes
Thank you professors and course staff for such an amazing and intriguing semester!
Thank you course staff for an amazing and interesting semester! I've had a really fantastic experience in computer graphics!
This image is awesome! It combines two techniques of shading in one image! I'm really curious about how to implement this.
Thank you to course staff for an amazing semester!
Epic lighting!
red makes you do worse on exams??
that is so realistic
@patrickrz I don't think so; I think that depends on contrast/saturation/etc. Green just helps capture more detail & reduce noise since human eyes are most sensitive to green.
It looks really good! How many triangles are in the mesh?
This is really "cool". Are the darker areas a reflection of something?
I think the difference is supposed to be in the textures of the liquids, since anisotropic filtering would be able to better capture the dynamic texture of liquid because it provides different levels of filtering in different directions. The hotpot texture appears more detailed and clear than the tea.
I found this interesting visualization of the YCbCr space online: https://gpsanimator.com/gif1/colorCube50_YCbCr.html. The creator developed multiple models that helped me understand the different values (Y, Cb, Cr) as well as a 3D mapping of the color space.
The pea shooter looks really good! What base mesh did you use?
This ice cube looks really nice, and I was wondering how the subsurface scattering model created the darker contents of the ice cube, if the light is simply refracted away from the camera.
What is the relative difficulty of the graduate level courses when compared to this class in terms of workload?
These characters look a little interesting! They seem to be inspired by a certain game...
+1 on taking UCBUGG; the class gives a lot of hands-on experience with modeling, which covers a lot of graphics applications.
The radial basis function is defined on the whole R^2 plane, while we only take a subset, so in my opinion the kernel should sum to less than 1. I am wondering why the kernel on this slide sums to exactly 1 and how we get that.
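My guess (just an assumption about what the slide is doing): the truncated, discretized kernel is explicitly renormalized by dividing by its own sum, so the weights total exactly 1 regardless of where it was cut off. A small sketch:

```python
import numpy as np

def normalized_gaussian_kernel(radius, sigma):
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Continuous 2D Gaussian density: integrates to 1 over all of R^2,
    # so sampling and truncating it gives weights summing to slightly less than 1.
    k = np.exp(-(xs**2 + ys**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    print("before renormalization:", k.sum())
    return k / k.sum()  # divide by the sum so the weights total exactly 1

k = normalized_gaussian_kernel(radius=2, sigma=1.0)
print("after renormalization: ", k.sum())  # 1.0 (up to floating point)
```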
CS182 introduced a neural network structure called the autoencoder. One type of autoencoder is the denoising autoencoder: it takes a noisy image as input and encodes it into a hidden space, and decoding the hidden representation gives a denoised version of the input image.
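A minimal sketch of what such a denoising autoencoder might look like in PyTorch (not necessarily what CS182 used; the architecture, sizes, and noise level here are made up for illustration):

```python
import torch
import torch.nn as nn

# Minimal denoising autoencoder for 28x28 grayscale images.
class DenoisingAE(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(28 * 28, 128), nn.ReLU(),
                                     nn.Linear(128, hidden))          # bottleneck
        self.decoder = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(),
                                     nn.Linear(128, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(16, 1, 28, 28)                      # stand-in batch of clean images
noisy = (clean + 0.2 * torch.randn_like(clean)).clamp(0, 1)

# Train to map the noisy input back to the clean target.
loss = nn.functional.mse_loss(model(noisy), clean)
opt.zero_grad(); loss.backward(); opt.step()
```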
An interesting discovery in deep learning architectures: convolutions behave more like high-pass filters, while multi-head self-attention serves more as a low-pass filter. Paper details here. This actually matches intuition in some sense, as CNNs normally focus on local high-frequency features, while ViT-style architectures put attention on global low-frequency features. But the actual design is much more complicated than the intuition. Correct me if my intuition is wrong.
Sharpening is basically done by adding a signal proportional to a high-pass filtered version of the image to the original one.
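That's the classic "unsharp masking" trick. A small sketch (sigma and amount are just illustrative knobs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back a high-pass (original minus blurred) signal."""
    low_pass = gaussian_filter(img, sigma)
    high_pass = img - low_pass
    return img + amount * high_pass

# img = some float image in [0, 1]; a larger `amount` means stronger sharpening.
```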
Is this technique similar to Anisotropic Diffusion (developed by Berkeley's Jitendra Malik!), where an area's color gradient magnitude and direction determine its diffusion?
JPEG is a lossy image compression format that is designed to efficiently compress photographic images, which have smooth color gradients and complex patterns. However, text is different from photographic images in that it typically has very high contrast, which makes the compressed result look blurry and distorted.
I really like this design! I like how many of the project components made it onto the design (the flag itself is from Project 1 texture mapping, Project 2 teapot, Project 3 bunny). Great work!
Is there significance to the SNR having a constant of 20?
My team and I are currently using 3D procedural noise as a way to influence world generation for our final project. It is very interesting how we can take something like Perlin noise and use it as a scaffold for pseudorandom feature generation.
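For anyone curious, here's a tiny value-noise sketch (a simpler cousin of Perlin noise) using the sine-hash trick that shows up in a lot of shader examples; all names and constants are just illustrative. Summing octaves at increasing frequency (fBm) gives the kind of heightmap you can use as a scaffold for features:

```python
import numpy as np

def value_noise(x, y, seed=0.0):
    """Smooth pseudorandom noise in [0, 1): hash lattice points, then blend."""
    hash01 = lambda i, j: (np.sin(i * 127.1 + j * 311.7 + seed) * 43758.5453) % 1.0
    ix, iy = np.floor(x), np.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep fade
    n00, n10 = hash01(ix, iy),     hash01(ix + 1, iy)
    n01, n11 = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
    return (n00 * (1 - fx) + n10 * fx) * (1 - fy) + (n01 * (1 - fx) + n11 * fx) * fy

def fbm(x, y, octaves=4):
    """Fractal Brownian motion: sum noise octaves at doubling frequency."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp, freq = amp * 0.5, freq * 2.0
    return total

xs, ys = np.meshgrid(np.linspace(0, 8, 256), np.linspace(0, 8, 256))
heightmap = fbm(xs, ys)   # threshold/scale this to drive feature placement
```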
Does the fact that there are more green pixels than red or blue pixels mean that when a camera captures green, it is the most "vibrant" color in the gamut in terms of how it appears on the final image?