Thanks to Steve Marschner too!
Could you potentially sample something like this by finding the gradient, then picking random points in the bounding box and using Newton's method to map them to points on the torus?
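Roughly, yes. Here is a toy sketch of that idea (the torus radii and the projection scheme are my own assumptions): sample a point in the bounding box, then repeatedly step along the gradient by the Newton ratio f/|∇f|² until the point lands on the level set f = 0.

```python
import math
import random

# Assumed torus: (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 = 0
R, r = 2.0, 0.5  # major and minor radii (made-up values)

def f(p):
    x, y, z = p
    q = math.hypot(x, y)
    return (q - R) ** 2 + z * z - r * r

def grad_f(p):
    x, y, z = p
    q = math.hypot(x, y)
    s = 2.0 * (q - R) / q  # q > 0 almost surely for random points
    return (s * x, s * y, 2.0 * z)

def project(p, iters=200, tol=1e-12):
    # Newton-style step toward the level set f = 0:
    #   p <- p - f(p) * grad_f(p) / |grad_f(p)|^2
    for _ in range(iters):
        v = f(p)
        if abs(v) < tol:
            break
        g = grad_f(p)
        n2 = g[0] * g[0] + g[1] * g[1] + g[2] * g[2]
        p = tuple(pi - v * gi / n2 for pi, gi in zip(p, g))
    return p

random.seed(0)
pts = [project((random.uniform(-2.5, 2.5),
                random.uniform(-2.5, 2.5),
                random.uniform(-0.5, 0.5)))
       for _ in range(100)]
```

One catch: the projected points land on the torus but are not uniformly distributed over it, so for unbiased sampling you would still need to weight or reject samples according to the distortion of the projection.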
Notice that the sky in the second picture is noisier. That's one of the drawbacks of sharpening.
Another form of Monte Carlo integration involves sampling (x, y) points and checking whether each y lands under or over the curve. This is useful if you only have an implicit representation of the function, e.g. the surface is defined by f(x, y) = 0, since you can just check the sign of f at each sample. In this case, for y samples ranging from c to d, the integral is approximately (b-a)(d-c)(fraction of samples under the surface).
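A minimal sketch of this "hit or miss" estimator, with c = 0 and a test function I made up (f(x) = x², whose integral on [0, 1] is 1/3):

```python
import random

def mc_area(f, a, b, d, n=100_000, seed=0):
    # Estimate the integral of f over [a, b], assuming 0 <= f <= d there:
    # (b - a) * d * (fraction of (x, y) samples that land under the curve).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)
        y = rng.uniform(0.0, d)
        if y < f(x):
            hits += 1
    return (b - a) * d * hits / n

est = mc_area(lambda x: x * x, 0.0, 1.0, 1.0)  # true value: 1/3
```

For an implicit surface you would replace the `y < f(x)` test with a sign check on f(x, y).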
They might have omitted that because it's true for discrete pdfs but not necessarily for continuous ones, whereas the rest of the slide generalizes to continuous pdfs.
I think the connection is close to the focus of a parabola, since that focus is defined in terms of where rays coming from infinity converge. If we had mirrors instead of lenses, they would work exactly the same as the parabola focus. For lenses I think there's a connection but I don't remember it.
If you get messed up by the conflicting conventions, imagine setting f = infinity, which is the case for flat glass, and note that object and image are at the same location.
Note: picture not to scale. In reality the sun is large enough that we can't see the second earth on the other side.
While the undamped version can be solved with L = L0 + a cos(wt), the damped version has a decay that makes L = L0 + a e^(-bt) cos(wt). Solving motion analytically lets you avoid actually simulating physics and guarantees accuracy.
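As a tiny sketch, the closed form can be evaluated directly at any t with no time stepping at all (all parameter values below are made up):

```python
import math

def damped(t, L0=1.0, a=0.5, b=0.3, w=2.0 * math.pi):
    # Closed-form damped oscillation from the comment:
    #   L(t) = L0 + a * e^(-b t) * cos(w t)
    return L0 + a * math.exp(-b * t) * math.cos(w * t)

L_start = damped(0.0)  # L0 + a = 1.5
L_later = damped(1.0)  # amplitude has decayed by a factor of e^(-b)
```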
Navier-Stokes works as long as the fluid is neutral. If it can have significant charges, you need to use MHD equations for an accurate simulation.
@zheryu I don't think that happens often, though I'm not entirely sure why. I know that in physics, at least, you can represent almost any system with a Hamiltonian, which is a function of just position and momentum (and maybe time).
Also in physics terms (q, qdot) is called state space instead of phase space. Phase space refers to q and its canonical momentum, which is often harder to calculate but a lot more interesting. For example consider q = theta for a particle orbiting something. If the orbit isn't circular, theta dot might not be constant, but the canonical momentum (angular momentum in this case) will be.
@nagk I'd assume you set the difference between the direction the step goes and the direction of the velocity at the end of the step as a function, then find the zero of that function using Newton's method.
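A minimal 1D sketch of that idea for an implicit (backward) Euler step: the step is chosen so it matches the derivative at its endpoint, and Newton's method finds that endpoint (f, fprime, and the linear test problem below are my own choices):

```python
def implicit_euler_step(y0, h, f, fprime, iters=20):
    # Implicit (backward) Euler: solve y = y0 + h * f(y) for y, i.e. the
    # step direction matches the velocity at the END of the step.
    # Root-find g(y) = y - y0 - h * f(y) = 0 with Newton's method.
    y = y0  # initial guess: the start of the step
    for _ in range(iters):
        g = y - y0 - h * f(y)
        gprime = 1.0 - h * fprime(y)
        y -= g / gprime
    return y

k = 5.0
y1 = implicit_euler_step(1.0, 0.1, lambda y: -k * y, lambda y: -k)
# For this linear f, the step has the exact closed form y0 / (1 + h * k)
```

For this linear f Newton converges in one iteration; for a nonlinear f it typically takes a few.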
I would indeed. But that situation is more complicated than this one, since it has an inflection point in the middle, whereas this vector field has constant curvature. I would think averaging the velocities at the start of the explicit Euler step would work well in that case.
Then you have multiple variables in the equation though, which makes it harder to solve. It's more common for people to solve multiple variable DEs by reducing them to single variable higher order DEs. One example of this is calculating EM waves in media that allow currents to flow (e.g. metals, plasma, ions in water).
Some can be represented as gradient fields. In general though, velocity can't be represented as a gradient field. For example, there is no way to write a velocity field representing circular motion as the gradient of a different field (e.g. the picture on the Wikipedia link you posted).
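A quick numeric illustration of why: a gradient field has zero circulation around every closed loop, but the rotation field (-y, x) does not, so it can't be a gradient (the loop and test fields below are my own example):

```python
import math

def circulation(v, n=10_000):
    # Line integral of v around the unit circle. Any gradient field has
    # zero circulation on every closed loop, so a nonzero result proves
    # the field is not a gradient.
    total = 0.0
    for i in range(n):
        t = 2.0 * math.pi * i / n
        x, y = math.cos(t), math.sin(t)
        tx, ty = -math.sin(t), math.cos(t)  # unit tangent to the loop
        vx, vy = v(x, y)
        total += (vx * tx + vy * ty) * (2.0 * math.pi / n)
    return total

c_rot = circulation(lambda x, y: (-y, x))          # circular-motion field
c_grad = circulation(lambda x, y: (2 * x, 2 * y))  # gradient of x^2 + y^2
```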
Vector fields that can't be written as gradients of scalar fields can still be written as the sum of a gradient field and the curl of another vector field (the Helmholtz decomposition). This is useful for representing electric and magnetic fields in relativity.
How would you actually implement translucent materials like jade? I assume that some amount of light must be refracted through the figure, some is reflected, and the refracted light is tinted green.
What exactly causes the leaves in the forward scattering image to reflect light in a shiny way? Does most of the light reflect off the leaves isotropically?
When this movie came out, it was revolutionary because the dinosaurs looked so incredibly real. Even 23 years later, the CG holds up surprisingly well! In Jurassic Park, most of the dinosaur shots were actually animatronics. The full-body shots were originally planned as stop motion, but Steven Spielberg felt that movement looked too jerky, and those shots ended up being done with CGI instead.
Is this at all related to bump mapping?
Here's a code implementation of Snell's law that calculates the resulting light direction based on vector inputs: http://steve.hollasch.net/cgindex/render/refraction.txt
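The linked code is in C; here is a hedged Python sketch of the same vector form of Snell's law (my own transcription, not the linked code; it assumes d and n are unit vectors with n pointing against d):

```python
import math

def refract(d, n, eta):
    # Vector Snell's law. eta = n_incident / n_transmitted.
    # Returns the refracted unit direction, or None on total internal
    # reflection.
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return tuple(eta * di + k * ni for di, ni in zip(d, n))

s = math.sqrt(0.5)
t = refract((s, 0.0, -s), (0.0, 0.0, 1.0), 1.0 / 1.5)  # air -> glass, 45 deg
sin_t = abs(t[0])                                # should be sin(45 deg) / 1.5
tir = refract((s, 0.0, -s), (0.0, 0.0, 1.0), 1.5)  # dense -> sparse: TIR
```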
The large blue light appears speckled on the glossy surface because if you send a ray from the light toward the glossy material, there are only a few spots where it reflects perfectly toward the camera. The probability that a ray lands inside the mirror lobe (which is narrow if the surface is close to a mirror) is quite small, so the reflection of the blue light shows up as speckles.
Perfect specular reflection is basically a mirror.
In this image, the grid needs to be fine in order to capture detail on the tables. However, the courtyard space is quite empty, so a uniform grid that fine wastes a lot of storage and makes grid traversal hugely inefficient. Thus, we want to use non-uniform spatial partitions so that we reduce the cost of traversing the grid when searching for intersections with primitives.
Z-depth is also useful during surface reconstruction for fluid particle simulation. The depth will allow you to figure out which particles constitute the "surface" of the particle cloud.
Intuitively, this makes sense. As we sample more, more samples fill in the holes between outlier samples, making the image look more complete.
Light waves are electromagnetic waves that vibrate in different orientations. Polarization of light means that the wave vibrations lie on one plane. Unpolarized light can be polarized by reflection.
I'm having a hard time grasping the significance of the numerical precision. At distances this large, wouldn't the resulting image be negligibly small?
This shape is known as a cardioid. When light from a point on the rim is reflected off the inner side of the metal ring, it will form a cardioid on the table (with perfectly parallel incoming light, the caustic is actually a nephroid).
This is due to some mathematical magic involving catacaustics of circles.
The calculation uses the ratio of the solid angle subtended by the sun/moon to the full sphere to figure out the projected area on the earth. That is, it finds what fraction of the earth's surface area the subtended solid angle covers. So, multiply the total surface area, 510 Mkm2, by the ratio (60 µsr / 4π sr).
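Plugging in the numbers from the comment (units as given; this just carries out the stated arithmetic):

```python
import math

earth_area_Mkm2 = 510.0  # total surface area of the earth, in Mkm^2
omega_sr = 60e-6         # solid angle subtended by the sun/moon, in sr

# fraction of the full 4*pi sphere, times the earth's surface area
shadow_Mkm2 = earth_area_Mkm2 * omega_sr / (4.0 * math.pi)
shadow_km2 = shadow_Mkm2 * 1e6  # roughly 2400 km^2
```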
"The Great Picture" is 111 feet wide and 32 feet high, and was created in an abandoned F-18 hangar. The pinhole used to create this image was just under 6 mm in diameter, and the exposure time was 35 minutes. It took about 400 people to create.
For more information check out the wiki page:
In Loop Subdivision, we are approximating the new vertex positions instead of interpolating. Thus, there are no common vertices between the first and last representations.
Although aspherical (e.g. parabolic) profiles can converge axis-aligned rays much closer to a single point, most lenses are spherical since they are much easier and cheaper to manufacture.
^Same, why must I graduate :[
Building off of Yumi's comment: Indeed, VR headsets do not provide much in the way of peripheral vision; this tunnel vision is one of many compounding sources of trouble that lead to simulation sickness in VR.
Indeed, modelling in clumps is the way it is done in most major studios. I couldn't find the images I saw of fur-model setups for Zootopia, but here is an image illustrating the control strands for each grouped lock of Merida's hair.
The image comes from this article, and I will quote: "Merida had 1500 hand placed curves which interpolate to some 111,000 curves at final render."
Rolling-shutter artifacts show up even at higher frame rates and shutter speeds, since the shutter still sweeps across the sensor. They only go away when all the light is collected simultaneously (i.e. a global shutter).
Shallow focus is typically used to emphasize one part of the image over the rest.
One thing to note is that each dash can sort of be thought of as a frame point for the ball.
When 2D animators do not have enough frames (aka screentime) to convey this sort of timing, they often use a technique called "smearing" that is responsible for those freaky still-frames that everyone loves.
How will this method of tracking a user's gaze deal with users who have lazy eye? Would it be able to render 2 gaze points, or is it restricted to only 1 gaze point?
The Scheimpflug principle is a geometric rule that describes the orientation of the plane of focus of an optical system (such as a camera) when the lens plane is not parallel to the image plane. It is commonly applied to the use of camera movements on a view camera [wiki]
How do you determine the optimal pixel window dimension?
Exaggeration in 3D Modelling is highly encouraged and can be used to great effect as well since it makes the action more compelling to the viewer; the still frames are just as ridiculous looking though!
In addition, the ability to extend a limb is obviously physically limited in real life. The particular example above would require the body to either stretch in ways that look eerie or bend in places unrelated to the movement.
Implicit geometry makes ray tracing easier, but it makes sampling harder.
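A sketch of the "ray tracing is easier" half: intersecting a ray with an implicit surface reduces to 1D root finding on g(t) = f(o + t·d). The sphere, march step count, and bisection scheme below are my own choices:

```python
def sphere_f(p):
    # Implicit unit sphere: f(p) = |p|^2 - 1 (zero exactly on the surface)
    return p[0] * p[0] + p[1] * p[1] + p[2] * p[2] - 1.0

def ray_hit(o, d, f, t_max=10.0, steps=64):
    # March along the ray until g(t) = f(o + t*d) changes sign, then
    # refine the crossing with bisection.
    g = lambda t: f(tuple(oi + t * di for oi, di in zip(o, d)))
    t0, dt = 0.0, t_max / steps
    for _ in range(steps):
        t1 = t0 + dt
        if g(t0) > 0.0 >= g(t1):  # crossed from outside to inside
            for _ in range(60):   # bisection refinement
                tm = 0.5 * (t0 + t1)
                if g(tm) > 0.0:
                    t0 = tm
                else:
                    t1 = tm
            return 0.5 * (t0 + t1)
        t0 = t1
    return None  # no intersection found along the ray

t_hit = ray_hit((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sphere_f)   # about 2.0
t_miss = ray_hit((0.0, 0.0, -3.0), (1.0, 0.0, 0.0), sphere_f)  # None
```

Sampling points on the same surface, by contrast, has no such direct recipe, since the implicit form gives no parameterization to draw from.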