It's super trippy to think that the way you see a color may not be the same way someone else sees that color. Really makes you wonder how other people see the world...
One really cool effect I learned about in my photography class that can be created with long-exposure photography is the ghost effect. This can be created by having a person stand in an area for a while and then quickly move out of the camera's view while the camera takes the picture. This creates a transparent image of the person, making them look like a ghost.
Do people still use this in the modern day?
Tilt Brush is one of the funnest apps I've tried in VR.
From my experience, these can be a pain to set up (with getting the right distance between the sensors and between you and the sensors, etc.)
Bought myself a Google Cardboard, but after the initial novelty wore off, it just doesn't seem great enough that I want to pick it up and play with it again.
@bchee, what did he say is the cause of motion sickness?
@dtseng, on the Oculus Rift, they have a feature where you can adjust the distance between the lenses so that you can find the sweet spot where it's focused for you.
More information about challenges in virtual reality like motion sickness can be found in this article: https://en.wikipedia.org/wiki/Virtual_reality_sickness
Is it ever possible to see the "invisible" colors?
Ren talks about this during the 3rd Annual Berkeley AR/VR Symposium re. Oz Vision!
https://www.youtube.com/watch?v=pJ6YV9-qxmg
Pretty awesome!
Here's an interesting journal article describing an exposure renderer: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0038586
Here are some amazing long-exposure photographs, along with an explanation of how to achieve such effects: https://create.adobe.com/2013/10/1/long_exposure_photography_of_toby_harriman.html
It's possible that a crazy alternative to numerical integration for simulating a physical system would be to simulate a probabilistic model from statistical mechanics that exhibits the correct dynamics in its scaling limit, and then perturb that system slightly to match a real physical scenario.
Here's an interesting guide to high-speed photography. "Points to remember: shoot in dark room; small aperture; manually focus; flashes" https://digital-photography-school.com/high-speed-photography-fundamentals/
One interesting way to simulate such traffic scenarios might be to simulate random particle systems that generate similar dynamics in certain limits; e.g., see http://simulations.lpsm.paris/asep/ and http://www.math.columbia.edu/department/thera/slides/Corwin.pdf
Fascinatingly, these models exhibit beautiful and complicated mathematical and physical phenomena, and some of their properties could be used to simulate real systems.
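As a toy illustration of the kind of model in those links, here's a minimal sketch (my own, not taken from the linked material) of a TASEP-style simulation: particles on a 1D ring hop to the right only when the next site is empty, which already produces traffic-jam-like density waves.

```python
import random

def tasep_step(lattice):
    """One sweep of a totally asymmetric simple exclusion process (TASEP).
    lattice[i] == 1 means site i is occupied; a particle hops right only if
    the target site is empty (periodic boundary for simplicity)."""
    n = len(lattice)
    # visit sites in random order to avoid a fixed update bias
    for i in random.sample(range(n), n):
        j = (i + 1) % n
        if lattice[i] == 1 and lattice[j] == 0:
            lattice[i], lattice[j] = 0, 1
    return lattice

# start with a traffic-jam-like block of particles and watch it spread
lattice = [1] * 20 + [0] * 30
for t in range(100):
    lattice = tasep_step(lattice)
print(lattice)
```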
"Pure diffuse surfaces are only theoretical, but they makes a good approximations of what we can find in the real world." This blog offers a detailed explanation of hemisphere uniform sampling, the part discussing rejection sampling is particularly interesting. https://blog.thomaspoulet.fr/uniform-sampling-on-unit-hemisphere/
"In practice, the vertex of this triangle is rarely located at the mechanical front of the lens, from which working distance is measured, and should only be used as an approximation unless the entrance pupil location is known." https://www.edmundoptics.com/resources/application-notes/imaging/understanding-focal-length-and-field-of-view/
I'm wondering what the differences are between headsets from different companies.
https://www.quora.com/Why-is-light-field-4d This discussion thread offers a detailed explanation of why the light field is 4D: the full plenoptic function is 5D, but radiance is constant along a ray in free space, so a 4D light field is what our visual perception actually captures.
I found another interesting article (https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) which explains CNN comprehensively with detailed graphs.
How do you build sensors that can accurately track the gaze of a person?
Do current display technologies have a high enough pixel density such that VR is not limited by them?
How is the lag in acquiring marker IDs accounted for?
In the early days of video games and computer graphics, much of the assembly optimization was done by hand--Michael Abrash's Graphics Programming Black Book provides some incredible insight into the topic, and contains lessons about the benefits and pitfalls of optimizations that are still quite relevant today.
I was curious as to why this is the definition we define reality by. It seems to me like our reality is defined by a whole host of senses but we like to limit VR to only being defined by our sight. Is it because this is the easiest sense by which we're able to "switch" in and out of virtual and real reality or is it some other reason?
For head-mounted displays, does it only matter that we engage our sight, or is it more that sight lets us immerse ourselves better? I'd imagine that including our other senses would obviously result in more immersion, but what about something like taste or touch? Has there been any research or technology that tries to engage those senses? It seems like that would be the next huge step in VR.
I'm not confident, but the "Tex" blocks could be texture mapping units if it's just a representation of the processor itself. VRAM is definitely an important consideration when purchasing a GPU, however. On a related note, is there a reason why VRAM is GDDR5 versus the standard DDR4 for regular RAM?
Is there ever a point at which including more and more frames results in a loss of efficiency or some other tradeoff? I understand why we would get closer to the actual value with higher frame rates, but I'd imagine we'd either need extremely expensive hardware or have to give something else up to compensate for that.
The Unity demo from @fywu85 is incredibly impressive. I'm still in awe regarding the real-time aspect, although real-time when you know all the inputs beforehand seems a step removed from real-time in a true video game sense, where you can interact with the environment.
For accommodation, I get that the eye is focusing at objects at different distances, but what does "optical power" mean?
I remember going to a Maker Faire in 2012 and wearing a massive backpack, putting on some crude goggles, and holding modified Wii remotes to participate in a makeshift VR experience. Just 3 years later at SIGGRAPH, I got to try the Oculus Crescent Bay headset, and it's clear that progress is moving faster than ever.
While passive motion capture is definitely a less complicated scenario than active, I think the retroreflective markers present obstacles in clearly identifying which marker is which, and the noise in data collection could lead to motion disruptions.
It seems that blocking the visible light spectrum means there is less noise when trying to identify the tracking dots for motion capture. My main question is whether it is possible to build a sensor that excludes the visible light spectrum without the need for a filter.
Even though the study of photography and cameras is relatively new (around 200 years old), the study of optics is a very old field. In fact, an Arab scholar named Ibn al-Haytham (c. 965-1040) invented the camera obscura. This was a precursor to the pinhole camera and demonstrates how light can project an image onto a flat surface.
Why is an affine transformation called "affine"? Affine means preserving parallel relationships: after an affine transformation, points map to points, lines to lines, and planes to planes, and sets of parallel lines stay parallel. This kind of geometric transformation is useful for correcting non-ideal camera angles.
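As a quick sanity check of the "parallel lines stay parallel" property, here's a small NumPy toy example of my own that applies an arbitrary 2D affine transform to two parallel segments and verifies their images are still parallel:

```python
import numpy as np

# an arbitrary 2D affine transform: x' = A @ x + t
A = np.array([[1.5, 0.7],
              [0.2, 0.9]])
t = np.array([3.0, -1.0])

def affine(points):
    """Apply the affine map to an (N, 2) array of points."""
    return points @ A.T + t

# two parallel segments (same direction vector (2, 1))
seg1 = np.array([[0.0, 0.0], [2.0, 1.0]])
seg2 = np.array([[1.0, 3.0], [3.0, 4.0]])

d1 = np.diff(affine(seg1), axis=0)[0]
d2 = np.diff(affine(seg2), axis=0)[0]

# the 2D cross product is ~0 iff the transformed segments are still parallel
print(d1[0] * d2[1] - d1[1] * d2[0])  # prints 0.0 (up to floating point)
```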
Subsurface scattering is used to describe the light-scattering effect that can occur as a light ray passes through a translucent or semi-translucent surface. This scattering effect is oftentimes used in video games to create realistic materials. In fact, Unreal Engine 4 actually offers a special shading model called subsurface scattering, specifically designed for material interaction involving skin or wax.
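This is not UE4's actual shading model, but one classic cheap trick that fakes part of the soft, light-bleeding look of subsurface scattering is "wrap" diffuse lighting, where the diffuse term is remapped so light wraps past the terminator; a rough sketch under that assumption:

```python
import numpy as np

def wrap_diffuse(normal, light_dir, wrap=0.5):
    """Crude wrap-lighting approximation sometimes used to fake subsurface
    scattering: instead of max(0, N.L), the diffuse term is remapped so
    light 'wraps' around the terminator. wrap=0 gives ordinary Lambert."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = float(np.dot(n, l))
    return max(0.0, (ndotl + wrap) / (1.0 + wrap))

# a point facing slightly away from the light still receives some light
print(wrap_diffuse(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, -0.1])))
```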
Why is the light suddenly dark in this image, whereas it was illuminated in the direct illumination image? Is this direct + one bounce, or just the one-bounce illumination isolated?
I wonder whether, for particular applications, it's standard to use one mesh type over the other (i.e. gaming, movies). It'd be interesting to learn more about the specific advantages/disadvantages of triangular mesh vs quad mesh.
I found this video to be a good visual introduction to mipmapping, magnification, and minification: https://www.youtube.com/watch?v=8OtOFN17jxM
I thought this Medium post about a custom acceleration structure project was pretty cool and gave me some insight about tackling this in projects: https://medium.com/@bromanz/how-to-create-awesome-accelerators-the-surface-area-heuristic-e14b5dec6160
What's the distinction between BRDF and BSDF?
One approach is a surface-oriented one, which operates directly on the surface and treats the surface as a set of points/ polygons in space. Another is parameterization-based which is computationally more expensive but works for coarse resolutions.
This article talks about the math behind the forward and backward Euler methods.
http://web.mit.edu/10.001/Web/Course_Notes/Differential_Equations_Notes/node3.html
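As a tiny concrete example (my own sketch, not from the linked notes), for the linear test equation dy/dt = -k*y both methods are one line each, and the backward step can be solved in closed form because the ODE is linear:

```python
import math

def euler_compare(k=10.0, dt=0.05, steps=40, y0=1.0):
    """Integrate dy/dt = -k*y with forward and backward Euler.
    Forward:  y_{n+1} = y_n + dt * (-k * y_n)
    Backward: y_{n+1} = y_n + dt * (-k * y_{n+1})  =>  y_{n+1} = y_n / (1 + k*dt)
    """
    yf, yb = y0, y0
    for _ in range(steps):
        yf = yf + dt * (-k * yf)       # forward (explicit) step
        yb = yb / (1.0 + k * dt)       # backward (implicit) step, solved exactly
    exact = y0 * math.exp(-k * dt * steps)
    return yf, yb, exact

# with k*dt = 0.5 both decay; with dt=0.25 (k*dt = 2.5) the forward solution
# blows up while backward Euler stays stable
print(euler_compare())
print(euler_compare(dt=0.25))
```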
A principle here that wasn't discussed was "straight ahead and pose to pose" animation. Straight ahead animation begins with the first drawing and works drawing to drawing to the end of the scene, potentially losing size, volume, and proportions; however, this loss of definite characteristics is compensated by spontaneity, which is preferable for fast action scenes. The "pose to pose" component describes a method of planning out key drawings at various intervals in the scene.
A similar color space is HSL, which stands for hue, saturation, and lightness, the main difference between the two being that HSL is symmetrical to lightness and darkness, making it a bit more accurate for color approximations than HSV.
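Python's standard library happens to have both conversions, which makes the difference easy to poke at (note that colorsys calls the second one HLS and returns hue, lightness, saturation in that order):

```python
import colorsys

# a fairly bright, saturated reddish color, RGB in [0, 1]
r, g, b = 0.9, 0.2, 0.2

h_hsv, s_hsv, v = colorsys.rgb_to_hsv(r, g, b)
h_hls, l, s_hls = colorsys.rgb_to_hls(r, g, b)  # note the H, L, S order

print("HSV:", h_hsv, s_hsv, v)       # value tracks the max channel (0.9)
print("HSL:", h_hls, s_hls, l)       # lightness = (max + min) / 2 = 0.55
```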
Another way to enlarge a convolution's receptive field without extra computation is to use dilated convolutions, which add an additional parameter to the 2D kernel called the dilation rate (the spacing between the kernel elements).
https://www.saama.com/blog/different-kinds-convolutional-filters/
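To make the dilation rate concrete, here is a small NumPy sketch of my own (plain "valid" cross-correlation, not tied to any framework): the kernel taps are simply spaced `dilation` pixels apart, enlarging the receptive field without adding weights.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Naive 2D convolution (really cross-correlation, as in most DL
    frameworks) with a dilation rate: kernel taps are spaced `dilation`
    pixels apart. 'Valid' output size only."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel extent
    eff_w = (kw - 1) * dilation + 1
    out_h = image.shape[0] - eff_h + 1
    out_w = image.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0
print(dilated_conv2d(img, k, dilation=1).shape)  # (4, 4)
print(dilated_conv2d(img, k, dilation=2).shape)  # (2, 2)
```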
I found an interesting research paper that discusses a new image-based rendering technique called "concentric mosaic" for VR apps; the concentric mosaic is basically a 3-D plenoptic function with viewpoints constrained on a plane, allowing users to move freely in a circular region and observe parallax and lighting changes without recovering photometric/geometric scene models:
https://ieeexplore.ieee.org/document/1386244
I'm also curious about what the social impact of VR chat might be, assuming that it gains popularity and becomes a main method of communication. There are several cases of people leaving rude messages and such anonymously online (which they would not necessarily verbalize or say in real life), and I wonder if that might change if there exists the virtual equivalent of "face-to-face" in these interactions instead, or whether it might actually end up being worse.
VR painting definitely appears to be something that's an in-between for painting and sculpting; as noted in previous comments above, it seems to be very versatile, and incredibly useful for designing 3D objects efficiently, given the volumetric painting capabilities.
If we're talking about actual human eyes, then yes, mis-calibration between the eyes has long-term effects. Strabismus is the condition in which the eyes do not properly align when looking at an object. It can lead to a condition in which the input from one eye is essentially ignored, meaning you can lose depth perception and other binocular cues to some extent (but perhaps it is worth it to keep consistency in visual input?) For more info: https://en.wikipedia.org/wiki/Strabismus
On the mention of the possibility of the virtual camera's movements being uncomfortable: "virtual reality motion sickness" is something that definitely exists and has been noted and observed. More info here: https://www.wareable.com/vr/vr-headset-motion-sickness-solution-777
To add something else to the comment above, it would be really cool to combine this sort of interactive tool with stuff like 3D models and simulation; for example, you design 3D clothes which can then be placed onto 3D models and simulated with different kinds of material to see what works best. Probably extremely complicated, but intriguing nonetheless!
I was reading about these markers, and part of the algorithm involves detecting candidate markers in the environment and then accepting or rejecting them. So it seems the speed/lag of such a system would rely heavily on the specifics of the environment; for example, if there were a lot of square shapes around (like a particular wallpaper pattern), that could potentially cause much more lag, or even corruption if the patterns were too similar.
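For reference, OpenCV's contrib module exposes exactly that accepted/rejected candidate split; a minimal sketch, assuming the "classic" cv2.aruco API (newer OpenCV releases moved this into a cv2.aruco.ArucoDetector object) and a hypothetical input image frame.png:

```python
import cv2

# assumes opencv-contrib-python with the classic aruco API;
# newer versions wrap this in cv2.aruco.ArucoDetector instead
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
corners, ids, rejected = cv2.aruco.detectMarkers(img, dictionary, parameters=params)

# `rejected` holds square-ish candidates that failed the ID check -- a busy
# background (e.g. a tiled wallpaper) inflates this list and the runtime
print(len(corners), "accepted,", len(rejected), "rejected candidates")
```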
@orkun1675 I think this is because photon counts have a standard deviation of sqrt(lambda) for an expected lambda photons, and when lambda is small (i.e. the photo is darker), sqrt(lambda) is comparable in size to lambda, causing significant relative noise. Increasing exposure increases lambda faster than sqrt(lambda), so the relative noisiness decreases.
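A quick simulation backs this up (my own toy sketch): for Poisson-distributed photon counts the standard deviation is sqrt(lambda), so the relative noise 1/sqrt(lambda) shrinks as the exposure (and hence lambda) grows.

```python
import numpy as np

rng = np.random.default_rng(0)

for lam in [4, 100, 10000]:            # expected photons per pixel
    counts = rng.poisson(lam, size=100000)
    rel_noise = counts.std() / counts.mean()
    print(f"lambda={lam:6d}  relative noise ~ {rel_noise:.3f} "
          f"(theory 1/sqrt(lambda) = {1 / np.sqrt(lam):.3f})")
```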
I wonder what additional kinds of noise get introduced when you get a multiple camera array like this. Counteracting that would be an interesting problem.
It is certainly feasible to see within the next few years. Perhaps an innovative use case is needed to capture the public interest.
I wonder if we can still think of these as thin lenses.
To follow up on the point about the surface normals, anisotropic surfaces have unequal physical properties along different axes. This helped me understand anisotropic vs isotropic a bit more: http://www.neilblevins.com/cg_education/aniso_ref_real_world/aniso_ref_real_world.htm
I thought this article by Adobe on the different ways to implement keyframe interpolation with Adobe After Effects (https://helpx.adobe.com/after-effects/using/keyframe-interpolation.html) was a pretty cool example of how this is actually done in practice.
Pretty cool article on this history of motion-capture movies: https://screencrush.com/motion-capture-movies/
Since a light field camera is a multi-camera array, there may be a tradeoff between the cost of adding lenses to the array and capturing a more accurate light field.
Motion capture is good at capturing the face and fingers, or subtle expressions.
I remember that PlayStation requires its VR software to run at a minimum of 60 fps to ensure a good VR experience.
Slide 22 of this lecture actually raises and answers questions relevant to ^
Note that Google Tango has been retired in favor of ARCore.
How can we add stability and reduce errors in our sensor readings that all these calculations are based on? What additional sensors might we use to improve the quality of our pose + roll/pitch/yaw movement readings?
It would be interesting to see a study of the "rolling" effects/inaccuracies in these sensor measurements due to spatial movements by the wearer. Just like when taking a photo from a camera that sequentially captures from a left to right pattern gives us a "slanted" sensor image, would we see analogous effects here?
Markers like these are frequently used for sensor calibration and orientation as well, since they present easy-to-recognize patterns for cameras.
One interesting side effect of Google Cardboard/Daydream-type technologies is that you can directly see on the smartphone what your 2D screen is rendering in order to give your eyes the immersive VR perception. It basically looks like the smartphone screen is split in half, with each half showing a "fish eye" view of the scene in front of the virtual camera, with a minor difference in position for each eye (giving depth perception).
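A hand-wavy sketch of the core idea (my own toy version, not Cardboard's actual SDK): render the scene twice with the virtual camera shifted by half the interpupillary distance to each side and draw the two images side by side; the real SDK additionally applies a barrel pre-distortion that cancels the lens distortion.

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in meters

def eye_view_matrices(head_view):
    """Given a 4x4 head (center) view matrix, return the two per-eye view
    matrices, offset by +/- IPD/2 along the camera's local x axis (which
    sign is 'left' vs 'right' depends on your handedness convention).
    Each one is then rendered to its half of the phone screen."""
    def shifted(dx):
        offset = np.eye(4)
        offset[0, 3] = dx
        return offset @ head_view
    return shifted(-IPD / 2.0), shifted(+IPD / 2.0)

eye_a, eye_b = eye_view_matrices(np.eye(4))   # head at the origin
print(eye_a[0, 3], eye_b[0, 3])               # -0.032 and +0.032
```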
It does not always have to be the center; the new HTC Vive will actually track your eye movement and focus rendering on the part you are looking at.
In VR, to perceive the same effective resolution as a 4K screen, you actually need a much higher panel resolution. I think the forum thread below discusses it really well: https://forums.oculusvr.com/community/discussion/61106/what-resolution-per-eye-in-vr-to-make-it-look-like-a-1080p-monitor
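A rough back-of-the-envelope comparison (all the numbers here are my own assumptions: a 27-inch 4K monitor viewed from 60 cm, versus a headset with about 1440 horizontal pixels per eye spread over roughly 100 degrees of FOV):

```python
import math

# 27" 4K monitor at 60 cm: horizontal width ~ 0.6 m, 3840 px across
monitor_width_m, monitor_px, view_dist_m = 0.60, 3840, 0.60
monitor_fov = math.degrees(2 * math.atan(monitor_width_m / (2 * view_dist_m)))
print("monitor:", monitor_px / monitor_fov, "pixels per degree")   # ~72 ppd

# assumed headset: 1440 px per eye across ~100 degrees of horizontal FOV
headset_px, headset_fov = 1440, 100.0
print("headset:", headset_px / headset_fov, "pixels per degree")   # ~14 ppd

# to match the monitor's pixels-per-degree across 100 degrees you would need
print("needed per-eye width:", monitor_px / monitor_fov * headset_fov, "px")
```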
You probably won't notice it in a demo, but when it is actually running in a quiet room, the spinning wheels make some noise, and the shaking amplifies the sound.
From my personal experience, this setup provides the best tracking quality; in particular, the IR signal is not easily blocked by the human body since it tracks from two sides. A single lighthouse would also work, but with lower quality.
If it only detects IR, does that mean it is also capable of capturing motion, not just tracking the headset?
I have a Google Cardboard, an HTC Vive, and a Samsung Galaxy, and I am pretty sure that Cardboard definitely gives a worse experience. But the cost is so low that I think people should really try it before considering buying a real headset.
I feel like the answer is yes, and the more cameras we have, the better job we can do.
@tyleryath Maybe someday when we can directly connect our brains to computers, all our senses can be fully immersed in VR. I think for now, though, even if latency and resolution are taken care of, we need a more intuitive input method for experiences to be better.
It's interesting to remember that AR glasses have largely failed in the market thus far; wondering if maybe Apple can change that!
This is along the lines of what Facebook has been talking about doing. https://www.wired.com/story/facebook-oculus-codec-avatars-vr/ I am also very curious about the transmission of facial expressions and body language in these more serious situations as the person above mentioned. The video of the face chat in that link is interesting and made me think about the difficulty of the problem of getting the microexpressions right.
So would Google Glass have been like augmented reality then? I'm curious why they got so much pushback, to the point that they stopped selling their product, when there is clearly still interest in AR. This picture specifically would hit some of the same markers the public complained about with Google Glass.
How long will it take for these changes to get implemented? I hear conflicting opinions on whether or not VR will be the next "platform" similar to what mobile phones did in the early 2000s. I would imagine these improvements would need to be implemented in order for that to happen. Is it feasible within the next few years?
If anyone is interested in getting a clear visual understanding of what's going on in a CNN, check out this video: https://www.youtube.com/watch?v=BFdMrDOx_CM
What are some other requirements for a truly immersive VR experience?
One eye captures a roughly circular area. Because we have two overlapping fields of vision for depth perception, between our two eyes we capture a roughly elliptical shape.
In computer games and modern game consoles the FOV normally increases with a wider aspect ratio of the rendering resolution.
The visual field of the human eye spans approximately 120 degrees of arc. However, most of that arc is peripheral vision.
Our actual FOV is 114 degrees: binocular vision covers 114 degrees horizontally of the visual field in humans, and the remaining peripheral 40 degrees on each side have no binocular vision because only one eye can see those parts of the visual field.
this isn't really scientific but thought it was funny: https://edgy.app/how-lab-animals-are-using-vr
Interesting article that says adding a virtual nose reduces motion sickness: https://www.wired.com/2015/04/reduce-vr-sickness-just-add-virtual-nose/
I'm curious if it's possible to wear glasses with VR headsets since most people don't have 20/20 vision. Would this affect how the headset focuses/computes what images to show?
Do any VR headsets today incorporate foveated rendering? Or is this a new concept in VR research? I'm curious as to the side-effects of this if the eye moves very fast (perhaps too fast for the VR headset to respond).
I'm also confused by this. This almost seems worse as it would appear like the image is flashing. Is there a certain frame rate that prevents this from being noticeable?
Is this effect noticeable enough on modern VR headsets to cause discomfort? I'm curious about what refresh rates would cause this to be distracting/uncomfortable for the user.
Is this function just used to theorize about "the set of everything we see"? It seems like it is too data- and computation-intensive to compute or even approximate.
VR systems range dramatically in hardware quality, from full gaming computers with dedicated graphics (HTC Vive) to just your phone (Google Cardboard). Perhaps, since latency is not as important, this allows for smaller systems without dedicated hardware.
This is a really cool experience when used with a phone with a nice display (newer Galaxies work well), especially for a short period of time. It's amazing how much a few lenses and some software can do!
It's crazy to see how far VR has come from the earliest immersive technology, such as the Sensorama in the 1960s, which attempted to give users the experience of riding a motorcycle by simulating noise, wind, smell, and view. There is some pretty interesting current research being done in this kind of immersive technology, called Augmented Reality Flavors, where people can change the taste of something they're eating. Here's a demo from the University of Tokyo: https://www.youtube.com/watch?v=qMyhtrejct8&feature=youtu.be
I helped work on the YI Halo (https://vr.google.com/jump/) a couple of years ago. The cameras were basically (cheap knockoff) GoPros in a ring, with the video capture tightly synchronized; there wasn't that much fancy stuff going on hardware-wise!