This is a technical discussion of how to use render textures and screen-space shaders to reduce nausea in VR. Readers should have a general understanding of programming concepts and Unity. I learned basic shaders by reading the excellent shader tutorial series by Alan Zucconi. You can find my updated shader code on GitHub here under the MIT license.
One of my favorite experiences in VR is Google Earth. Seriously, though — who wouldn’t love flying through the world like Superman, visiting any destination from the comfort of your living room? One of the reasons Google is able to pull off dynamic movement is that they pay close attention to the factors that make users sick in VR. A few months ago, I spent a few weeks prototyping and building a quick and dirty version of Google Earth’s flight mechanics. In the following discussion, I will go over one of the techniques Google Earth VR uses to reduce simulator sickness: field of view (FOV) reduction.
FOV reduction has become a popular technique used by VR games to reduce motion sickness. In addition to Google Earth VR, you may have noticed this technique in Ubisoft’s Eagle Flight. If you are interested in the intricacies of why this works, check out the scientific study Columbia released on the subject here. The gist of it is that simulator sickness is caused by a disconnect between your vestibular (inner ear) and visual sensory systems, similar to how vertigo and motion sickness affect some people. A lot of the visual information that can make you sick in VR comes from the movement you perceive in your peripheral vision. Therefore, by reducing the player’s peripheral vision, developers can significantly reduce the nausea the player experiences.
You can see an example of Google Earth cutting out peripheral vision below. (Source: YouTube, Nerd Plays: https://www.youtube.com/watch?v=6NsI6XgzSn8)
So how does Google do this? Well, I’m not exactly sure! But I jury-rigged a rough version myself in Unity, and it works pretty well, so I thought I’d share it today.
Here’s the abstract of how my system works:
- Two cameras (one for the left eye, and one for the right eye) for the “background” world. These cameras will have culling masks set such that they only render the background world’s scene objects.
- Two render textures that the background cameras render to, one for the left eye and one for the right eye.
- One script, attached to the camera, that blends the two background render textures with the default VR view by passing the inputs into a screen-space shader.
- Finally, one shader (probably poorly written given my shader skillz), written in CG, that blends between the textures and returns the final image.
Below is a rough outline of how I approached this on the Oculus Rift. Note: this is not meant to be a step-by-step tutorial because of time constraints, but it should definitely be enough to get you started. The Vive has a slightly different implementation due to how the projection matrices work on each platform, but if there is any interest, let me know and I’d be happy to walk through it.
- Create two render textures, one for each eye.
- Set the resolution of these render textures to the per eye resolution of the HMD you are using.
- If you have any objects you’d like to render outside of the scene horizon, create them, and give them a special layer.
- I used the standard OVR Camera Rig. I also wrote my shader in a way that works with single-pass rendering. Since we are adding two additional background cameras, we want to save as many render and draw calls as possible.
- Attach the background cameras to the “Left Eye Anchor” and the “Right Eye Anchor”
- You’ll need to set the FOV of each of these cameras manually to 96 degrees. (Oculus does this at runtime)
- Set the culling mask of each background camera to nothing, or to the special layer you created in the earlier step
- Set the cameras to render to the render textures you defined
- As of Unity 5.5, there was no easy way (that I knew of) to tell Unity to render both eyes from one camera onto a single texture, and therefore no way to get by with just one texture and one camera for both eyes. Hopefully someone finds a more efficient solution later.
- I programmatically define when the view begins to fade from the game world to the “background” world — although you could just as easily do this with an alpha mask. I wanted to be able to easily change the fade parameters to find the FOV that lets users see the most without feeling nauseous.
- You’ll want the script to contain variables for each render texture, as well as the start-fade and end-fade values to pass into the shader.
- I dynamically generate the material at runtime in the Awake() call, based on the shader, the float values of the fade parameters, and the render textures.
- In OnRenderImage, Blit from the source texture (the default camera’s render texture) through that material, so your shader’s output is what gets rendered to the HMD.
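As a rough sketch, the camera and render-texture setup steps above might look like the following. This is my own illustrative class, not code from the original project; the layer name "Background" and the eye-anchor wiring are assumptions you would adapt to your rig, and older Unity versions (pre-2017) expose the eye resolution under UnityEngine.VR.VRSettings instead of XRSettings.

```csharp
using UnityEngine;

// Sketch of the setup steps above. Drag the background cameras (children of
// "Left Eye Anchor" / "Right Eye Anchor" on the OVR Camera Rig) into the
// public fields in the inspector.
public class BackgroundEyeCameras : MonoBehaviour
{
    public Camera leftEyeBackground;
    public Camera rightEyeBackground;

    [HideInInspector] public RenderTexture leftEyeTexture;
    [HideInInspector] public RenderTexture rightEyeTexture;

    void Awake()
    {
        // Match the per-eye resolution of the HMD.
        int w = UnityEngine.XR.XRSettings.eyeTextureWidth;
        int h = UnityEngine.XR.XRSettings.eyeTextureHeight;
        leftEyeTexture  = new RenderTexture(w, h, 24);
        rightEyeTexture = new RenderTexture(w, h, 24);

        SetupEye(leftEyeBackground,  leftEyeTexture);
        SetupEye(rightEyeBackground, rightEyeTexture);
    }

    void SetupEye(Camera cam, RenderTexture target)
    {
        cam.fieldOfView = 96f; // Oculus sets the main camera's FOV at runtime
        // Render only the background layer; use 0 (nothing) if you have no
        // background objects.
        cam.cullingMask = LayerMask.GetMask("Background");
        cam.targetTexture = target;
    }
}
```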
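The script steps above (material generation in Awake, Blit in OnRenderImage) could be sketched like this. The shader property names ("_LeftTex", "_FadeStart", and so on) are my own placeholders, not taken from the original project; match them to whatever your shader declares.

```csharp
using UnityEngine;

// Sketch of the blending script, attached to the main VR camera.
public class FovFadeBlend : MonoBehaviour
{
    public Shader blendShader;
    public RenderTexture leftEyeTexture;
    public RenderTexture rightEyeTexture;

    // Normalized distances from the center of the view: fading begins at
    // fadeStart and the periphery is fully the background world by fadeEnd.
    [Range(0f, 1f)] public float fadeStart = 0.6f;
    [Range(0f, 1f)] public float fadeEnd   = 0.9f;

    Material blendMaterial;

    void Awake()
    {
        // Generate the material at runtime from the shader and wire up inputs.
        blendMaterial = new Material(blendShader);
        blendMaterial.SetTexture("_LeftTex", leftEyeTexture);
        blendMaterial.SetTexture("_RightTex", rightEyeTexture);
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Set the fade values every frame so they can be tweened at runtime.
        blendMaterial.SetFloat("_FadeStart", fadeStart);
        blendMaterial.SetFloat("_FadeEnd", fadeEnd);
        Graphics.Blit(source, destination, blendMaterial);
    }
}
```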
You can find the updated shader code on GitHub here under the MIT license. Feel free to use and distribute it, but if you do get some value out of it, please feel free to send me a message :). I’d love to hear if I’ve made an impact on someone’s dev cycle.
- I use a fragment shader to blend between the two render textures and the regular VR camera view.
- I look at each fragment’s distance from the center of the view, and linearly interpolate between the background camera and the center camera according to the fade parameters that are passed into the shader from the script.
- unity_StereoEyeIndex can be used to determine which eye is being rendered (0 for the left eye and 1 for the right eye) – this is very useful for choosing which render texture to use (you can lerp between the two)
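A minimal version of that fragment shader might look like the sketch below. This is my own simplified reconstruction, not the original shader: the property names are placeholders, the radial distance is measured in raw UV space (so the falloff is slightly elliptical), and a production single-pass version would need the usual stereo UV adjustments.

```shaderlab
Shader "Hidden/FovFadeBlend"
{
    Properties
    {
        _MainTex  ("Source (game world)", 2D) = "white" {}
        _LeftTex  ("Left Eye Background", 2D) = "black" {}
        _RightTex ("Right Eye Background", 2D) = "black" {}
        _FadeStart ("Fade Start", Float) = 0.6
        _FadeEnd   ("Fade End", Float) = 0.9
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex, _LeftTex, _RightTex;
            float _FadeStart, _FadeEnd;

            fixed4 frag(v2f_img i) : SV_Target
            {
                // Distance of this fragment from the center of the eye view.
                float dist = distance(i.uv, float2(0.5, 0.5));

                // unity_StereoEyeIndex is 0 for the left eye, 1 for the
                // right; lerp picks the matching background texture.
                fixed4 bg = lerp(tex2D(_LeftTex, i.uv),
                                 tex2D(_RightTex, i.uv),
                                 unity_StereoEyeIndex);

                fixed4 game = tex2D(_MainTex, i.uv);

                // 0 inside _FadeStart, 1 beyond _FadeEnd: the periphery
                // shows the background world, the center shows the game.
                float t = saturate((dist - _FadeStart) /
                                   (_FadeEnd - _FadeStart));
                return lerp(game, bg, t);
            }
            ENDCG
        }
    }
}
```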
You can combine the above techniques with your locomotion system (detecting changes in velocity in your character controller, etc.) to activate this reduced FOV mode when users are moving. It works pretty well in practice for me.
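As one hedged sketch of that last idea, you could drive the fade radius from the character controller's speed; the thresholds, tween speed, and the "_FadeStart" property name here are illustrative assumptions, not values from the original post.

```csharp
using UnityEngine;

// Sketch: narrow the FOV while the player is moving fast, and open it back
// up when they slow down, by tweening the fade-start radius on the blend
// material each frame.
public class FovFadeDriver : MonoBehaviour
{
    public Rigidbody body;          // your character controller's rigidbody
    public Material blendMaterial;  // the screen-space blend material
    public float speedThreshold = 1.5f; // m/s above which the view narrows
    public float narrowedRadius = 0.4f; // fade-start radius while moving
    public float tweenSpeed = 4f;       // radius units per second

    float currentRadius = 1f; // 1 = fade pushed to the edge (no reduction)

    void Update()
    {
        float target = body.velocity.magnitude > speedThreshold
            ? narrowedRadius
            : 1f;
        // Tween rather than snap, so the vignette eases in and out.
        currentRadius = Mathf.MoveTowards(currentRadius, target,
                                          tweenSpeed * Time.deltaTime);
        blendMaterial.SetFloat("_FadeStart", currentRadius);
    }
}
```

Tweening the radius instead of toggling it avoids a distracting pop when movement starts and stops.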