Oculus Rift Dev: Day #4

8. Rendering to the Oculus Rift

The Oculus Rift requires split-screen stereo rendering with distortion correction for each eye to cancel the distortion introduced by its lenses. There are two ways of doing it:


  • SDK distortion rendering approach: the library takes care of timing, distortion rendering, and buffer swap. Developers provide low-level device and texture pointers to the API, and instrument the frame loop with ovrHmd_BeginFrame and ovrHmd_EndFrame.
  • Client distortion rendering approach: distortion is rendered by the application code. Distortion rendering is mesh-based: the distortion is encoded in mesh vertex data rather than using an explicit function in the pixel shader. The Oculus SDK generates a mesh that includes vertices and UV coordinates used to warp the source render target image to the final buffer. The SDK also provides explicit frame timing functions used to support timewarp and prediction (see the sketch below).
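For illustration, here is a minimal sketch of the mesh-based path using the 0.4-era LibOVR C API. The helper function and its parameters are my own assumptions, and struct field names may vary slightly between SDK versions:

#include "OVR_CAPI.h"

// Sketch (hypothetical helper): build the warp mesh for one eye.
void CreateEyeDistortionMesh(ovrHmd hmd, ovrEyeType eye, ovrFovPort fov,
                             ovrSizei textureSize, ovrRecti renderViewport)
{
    // Ask the SDK for the distortion mesh (positions + UVs encoding the
    // barrel distortion and chromatic aberration correction).
    ovrDistortionMesh mesh;
    ovrHmd_CreateDistortionMesh(hmd, eye, fov,
                                ovrDistortionCap_Chromatic |
                                ovrDistortionCap_TimeWarp |
                                ovrDistortionCap_Vignette,
                                &mesh);

    // UV scale/offset mapping the mesh UVs onto the source render target.
    ovrVector2f uvScaleOffset[2];
    ovrHmd_GetRenderScaleAndOffset(fov, textureSize, renderViewport,
                                   uvScaleOffset);

    // ... upload mesh.pVertexData / mesh.pIndexData to GPU buffers and
    // draw with your warp shader, then release the CPU-side copy.
    ovrHmd_DestroyDistortionMesh(&mesh);
}

Because the warp is baked into vertex data, the per-pixel cost is reduced to a few texture fetches, which is why the SDK favors this over an explicit distortion function in the pixel shader.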
8.1 Stereo rendering concepts
IPD: interpupillary distance; 65 mm on average.
Reprojection stereo rendering (generating the left and right views from a single fully rendered view) is usually not viable with an HMD because of significant artifacts at object edges.

Distortion: The lenses in the Rift magnify the image to provide a very wide field of view (FOV). To counteract the resulting pincushion distortion, the software must apply an equal and opposite barrel distortion, plus chromatic aberration correction (color separation at the edges caused by the lens).

The Oculus SDK takes care of all the necessary calculations when generating the distortion mesh, using the right parameters (which depend on lens characteristics and eye position).

When rendering for the Rift, the projection axes should be parallel to each other, and the left and right views are completely independent of one another. The two virtual cameras in the scene should be positioned so that they point in the same direction (matching the real-world head direction) and are separated by the IPD. This is done either by adding the ovrEyeRenderDesc::HmdToEyeViewOffset translation vector to the translation component of the view matrix, or by using ovrHmd_GetEyePoses, which performs this calculation internally and returns the eye poses.
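As a sketch, assuming the 0.4-era C API and an eyeRenderDesc[2] array previously filled by ovrHmd_ConfigureRendering, the second option looks like this (the helper is hypothetical):

#include "OVR_CAPI.h"

// Sketch (hypothetical helper): fetch one pose per eye for this frame.
void GetEyePosesForFrame(ovrHmd hmd, unsigned int frameIndex,
                         const ovrEyeRenderDesc eyeRenderDesc[2],
                         ovrPosef outEyePoses[2])
{
    // One IPD-sized offset per eye, taken from the render descriptions.
    ovrVector3f hmdToEyeViewOffset[2] = {
        eyeRenderDesc[0].HmdToEyeViewOffset,
        eyeRenderDesc[1].HmdToEyeViewOffset
    };

    // The SDK applies the offsets to the tracked head pose internally and
    // returns one pose per eye; add each pose's translation to the view
    // matrix of the corresponding virtual camera.
    ovrHmd_GetEyePoses(hmd, frameIndex, hmdToEyeViewOffset,
                       outEyePoses, NULL);

    // The projection axes stay parallel: each eye gets an asymmetric
    // (off-center) frustum from its FOV port, not a toed-in camera.
    ovrMatrix4f proj = ovrMatrix4f_Projection(eyeRenderDesc[0].Fov,
                                              0.1f, 1000.0f, 1 /* RH */);
    (void)proj; // pass to your renderer
}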

8.2 SDK distortion rendering

The Oculus SDK provides SDK Distortion Rendering: developers render the scene into one or two render textures and pass them into the API. Beyond that point, the Oculus SDK handles the distortion rendering.

Steps (a combined sketch covering all three follows the list):

1. Initialization
  • Modify your application window and swap chain initialization code to use the data provided in the ovrHmdDesc struct (e.g., the Rift resolution).
  • Compute the desired FOV and texture sizes based on ovrHmdDesc data.
  • Allocate textures in an API-specific way.
  • Use ovrHmd_ConfigureRendering to initialize distortion rendering, passing in the necessary API-specific device handles, configuration flags, and FOV data.
  • Under Windows, call ovrHmd_AttachToWindow to direct back buffer output from the window to the HMD.
2. Frame Handling
  • Call ovrHmd_BeginFrame to start frame processing and obtain timing information.
  • Perform rendering for each eye in an engine-specific way, rendering into render textures.
  • Call ovrHmd_EndFrame (passing in the render textures from the previous step) to swap buffers and present the frame. This function will also handle timewarp, GPU sync, and frame timing.
3. Shutdown
  • You can use ovrHmd_ConfigureRendering to change rendering parameters, or pass a null value for the apiConfig parameter to shut down SDK rendering. Alternatively, you can simply destroy the ovrHmd object by calling ovrHmd_Destroy.
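Putting the three steps together, a minimal OpenGL-flavored sketch against the 0.4-era C API might look like the following. RenderSceneForEye and WantsToQuit are hypothetical placeholders for engine-specific code, the eye textures are assumed to be created elsewhere, and the Header.RTSize field was renamed BackBufferSize in later 0.4.x releases:

#include "OVR_CAPI.h"
#include "OVR_CAPI_GL.h"

// Hypothetical engine hooks (placeholders).
bool WantsToQuit();
void RenderSceneForEye(int eye, ovrPosef pose);

void RunRiftApp(void* nativeWindow)
{
    // 1. Initialization
    ovr_Initialize();
    ovrHmd hmd = ovrHmd_Create(0);   // first detected Rift

    // Compute FOV-based render texture sizes from the HMD description.
    ovrSizei leftSize  = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left,
                                                  hmd->DefaultEyeFov[0], 1.0f);
    ovrSizei rightSize = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right,
                                                  hmd->DefaultEyeFov[1], 1.0f);
    // ... allocate one or two render textures of these sizes (API-specific)
    // and fill eyeTexture[] below with their handles and viewports.

    ovrGLConfig cfg = {};
    cfg.OGL.Header.API         = ovrRenderAPI_OpenGL;
    cfg.OGL.Header.RTSize      = hmd->Resolution;
    cfg.OGL.Header.Multisample = 0;
    // On Windows, cfg.OGL.Window / cfg.OGL.DC must also be set.

    ovrEyeRenderDesc eyeRenderDesc[2];
    ovrHmd_ConfigureRendering(hmd, &cfg.Config,
                              ovrDistortionCap_Chromatic |
                              ovrDistortionCap_TimeWarp |
                              ovrDistortionCap_Vignette,
                              hmd->DefaultEyeFov, eyeRenderDesc);

    ovrHmd_AttachToWindow(hmd, nativeWindow, NULL, NULL);   // Windows only

    // 2. Frame handling
    ovrTexture eyeTexture[2];   // assumed filled with the render textures
    for (unsigned int frameIndex = 0; !WantsToQuit(); ++frameIndex)
    {
        ovrHmd_BeginFrame(hmd, frameIndex);   // frame timing starts here

        ovrVector3f offsets[2] = { eyeRenderDesc[0].HmdToEyeViewOffset,
                                   eyeRenderDesc[1].HmdToEyeViewOffset };
        ovrPosef eyePoses[2];
        ovrHmd_GetEyePoses(hmd, frameIndex, offsets, eyePoses, NULL);

        for (int eye = 0; eye < 2; ++eye)
            RenderSceneForEye(eye, eyePoses[eye]);   // engine-specific

        // Distortion, timewarp, GPU sync, and buffer swap happen here.
        ovrHmd_EndFrame(hmd, eyePoses, eyeTexture);
    }

    // 3. Shutdown
    ovrHmd_Destroy(hmd);
    ovr_Shutdown();
}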
