The Oculus Rift requires split-screen stereo rendering with per-eye distortion correction to cancel the distortion introduced by its lenses. There are two ways to do it:
- SDK distortion rendering: the library takes care of frame timing, distortion rendering, and buffer swap. Developers provide low-level device and texture pointers to the API and instrument the frame loop with ovrHmd_BeginFrame and ovrHmd_EndFrame.
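
  The instrumented frame loop might look roughly like this (pseudocode; exact signatures and setup calls vary across SDK versions, so treat the details as illustrative):

  ```
  // once at startup: hand the SDK the device and eye render textures
  // (platform-specific pointers), then each frame:
  ovrHmd_BeginFrame(hmd, frameIndex);         // SDK records frame timing
  for each eye:
      render the scene for that eye into its texture
  ovrHmd_EndFrame(hmd, ...);                  // SDK applies distortion and swaps buffers
  ```
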
- Client distortion rendering: the application renders the distortion itself. Distortion rendering is mesh-based: the distortion is encoded in the mesh's vertex data rather than as an explicit function evaluated in the pixel shader. The Oculus SDK generates a mesh whose vertices and UV coordinates warp the source render-target image into the final buffer. The SDK also provides explicit frame-timing functions used to support timewarp and prediction.