Holographic rendering enables your app to draw holograms at precise locations in the world around the user. Holograms are objects made of sound and light, and rendering enables your app to add the light.
The HoloLens displays are additive: they add light to the world. Black pixels are fully transparent, while brighter pixels are increasingly opaque. Because the light from the displays is added to the light from the real world, even white pixels are somewhat translucent.
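This additive behavior can be sketched as a simple per-channel model (an illustrative approximation, not the actual display pipeline): the perceived color is the real-world light plus the display's light, clamped to a maximum intensity.

```python
# Toy model of an additive display: the display can only add light
# on top of the real world, never subtract it.

def perceived(world, display, max_intensity=1.0):
    """Combine real-world light with additively displayed light, per channel."""
    return tuple(min(w + d, max_intensity) for w, d in zip(world, display))

room = (0.25, 0.5, 0.125)  # light already arriving from the real world

# A black pixel adds nothing, so the world shows through unchanged:
assert perceived(room, (0.0, 0.0, 0.0)) == room

# A mid-gray pixel brightens the scene, but the real-world light still
# contributes to every channel -- the hologram reads as translucent:
assert perceived(room, (0.5, 0.25, 0.5)) == (0.75, 0.75, 0.625)
```

Note that even a fully white pixel only saturates the perceived intensity; it cannot block the real-world light behind it, which is why white holograms appear somewhat translucent.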
While stereoscopic rendering provides one depth cue for your holograms, adding grounding effects can help users see more easily what surface a hologram is near. One grounding technique is to add a glow around a hologram on the nearby surface and then render a shadow against this glow. In this way, your shadow will appear to subtract light from the environment. Spatial sound can be another extremely important depth cue.
HoloLens continually tracks the position and orientation of the user's head relative to their surroundings. As your app begins preparing its next frame, the system predicts where the user's head will be at the exact moment the frame appears on the displays. Based on this prediction, the system calculates the view and projection transforms to use for that frame. Your application must use these transforms to produce correct results; if it does not, the rendered image will not align with the real world, leading to user discomfort.
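The required pattern can be sketched as a minimal per-frame loop (the type and function names here are hypothetical stand-ins, not the actual Windows Holographic API): each frame, the app requests a fresh prediction from the system and renders with exactly the transforms it receives, never reusing the previous frame's values.

```python
# Sketch of a prediction-driven render loop. The system, not the app,
# owns the view and projection transforms for every frame.

class FramePrediction:
    """Stands in for the system's prediction of where the user's head
    will be when this frame reaches the displays."""
    def __init__(self, view, projection):
        self.view = view              # view transform from the predicted head pose
        self.projection = projection  # projection from the device's calibrated optics

def render_frame(predict_for_frame, draw):
    prediction = predict_for_frame()              # ask the system every frame
    draw(prediction.view, prediction.projection)  # use the transforms unmodified
```

The key design point is that the app treats the transforms as per-frame inputs rather than state it owns: caching or modifying them would break the alignment between holograms and the real world.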
Note that to accurately predict when a new frame will reach the displays, the system is constantly measuring the effective end-to-end latency of your app's rendering pipeline. While the system will adjust to the length of your rendering pipeline, you can further improve hologram stability by keeping that pipeline as short as possible.
When rendering a frame, the system will specify the back-buffer viewport in which your application should draw. This viewport will often be smaller than the full size of the frame buffer. Regardless of the viewport size, once the frame has been rendered by the application, the system will upscale the image to fill the entirety of the displays.
For applications that cannot reliably render at 60 Hz, shrinking the viewport reduces rendering latency at the cost of increased pixel aliasing.
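The trade-off is roughly proportional to pixel area, which a short sketch makes concrete (illustrative math only; the resolution numbers below are examples, and the system upscales whatever viewport the app renders into):

```python
# Trading viewport size for latency: rendering fewer pixels shortens
# GPU time roughly in proportion to the shaded area.

def scaled_viewport(full_width, full_height, scale):
    """Shrink the render viewport by a linear scale factor (0 < scale <= 1)."""
    return (int(full_width * scale), int(full_height * scale))

def relative_pixel_cost(scale):
    """Fraction of full-resolution pixels shaded at this viewport scale."""
    return scale * scale

# Halving each viewport dimension shades only a quarter of the pixels:
assert scaled_viewport(1280, 720, 0.5) == (640, 360)
assert relative_pixel_cost(0.5) == 0.25
```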
The rendering frustum, resolution, and frame rate your app is asked to render with may also change from frame to frame and may differ between the left and right eye. For example, when mixed reality capture (MRC) is active, one eye may be rendered with a larger FOV or resolution.
For any given frame, your app must render using the view transform, projection transform, and viewport resolution provided by the system. Additionally, your application must never assume that any rendering or view parameter remains fixed from frame to frame.
Windows Holographic introduces the concept of the holographic camera. Holographic cameras are similar to the traditional camera found in 3D graphics texts: they define both the extrinsic (position and orientation) and intrinsic (e.g., field of view) camera properties used to view a virtual 3D scene. Unlike with a traditional 3D camera, however, the application does not control the position, orientation, or intrinsic properties of the camera. Rather, the position and orientation of the holographic camera are implicitly controlled by the user's movement, which is relayed to the application on a frame-by-frame basis via a view transform. Likewise, the camera's intrinsic properties are defined by the device's calibrated optics and relayed frame-by-frame via the projection transform.
In general, your app will render for a single stereo camera. However, your app must render for all active cameras. Cameras may be stereo or mono and may appear and disappear over the lifetime of your app.
A robust rendering loop should support multiple cameras and should support both mono and stereo cameras.
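Such a loop can be sketched as follows (all type and field names here are hypothetical, not the Windows Holographic API): every frame, the app iterates over the currently active cameras and draws each pose it is given, re-reading the viewport and transforms rather than assuming they carry over from the previous frame.

```python
# Sketch of a camera-agnostic render loop that handles any number of
# cameras, whether mono (one pose) or stereo (two poses).

class CameraPose:
    def __init__(self, eye, view, projection):
        self.eye = eye                  # "left", "right", or "mono"
        self.view = view
        self.projection = projection

class Camera:
    def __init__(self, poses, viewport):
        self.poses = poses              # one pose (mono) or two (stereo)
        self.viewport = viewport        # may change from frame to frame

def render_all_cameras(active_cameras, draw):
    """Render every pose of every active camera; returns the view count."""
    views = 0
    for camera in active_cameras:       # cameras may come and go at runtime
        for pose in camera.poses:       # handles both mono and stereo
            draw(camera.viewport, pose.view, pose.projection)
            views += 1
    return views
```

Because the loop never hard-codes a camera count or an eye layout, it keeps working when a mono camera (for example, one added during capture) joins the usual stereo camera.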
When rendering medical MRI data or engineering volumes, volume rendering techniques are often used.
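One of the simplest such techniques is maximum intensity projection (MIP), common for MRI-style data. A toy sketch over a small scalar grid (real volume renderers ray-march through the volume with transfer functions and shading; this only illustrates the projection step):

```python
# Maximum intensity projection: collapse a 3D scalar volume to a 2D
# image by keeping the brightest sample along each viewing ray.

def max_intensity_projection(volume):
    """Project a volume[z][y][x] grid along z, taking the max per ray."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]

# Two 2x2 slices collapse to one 2x2 image of per-ray maxima:
volume = [[[0, 1],
           [2, 3]],
          [[4, 0],
           [1, 5]]]
assert max_intensity_projection(volume) == [[4, 1], [2, 5]]
```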