The HolographicSpace class is your portal into the holographic world. It controls full-screen rendering, provides camera data, and gives access to spatial reasoning APIs. You'll create one for your app's CoreWindow.
Creating the holographic space object is the first step in making your Windows Mixed Reality app. Traditional Windows apps render to a Direct3D swap chain created for the core window of their application view. This swap chain is displayed to a slate in the holographic UI. To make your application view holographic rather than a 2D slate, create a holographic space for its core window instead of a swap chain. Presenting holographic frames that are created by this holographic space puts your app into full-screen rendering mode.
Look for this code in the SetWindow method in AppView.cpp:
m_holographicSpace = HolographicSpace::CreateForCoreWindow(window);
The current holographic space is used in multiple places in the DirectX template.
Next, we'll dive into the setup process that SetHolographicSpace is responsible for in the AppMain class.
Your app's holographic content lives in its holographic space, and is viewed through one or more holographic cameras which represent different perspectives on the scene. Now that you have the holographic space, you can receive data for holographic cameras.
Your app needs to respond to CameraAdded events by creating any resources that are specific to that camera, like your back buffer render target view. Register this function before the app creates any holographic frames in AppView::SetWindow:
m_cameraAddedToken =
    m_holographicSpace->CameraAdded +=
        ref new Windows::Foundation::TypedEventHandler<HolographicSpace^, HolographicSpaceCameraAddedEventArgs^>(
            std::bind(&AppMain::OnCameraAdded, this, _1, _2)
            );
Your app also needs to respond to CameraRemoved events by releasing resources that were created for that camera.
m_cameraRemovedToken =
    m_holographicSpace->CameraRemoved +=
        ref new Windows::Foundation::TypedEventHandler<HolographicSpace^, HolographicSpaceCameraRemovedEventArgs^>(
            std::bind(&AppMain::OnCameraRemoved, this, _1, _2)
            );
The event handlers have some work to do to keep holographic rendering flowing smoothly, and to allow your app to render at all. Read the code and comments for the details: look for CameraAdded and CameraRemoved in your main class to understand how the m_cameraResources map is handled by DeviceResources.
Right now, we're focused on AppMain and the setup that it does to enable your app to know about holographic cameras. With this in mind, it's important to take note of the following two requirements:
1. For the CameraAdded event handler, the app can work asynchronously to finish creating resources and loading assets for the new holographic camera. Apps that take more than one frame to complete this work should request a deferral, and complete the deferral after loading asynchronously; a PPL task can be used to do asynchronous work. Your app must ensure that it's ready to render to that camera right away when it exits the event handler, or when it completes the deferral. Exiting the event handler or completing the deferral tells the system that your app is now ready to receive holographic frames with that camera included.
2. When the app receives a CameraRemoved event, it must release all references to the back buffer and exit the function right away. This includes render target views, and any other resource that might hold a reference to the IDXGIResource. The app must also ensure that the back buffer is not attached as a render target, as shown in DeviceResources::ReleaseResourcesForBackBuffer. To help speed things along, your app can release the back buffer and then launch a task to asynchronously complete any other work that is necessary to tear down that camera. The holographic app template includes a PPL task that you can use for this purpose.
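Putting these two requirements together, the template's handlers look roughly like the sketch below. The AddHolographicCamera and RemoveHolographicCamera helper names are the ones DeviceResources uses in the DirectX template; adjust the sketch to match your own resource-management code.

    void AppMain::OnCameraAdded(
        HolographicSpace^ sender,
        HolographicSpaceCameraAddedEventArgs^ args)
    {
        // Take a deferral so resource creation can finish asynchronously.
        Deferral^ deferral = args->GetDeferral();
        HolographicCamera^ holographicCamera = args->Camera;
        concurrency::create_task([this, deferral, holographicCamera]()
        {
            // Create back buffer resources for the new camera.
            m_deviceResources->AddHolographicCamera(holographicCamera);

            // Completing the deferral tells the system the app is ready to
            // receive holographic frames that include this camera.
            deferral->Complete();
        });
    }

    void AppMain::OnCameraRemoved(
        HolographicSpace^ sender,
        HolographicSpaceCameraRemovedEventArgs^ args)
    {
        // Any remaining teardown for this camera can happen asynchronously.
        concurrency::create_task([this]()
        {
            // Unload or deactivate content for this camera here.
        });

        // Release back buffer references right away, before exiting.
        m_deviceResources->RemoveHolographicCamera(args->Camera);
    }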
Your app's content must be positioned in a spatial coordinate system in order to be rendered. The system provides two primary frames of reference which you can use to establish a coordinate system for your holograms.
There are two kinds of reference frames in Windows Holographic: reference frames attached to the device, and reference frames that remain stationary as the device moves through the user's environment. The holographic app template uses a stationary reference frame by default; this is one of the simplest ways to render world-locked holograms.
Stationary reference frames are designed to stabilize positions near the device's current location. This means that coordinates further from the device are allowed to drift slightly with respect to the user's environment as the device learns more about the space around it. There are two ways to create a stationary frame of reference: acquire the coordinate system from the spatial stage, or use the default SpatialLocator. If you are creating a Windows Mixed Reality app for immersive headsets, the recommended starting point is the spatial stage, which also provides information about the capabilities of the immersive headset worn by the player. Here, we show how to use the default SpatialLocator.
The spatial locator represents the Windows Mixed Reality device. It tracks the motion of the device and provides coordinate systems that can be understood relative to its location.
m_locator = SpatialLocator::GetDefault();
Create the stationary reference frame once when the app is launched. This is analogous to defining a world coordinate system, with the origin placed at the device's position when the app is launched. This reference frame doesn't move with the device.
SpatialStationaryFrameOfReference^ m_referenceFrame = m_locator->CreateStationaryFrameOfReferenceAtCurrentLocation();
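Once created, the reference frame's CoordinateSystem property can be read each frame and used to interpret the camera poses for that frame. A rough sketch, assuming the m_holographicSpace and m_referenceFrame members shown above:

    // Each frame, get the coordinate system from the stationary reference
    // frame and use it to position content for rendering.
    HolographicFrame^ holographicFrame = m_holographicSpace->CreateNextFrame();
    SpatialCoordinateSystem^ currentCoordinateSystem =
        m_referenceFrame->CoordinateSystem;

    // The frame prediction's camera poses can now be expressed in this
    // coordinate system, for example to build view matrices.
    HolographicFramePrediction^ prediction = holographicFrame->CurrentPrediction;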
All reference frames are gravity aligned, meaning that the y axis points "up" with respect to the user's environment. Since Windows uses "right-handed" coordinate systems, the direction of the –z axis coincides with the direction the device is facing when the reference frame is created.
NOTE: When your app requires precise placement of individual holograms, use a SpatialAnchor to anchor the individual hologram to a position in the real world. For example, use a spatial anchor when the user indicates a point to be of special interest. Anchor positions do not drift, but they can be adjusted. By default, when an anchor is adjusted, it eases its position into place over the next several frames after the correction has occurred. Depending on your application, when this occurs you may want to handle the adjustment in a different manner (e.g. by deferring it until the hologram is out of view). The RawCoordinateSystem property and RawCoordinateSystemAdjusted events enable these customizations.
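For example, an anchor can be created at a position the user indicates, expressed relative to the current coordinate system. In this sketch, currentCoordinateSystem is assumed to come from your app's frame of reference, and the position is an arbitrary illustrative value:

    // Try to anchor a hologram at a world position of special interest.
    // TryCreateRelativeTo returns nullptr if the anchor can't be created,
    // for example when the device can't currently locate itself.
    SpatialAnchor^ anchor = SpatialAnchor::TryCreateRelativeTo(
        currentCoordinateSystem,
        float3(0.f, 0.f, -2.f)); // two meters in front of the origin
    if (anchor != nullptr)
    {
        // Render the hologram using anchor->CoordinateSystem, and register
        // for RawCoordinateSystemAdjusted if you need to customize how
        // adjustments are handled.
    }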
Rendering world-locked holograms requires the device to be able to locate itself in the world. This may not always be possible due to environmental conditions; when tracking is interrupted, the user may expect a visual indication. This visual indication must be rendered using reference frames attached to the device, instead of stationary to the world.
Your app can request to be notified if tracking is interrupted for any reason. Register for the LocatabilityChanged event to detect when the device's ability to locate itself in the world changes. From AppMain::SetHolographicSpace:
m_locatabilityChangedToken =
    m_locator->LocatabilityChanged +=
        ref new Windows::Foundation::TypedEventHandler<SpatialLocator^, Object^>(
            std::bind(&AppMain::OnLocatabilityChanged, this, _1, _2)
            );
Then use this event to determine when holograms cannot be rendered stationary to the world.
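A handler for this event can branch on the locator's Locatability value; the following is a sketch along those lines, with the responses left as comments for your app to fill in:

    void AppMain::OnLocatabilityChanged(SpatialLocator^ sender, Object^ args)
    {
        switch (sender->Locatability)
        {
        case SpatialLocatability::Unavailable:
            // Tracking is lost entirely: render a tracking-lost indication
            // using a reference frame attached to the device.
            break;

        case SpatialLocatability::PositionalTrackingActivating:
        case SpatialLocatability::OrientationOnly:
        case SpatialLocatability::PositionalTrackingInhibited:
            // Position is unavailable or degraded: world-locked content
            // may not render correctly, so consider attached-frame UI.
            break;

        case SpatialLocatability::PositionalTrackingActive:
            // Tracking is fully active: world-locked rendering is safe.
            break;
        }
    }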