Gaze is the first form of input and is used for targeting within holographic apps. Gaze tells you where the user is looking in the world and lets you determine their intent. In the real world, you typically look at an object you intend to interact with. Gaze works the same way.
HoloLens uses the position and orientation of your user's head, not their eyes, to determine their gaze vector. You can think of this vector as a laser pointer aimed straight ahead from directly between the user's eyes. As the user looks around the room, your application can intersect this ray both with its own holograms and with the spatial mapping mesh to determine which virtual or real-world object the user may be looking at.
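The head-gaze ray described above can be sketched in plain Python. This is a minimal, illustrative model, not the actual HoloLens API (which exposes the head pose through Unity's camera transform): a ray starting at the head position and pointing along the head's forward vector, intersected here with a horizontal surface standing in for a spatial-mapping hit.

```python
# Hypothetical sketch of a head-gaze ray: origin at the head position,
# direction along the head's forward vector (the head, not the eyes,
# determines gaze on HoloLens).

def gaze_ray(head_position, head_forward):
    """Return the gaze ray as an (origin, direction) pair."""
    return head_position, head_forward

def intersect_floor(origin, direction, floor_y=0.0):
    """Intersect the ray with the horizontal plane y = floor_y,
    standing in for a spatial-mapping surface. Returns the hit
    point, or None when the ray never reaches the plane."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:
        return None                      # ray parallel to the plane
    t = (floor_y - oy) / dy
    if t <= 0:
        return None                      # plane is behind the user
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Head at 1.6 m height, looking forward and angled down:
origin, direction = gaze_ray((0.0, 1.6, 0.0), (0.0, -0.5, 1.0))
hit = intersect_floor(origin, direction)  # → a point on the floor, 3.2 m ahead
```

In a real app you would instead raycast against collider geometry (for example Unity's `Physics.Raycast` from the camera's position along its forward vector), but the geometry of the test is the same.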
On HoloLens, interactions should generally derive their targeting from the user's gaze, rather than trying to render or interact at the hand's location directly. Once an interaction has started, relative motions of the hand can be used to control the gesture, as with the manipulation and navigation gestures.
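The division of labor above can be sketched as follows. The class and names are hypothetical, not part of any HoloLens API: gaze picks the target when the gesture begins, and from then on only the hand's displacement since that moment, never its absolute position, drives the manipulation.

```python
# Illustrative sketch: gaze selects the target; the hand's *relative*
# motion since the gesture started moves it.

class Manipulation:
    def __init__(self, target_position, hand_start):
        self.target_start = target_position  # hologram pose at gesture start
        self.hand_start = hand_start         # hand pose at gesture start

    def update(self, hand_now):
        """Offset the target by the hand's displacement since the start."""
        delta = tuple(h - s for h, s in zip(hand_now, self.hand_start))
        return tuple(p + d for p, d in zip(self.target_start, delta))

# Gaze targeted a hologram at (0, 1, 2); the gesture began with the hand
# at (0.1, -0.2, 0.4). Moving the hand 10 cm right and 10 cm up moves the
# hologram by the same offset.
m = Manipulation(target_position=(0.0, 1.0, 2.0), hand_start=(0.1, -0.2, 0.4))
new_pos = m.update((0.2, -0.1, 0.4))  # → (0.1, 1.1, 2.0)
```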
As a holographic app developer, you can do a lot with gaze.
Most apps should use a cursor (or other auditory/visual indication) to give the user confidence in what they're about to interact with. You typically position this cursor in the world where their gaze ray first intersects an object, which may be a hologram or a real-world surface.
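Placing the cursor at the *first* intersection can be sketched as picking the nearest positive hit distance along the gaze ray, regardless of whether the hit came from a hologram or from the spatial-mapping mesh. The function below is a hypothetical illustration; a real app would get these hit distances from the engine's raycast results.

```python
# Hedged sketch: choose the nearest hit along the gaze ray and place
# the cursor there. Hit distances (in meters) are assumed precomputed
# by raycasts against holograms and the spatial mapping mesh.

def cursor_position(origin, direction, hit_distances):
    """Return the cursor's world position at the nearest hit,
    or None when the gaze ray hits nothing."""
    valid = [t for t in hit_distances if t > 0]
    if not valid:
        return None                      # nothing targeted: hide the cursor
    t = min(valid)                       # first surface the ray reaches
    return tuple(o + t * d for o, d in zip(origin, direction))

# A wall 3.5 m away and a hologram 2 m away along the same ray:
# the cursor lands on the nearer hologram.
pos = cursor_position((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), [3.5, 2.0])  # → (0.0, 0.0, 2.0)
```

When there is no hit at all, apps commonly float the cursor at a fixed default distance along the gaze ray instead of hiding it; the sketch returns `None` to keep the logic minimal.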
Once the user has targeted a hologram or real-world object using their gaze, their next step is to take action on that object. On HoloLens, the primary ways for a user to take action are through gestures or voice.