Motion controllers

Windows Mixed Reality motion controllers
Motion controllers are hardware accessories that allow users to take action in mixed reality. An advantage of motion controllers over gestures is that the controllers have a precise position in space, allowing for fine-grained interaction with digital objects. For Windows Mixed Reality immersive headsets, motion controllers are the primary way that users will take action in their world.

Device support

Feature               HoloLens    Immersive headsets
Motion controllers    ❌           ✔️

Hardware details

Windows Mixed Reality motion controllers offer precise and responsive tracking of movement in your field of view using the sensors in the immersive headset, meaning there is no need to install hardware on the walls in your space. These motion controllers will offer the same ease of setup and portability as Windows Mixed Reality immersive headsets. Our device partners plan to market and sell these controllers on retail shelves this holiday.

Features:

  • Optical tracking
  • Trigger
  • Grab button
  • Thumbstick
  • Touchpad

Gazing and pointing

There are two key interaction models: gaze and commit, and point and commit:

  • With gaze and commit, users target an object with their gaze and then select it with a Select interaction.
  • With point and commit, users aim a pointing-capable motion controller at the target object and then select it with a Select interaction.

Apps that support motion controllers should enable both gaze-driven and pointing-driven interactions, giving users a choice of which input device to use.
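To make the point and commit model concrete, here is a minimal sketch (not the Windows Mixed Reality API; all names are hypothetical) of picking the hologram that a controller's pointing ray hits first, assuming each target is approximated by a bounding sphere:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss.

    `direction` is assumed to be a normalized 3D vector.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # quadratic discriminant; a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None

def pick_target(origin, direction, targets):
    """Choose the nearest target intersected by the pointing ray.

    `targets` is a list of (name, center, radius) tuples.
    """
    best, best_t = None, math.inf
    for name, center, radius in targets:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best
```

An app would run a pick like this every frame against the controller's pointing pose, then apply the Select interaction to whichever target is returned.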

Interactions: Low-level spatial input

The core interactions across hands and motion controllers are Select, Menu, Grasp, Touchpad, Thumbstick, and Home.

  • Select is the primary interaction to activate a hologram, consisting of a press followed by a release. For motion controllers, you perform a Select press using the controller's trigger. You can also perform a Select by speaking the voice command "Select". The same Select interaction can be used within any app. Think of Select as the equivalent of a mouse click, a universal action that you learn once and then apply across all your apps.
  • Menu is the secondary interaction for acting on an object, used to pull up a context menu or take some other secondary action. With motion controllers, you can take a menu action using the controller's menu button (the button with the hamburger "menu" icon).
  • Grasp is how users directly take action on objects at their hand to manipulate them. With hands, you perform a Grasp by squeezing your fist tightly. A motion controller may detect a Grasp with a grab button, palm trigger, or other sensor.
  • Touchpad allows the user to adjust an action in two dimensions along the surface of a motion controller's touchpad, committing the action by clicking down on the touchpad. Touchpads provide a pressed state, touched state and normalized XY coordinates. X and Y range from -1 to 1 across the range of the circular touchpad, with a center at (0, 0). For X, -1 is on the left and 1 is on the right. For Y, -1 is on the bottom and 1 is on the top.
  • Thumbstick allows the user to adjust an action in two dimensions by moving a motion controller's thumbstick within its circular range, committing the action by clicking down on the thumbstick. Thumbsticks also provide a pressed state and normalized XY coordinates. X and Y range from -1 to 1 across the thumbstick's circular range, with a center at (0, 0). For X, -1 is on the left and 1 is on the right. For Y, -1 is on the bottom and 1 is on the top.
  • Home is a special system action that is used to go back to the Start Menu. It is similar to pressing the Windows key on a keyboard or the Xbox button on an Xbox controller. You can go home by pressing the Windows button on a motion controller. Note that you can also return to Start at any time by saying "Hey Cortana, Go Home". Apps cannot react specifically to home actions, as these are handled by the system.
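Because Select is defined as a press followed by a release, apps typically edge-detect the trigger state each frame. A minimal sketch (hypothetical names, not the platform API):

```python
class SelectDetector:
    """Turns a per-frame trigger state into press/select events."""

    def __init__(self):
        self._was_pressed = False

    def update(self, is_pressed):
        """Feed the trigger state once per frame.

        Returns "press" on the press edge, "select" on the release edge
        (completing the Select interaction), or None otherwise.
        """
        event = None
        if is_pressed and not self._was_pressed:
            event = "press"
        elif not is_pressed and self._was_pressed:
            event = "select"  # press + release completes a Select
        self._was_pressed = is_pressed
        return event
```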
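The normalized XY coordinates described above range from -1 to 1 on each axis. A common pattern when consuming them (an app-side convention, not something the platform mandates) is to apply a radial deadzone so that small stick noise near (0, 0) is ignored, rescaling the remaining range so output stays continuous:

```python
import math

def apply_deadzone(x, y, deadzone=0.15):
    """Return (x, y) with magnitudes below `deadzone` mapped to (0, 0).

    Values outside the deadzone are rescaled so the output still spans
    the full 0..1 magnitude range. The 0.15 threshold is an assumption.
    """
    magnitude = math.hypot(x, y)
    if magnitude < deadzone:
        return (0.0, 0.0)
    scale = (magnitude - deadzone) / (1.0 - deadzone) / magnitude
    return (x * scale, y * scale)
```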

Composite gestures: High-level spatial input

Both hand gestures and motion controllers can be tracked over time to detect a common set of high-level composite gestures. This enables your app to detect high-level tap, hold, manipulation, and navigation gestures, whether users end up using hands or controllers.
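As a minimal sketch of how a composite gesture recognizer distinguishes two of these gestures (the threshold value is an assumption, not a documented platform constant), a press/release pair can be classified as a tap or a hold by its duration:

```python
HOLD_THRESHOLD_S = 0.5  # hypothetical hold threshold, in seconds

def classify_press(press_time, release_time, hold_threshold=HOLD_THRESHOLD_S):
    """Classify a Select press/release pair as a "tap" or a "hold".

    Times are in seconds; a press held at least `hold_threshold` is a hold.
    """
    duration = release_time - press_time
    return "hold" if duration >= hold_threshold else "tap"
```

Manipulation and navigation gestures extend the same idea by also tracking how the hand or controller moves while the press is held.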

See also