Voice is one of the key forms of input on HoloLens. It allows you to directly command a hologram without having to use hand gestures. Voice input can be a natural way to communicate your intent. Voice is especially good at traversing complex interfaces, because it lets users cut through nested menus with one command.
Voice input is powered by the same engine that supports speech in all Universal Windows Apps. On HoloLens, speech recognition will always function in the Windows display language configured in your device Settings.
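To see that engine directly, here's a minimal sketch of constrained recognition with the UWP Windows.Media.SpeechRecognition API. The keyword list is illustrative, and the snippet assumes a UWP app with the Microphone capability:

```csharp
using System.Diagnostics;
using Windows.Media.SpeechRecognition;

// Minimal sketch: the same UWP speech engine with a small constraint
// list. The keywords here are illustrative.
async System.Threading.Tasks.Task ListenOnceAsync()
{
    var recognizer = new SpeechRecognizer(); // uses the system speech language
    recognizer.Constraints.Add(
        new SpeechRecognitionListConstraint(new[] { "select", "go to start" }));
    await recognizer.CompileConstraintsAsync();

    // Wait for a single phrase and report what was heard.
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    if (result.Status == SpeechRecognitionResultStatus.Success)
    {
        Debug.WriteLine($"Heard: {result.Text}");
    }
}
```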
When you're using voice commands, head or eye gaze is the typical targeting mechanism, whether with a cursor to "select" or to direct your command to the application you're looking at. A gaze cursor may not even need to be shown ("see it, say it"). Some voice commands don't require a target at all, such as "go to start" or "Hey Cortana."
| Feature | HoloLens (1st gen) | HoloLens 2 | Immersive headsets |
|---------|--------------------|------------|--------------------|
| Voice input | ✔️ | ✔️ | ✔️ (with microphone) |
HoloLens (1st gen)
Even without specifically adding voice support to your app, your users can activate holograms simply by saying the system voice command "select". This behaves the same as an air tap on HoloLens, pressing the select button on the HoloLens clicker, or pressing the trigger on a Windows Mixed Reality motion controller. You'll hear a sound and see a tooltip with "select" appear as confirmation. "Select" is enabled by a low-power keyword detection algorithm, which means you can say it anytime with minimal battery life impact. You can even say "select" with your hands at your side.
HoloLens 2
To use the "select" voice command in HoloLens 2, you first need to bring up the gaze cursor to use as a pointer. The command to bring it up is easy to remember: just say, "select".
To exit the mode, use your hands again by air tapping, approaching a button with your fingers, or using the system gesture.
Image: Say "select" to use the voice command for selection
You can say "Hey Cortana" to bring up Cortana at any time. You don't have to wait for her to appear to continue asking her your question or giving her an instruction. For example, try saying "Hey Cortana, what's the weather?" as a single sentence. For more information about Cortana and what you can do, ask her! Say "Hey Cortana, what can I say?" and she'll pull up a list of working and suggested commands. If you're already in the Cortana app, select the ? icon on the sidebar to pull up this same menu.
HoloLens-specific commands
HoloLens has a "see it, say it" model for voice input, where labels on buttons tell users what voice commands they can say as well. For example, when looking at an app window in HoloLens (1st gen), a user can say the "Adjust" command to adjust the position of the app in the world.
Image: A user can say the "Adjust" command, which they see in the App bar to adjust the position of the app
When apps follow this rule, users can easily understand what to say to control the system. While gazing at a button in HoloLens (1st gen), you'll see a "voice dwell" tooltip after about a second if the button is voice-enabled; it displays the command to speak to "press" the button. To reveal voice tooltips in HoloLens 2, show the voice cursor by saying "select" or "What can I say?" (see image).
Image: "See it, say it" commands appear below the buttons
Hologram manipulation commands
There are many voice commands you can say while gazing at a hologram to quickly perform manipulation tasks. These voice commands work on app windows and on 3D objects you've placed in the world.
On HoloLens 2, you can also create more natural interactions in combination with eye gaze, which implicitly provides contextual information about what you're referring to. For example, you could look at a hologram and say "put this", then look at where you want to place it and say "over here". Or you could look at a holographic part on a complex machine and say, "give me more information about this".
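A rough sketch of that "put this / over here" pattern, using a head-gaze raycast as a stand-in for eye gaze. This is not the platform's built-in implementation; all names are illustrative:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Hypothetical sketch of the "put this" / "over here" interaction,
// approximating eye gaze with a head-gaze raycast from the camera.
public class PutThisOverHere : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private Transform grabbed;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "put this", "over here" });
        recognizer.OnPhraseRecognized += OnPhrase;
        recognizer.Start();
    }

    private void OnPhrase(PhraseRecognizedEventArgs args)
    {
        var gaze = Camera.main.transform;
        if (!Physics.Raycast(gaze.position, gaze.forward, out RaycastHit hit)) return;

        if (args.text == "put this")
            grabbed = hit.transform;      // remember what the user looked at
        else if (args.text == "over here" && grabbed != null)
            grabbed.position = hit.point; // place it at the gazed location
    }

    void OnDestroy() => recognizer.Dispose();
}
```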
Some commands, like the commands for fast manipulation above, can be hidden. To learn about what commands you can use, gaze at an object and say, "what can I say?". A list of possible commands pops up. You can also use the head gaze cursor to look around and reveal the voice tooltips for each button in front of you.
If you want a complete list, just say, "Show all commands" anytime.
Voice dictation can be a more efficient way to enter text into an app than typing with air taps, greatly accelerating input with less effort for the user.
Image: Voice dictation starts by selecting the microphone button on the keyboard
Anytime the holographic keyboard is active, you can switch to dictation mode instead of typing. Select the microphone on the side of the text input box to get started.
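Apps can also start dictation from code. Here's a minimal Unity C# sketch using UnityEngine.Windows.Speech.DictationRecognizer, the same capability the keyboard's microphone button exposes; it assumes the Microphone capability is enabled:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal sketch: starting dictation programmatically and logging the
// recognized text. Assumes the Microphone capability is enabled.
public class DictationExample : MonoBehaviour
{
    private DictationRecognizer dictation;

    void Start()
    {
        dictation = new DictationRecognizer();
        dictation.DictationResult += (text, confidence) =>
            Debug.Log($"Dictated: {text} ({confidence})");
        dictation.Start();
    }

    void OnDestroy() => dictation.Dispose();
}
```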
Consider adding voice commands to any experience that you build. Voice is a powerful way to control the system and apps. Because users speak with a variety of dialects and accents, choosing distinct speech keywords helps ensure that your users' commands are interpreted unambiguously.
Below are some practices that will aid in smooth speech recognition.
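For example, short phrases with distinct sounds tend to be recognized more reliably than single syllables or similar-sounding pairs. A minimal Unity C# sketch with KeywordRecognizer, where the keyword choices and actions are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal sketch: concise, acoustically distinct keywords registered
// with Unity's KeywordRecognizer. Keywords and actions are illustrative.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, Action> commands = new Dictionary<string, Action>
    {
        { "show details", () => Debug.Log("Showing details") },
        { "hide details", () => Debug.Log("Hiding details") },
    };

    void Start()
    {
        recognizer = new KeywordRecognizer(commands.Keys.ToArray());
        recognizer.OnPhraseRecognized += args => commands[args.text].Invoke();
        recognizer.Start();
    }

    void OnDestroy() => recognizer.Dispose();
}
```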
How using voice can benefit the user
Voice input is a natural way to communicate our intents. Voice is especially good at interface traversals because it can help users cut through multiple steps of an interface. A user might say "go back" while looking at a webpage, instead of having to reach up and press the back button in the app. This small time saving has a powerful emotional effect on the user's perception of the experience and gives them a small sense of superpower. Using voice is also a convenient input method when our arms are full or when we're multitasking. On devices where typing on a keyboard is difficult, voice dictation can be an efficient alternative for entering text. Lastly, in some cases when the range of accuracy for gaze and gesture is limited, voice can help disambiguate the user's intent.
While voice input is great for many different applications, it also faces several challenges. Understanding both the advantages and challenges for voice input enables app developers to make smarter choices for how and when to use voice input and to create a great experience for their users.
Voice input for continuous input control
Fine-grained control is one such challenge. For example, a user might want to change the volume in their music app. They can say "louder", but it's not clear how much louder the system is supposed to make the volume. The user could say, "Make it a little louder", but "a little" is difficult to quantify. Moving or scaling holograms with voice is similarly difficult.
Reliability of voice input detection
While voice input systems keep getting better, sometimes they incorrectly hear or interpret a voice command. The key is to address this challenge in your application: provide feedback when the system is listening, and show what the system understood, which clarifies potential issues in recognizing the user's speech.
Voice input in shared spaces
Voice may not be socially acceptable in spaces that you share with others.
Voice input of unique or unknown words
Difficulties for voice input also arise when users dictate words that may be unknown to the system, such as nicknames, certain slang words, or abbreviations.
Learning voice commands
While the ultimate goal is to converse naturally with your system, apps often still rely on specific predefined voice commands. A challenge with a large set of voice commands is how to teach them without overloading the user, and how to help the user retain them.
When voice input is applied properly, the user understands what they can say and gets clear feedback that the system heard them correctly. These two signals make the user feel confident in using voice as a primary input. Below is a diagram showing what happens to the cursor when voice input is recognized and how it communicates that to the user.
1. Regular cursor state
2. Communicates voice feedback and then disappears
3. Returns to regular cursor state
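As a rough illustration of this feedback loop, the following Unity C# sketch echoes the recognized phrase briefly before returning to the regular state. The TextMesh label and the one-second delay are illustrative assumptions:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal sketch of the feedback pattern above: briefly echo the
// recognized phrase, then return to the regular state.
public class SpeechFeedback : MonoBehaviour
{
    public TextMesh label;                 // world-space label near the cursor
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "select", "go to start" });
        recognizer.OnPhraseRecognized += args => StartCoroutine(ShowFeedback(args.text));
        recognizer.Start();
    }

    private IEnumerator ShowFeedback(string heard)
    {
        label.text = heard;                // 2. communicate what was heard
        yield return new WaitForSeconds(1f);
        label.text = string.Empty;         // 3. return to the regular state
    }

    void OnDestroy() => recognizer.Dispose();
}
```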
For applications that want to take advantage of the customized audio input processing options provided by HoloLens, it's important to understand the audio stream categories your app can consume. Windows 10 supports several stream categories, and HoloLens uses three of them to enable custom processing that optimizes microphone audio quality: one tailored for speech, one for communication, and one ("other") that can be used for ambient-environment audio capture (that is, "camcorder" scenarios).
All of this audio processing is hardware accelerated, which means the features drain far less power than if the same processing were done on the HoloLens CPU. Avoid running other audio input processing on the CPU to maximize system battery life and take advantage of the built-in, offloaded audio input processing.
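To opt in to one of these processing pipelines, an app selects the corresponding category when it initializes audio capture. A minimal UWP C# sketch using Windows.Media.Capture; the choice of MediaCategory.Speech is illustrative:

```csharp
using Windows.Media.Capture;

// Minimal sketch: select the speech-optimized capture pipeline.
// MediaCategory.Communications or MediaCategory.Other select the
// communication and ambient-capture ("camcorder") pipelines instead.
async System.Threading.Tasks.Task InitSpeechCaptureAsync()
{
    var settings = new MediaCaptureInitializationSettings
    {
        StreamingCaptureMode = StreamingCaptureMode.Audio,
        MediaCategory = MediaCategory.Speech
    };

    var capture = new MediaCapture();
    await capture.InitializeAsync(settings);
    // capture now delivers microphone audio with speech-tuned processing.
}
```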
HoloLens 2 supports multiple languages. Keep in mind that speech commands will always run in the system's display language even if multiple keyboards are installed or if apps attempt to create a speech recognizer in a different language.
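For example, even if an app constructs a recognizer for another language, as sketched below, the built-in speech commands still follow the display language (UWP C#; fr-FR is an arbitrary example):

```csharp
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

// Minimal sketch: an app can create a recognizer for another installed
// speech language, but the system's built-in voice commands still run
// in the Windows display language regardless of this choice.
async System.Threading.Tasks.Task CreateFrenchRecognizerAsync()
{
    var recognizer = new SpeechRecognizer(new Language("fr-FR"));
    await recognizer.CompileConstraintsAsync(); // default dictation grammar
}
```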
If you're having issues using "select" and "Hey Cortana", try moving to a quieter space, turning away from the source of noise, or speaking louder. At this time, all speech recognition on HoloLens is tuned and optimized specifically for native speakers of United States English.
For the Windows Mixed Reality Developer Edition release (2017), the audio endpoint management logic works reliably after you sign out of and back in to the PC desktop following the initial HMD connection. Before that first sign-out/sign-in event after going through the WMR out-of-box experience, the user could see various audio issues, ranging from no audio to no audio switching, depending on how the system was set up before the HMD was connected for the first time.
With MRTK, you can easily assign voice commands to any object. Use MRTK's Speech Input Profile to define your keywords. By attaching the SpeechInputHandler script, you can make any object respond to the keywords defined in the Speech Input Profile. SpeechInputHandler also provides a speech confirmation label to improve the user's confidence.
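If you need to respond to profile keywords from code rather than through the SpeechInputHandler component, one option is to implement MRTK's IMixedRealitySpeechHandler. A minimal sketch, assuming a "change color" keyword exists in the Speech Input Profile; the color change is illustrative:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Minimal sketch: handle a keyword from the Speech Input Profile in code.
// Assumes "change color" is defined in the profile.
public class ChangeColorOnSpeech : MonoBehaviour, IMixedRealitySpeechHandler
{
    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        if (eventData.Command.Keyword.Equals("change color",
                System.StringComparison.OrdinalIgnoreCase))
        {
            GetComponent<Renderer>().material.color = Random.ColorHSV();
        }
    }

    void OnEnable() =>
        // Register globally so the keyword works even without focus.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);

    void OnDisable() =>
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
}
```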
Training
Module
Add Azure AI services to your mixed reality project - Training
This course explores the use of Azure speech services by integrating them into a HoloLens 2 application. You can also deploy your project to a HoloLens.