Create a UWP app that identifies faces in a photo and determines the emotions on those faces using Microsoft’s Cognitive Services API.
All of the sample code is available to download, but as an exercise, this tutorial will take you through the complete steps to create this app from scratch.
Make sure your device is set up and running and that you have Visual Studio installed. See our get started page to set up your device.
You will need your device’s IP address when connecting to it remotely.
Create a new project (File | New Project…).
In the New Project dialog, navigate to Universal as shown below (in the left pane in the dialog: Templates | Visual C# | Windows Universal).
Select the template Blank App (Universal Windows)
Note that we call the app CognitiveServicesExample. You can name it something different, but you will then have to adjust the sample code that references CognitiveServicesExample.
If this is the first project you create, Visual Studio will likely prompt you to enable developer mode for Windows 10.
Since the IoT extension SDK is not added to projects by default, we’ll need to add a reference so that namespaces like Windows.Devices.Gpio will be available in the project. To do so, right-click on the References entry under the project, select “Add Reference” then navigate the resulting dialog to Universal Windows->Extensions->Windows IoT Extensions for the UWP. Check the box and click OK.
Open the NuGet Package Manager
In Solution Explorer, right click your project and then click “Manage NuGet Packages”.
Install the Packages
In the NuGet Package Manager window, select nuget.org as your Package Source and search for Newtonsoft.Json, Microsoft.ProjectOxford.Common, and Microsoft.ProjectOxford.Emotion. Install all three packages. When using a Cognitive Services API, you need to add the corresponding NuGet package.
Open MainPage.xaml and replace the existing code with the following code to create the window UI:
To view the entire UI, change the dropdown in the top left corner from ‘5" Phone’ to ‘12" Tablet’.
In the UI mockup, double click on the “Detect Emotions” button. You will see Click="button_Clicked" added to the Button element in your XAML code. You will also be redirected to the .xaml.cs file, where a new function called button_Clicked() has been created for you. This function will handle the Cognitive Services calls after a user presses the button.
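For reference, after the double click the Button element in MainPage.xaml should look roughly like the sketch below (the x:Name and layout attributes will match whatever UI XAML you pasted earlier; only the Click attribute is what this step adds):

```xml
<!-- Sketch: the generated Click handler wired to the button -->
<Button x:Name="button"
        Content="Detect Emotions"
        Click="button_Clicked" />
```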
Visit the Azure Cognitive Services Page and click on “Get API Key” next to the Emotion API label; use your Microsoft account to sign in.
You should now see two API keys available for use for 30 days.
If you already used the Emotion API’s free trial, you can still use the APIs for free with an Azure account. Sign up for one, then head to the Azure Portal and create a new Cognitive Services Resource with the fields as shown below.
After it deploys, click on the “Show access keys…” link under the “Essentials” window to see your access keys.
Open MainPage.xaml.cs. At the top of the file, directly under the “using” statements and before the “namespace CognitiveServicesExample” line, add the following Cognitive Services namespaces.
These allow us to use the Cognitive Services APIs in our code, along with some other necessary imaging libraries.
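A likely set of using directives is sketched below (the ProjectOxford namespaces come from the NuGet packages installed earlier; the Windows.* namespaces are part of the UWP SDK — adjust to match the sample code you downloaded):

```csharp
// Cognitive Services client and result types (from the NuGet packages)
using Microsoft.ProjectOxford.Common;
using Microsoft.ProjectOxford.Emotion;
using Microsoft.ProjectOxford.Emotion.Contract;

// UWP imaging libraries used to display the photo and draw results
using Windows.UI.Xaml.Media.Imaging;
using Windows.Graphics.Imaging;
```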
Add the following global variables to the MainPage class (as below)
The subscriptionKey allows your application to call the Emotion API on Cognitive Services, and the BitmapImage stores the image that your application will upload.
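The two fields can be sketched as below; the field names are illustrative and should match whatever the downloaded sample code uses (the key placeholder stays a placeholder — paste in one of the keys you obtained from the portal or trial page):

```csharp
public sealed partial class MainPage : Page
{
    // Your Emotion API key from the trial page or Azure portal.
    private const string subscriptionKey = "<your subscription key here>";

    // Holds the image downloaded from the URL the user enters.
    private BitmapImage bitmapImage;

    // ... rest of the MainPage class ...
}
```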
Add the following method to the same class:
This function instantiates an instance of the Emotion API and attempts to open the URL passed as an argument (an image URL), scanning it for faces. It searches the faces it finds for emotions and returns the resulting Emotion objects. These contain detailed results, including the likelihood of each emotion and the bounding box of the face. See the documentation for more details.
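A minimal sketch of that method is below, assuming the client API from the Microsoft.ProjectOxford.Emotion package (EmotionServiceClient.RecognizeAsync takes an image URL and returns one Emotion object per detected face); error handling here is illustrative:

```csharp
private async Task<Emotion[]> UploadAndDetectEmotions(string url)
{
    // Instantiate the Emotion API client with your subscription key.
    var emotionServiceClient = new EmotionServiceClient(subscriptionKey);

    try
    {
        // RecognizeAsync fetches the image at the URL and returns one
        // Emotion per face, each with per-emotion Scores and a FaceRectangle.
        Emotion[] emotionResult = await emotionServiceClient.RecognizeAsync(url);
        return emotionResult;
    }
    catch (Exception ex)
    {
        // Surface failures (bad URL, quota exceeded, service error) to the caller.
        System.Diagnostics.Debug.WriteLine("Emotion API call failed: " + ex.Message);
        return null;
    }
}
```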
Add the async keyword to the button_Clicked method Visual Studio created for you. Then, add the following code to that function:
This code reads the string from the text input box on the form and makes sure it’s a URL. It retrieves the image from that URL, pastes it in the canvas, and gets the detected emotions from the image using the UploadAndDetectEmotions method defined previously. It then calls a few helper functions to output the results of the Cognitive Services analysis.
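The flow of that handler can be sketched as follows. The control name urlTextBox and the helper-method names are hypothetical stand-ins for whatever the sample code defines:

```csharp
private async void button_Clicked(object sender, RoutedEventArgs e)
{
    // Make sure the text box contents form an absolute URL before calling out.
    Uri uri;
    if (!Uri.TryCreate(urlTextBox.Text, UriKind.Absolute, out uri))
    {
        return;
    }

    // Display the image from the URL on the canvas.
    bitmapImage = new BitmapImage(uri);

    // Run emotion detection against the same URL.
    Emotion[] emotions = await UploadAndDetectEmotions(uri.AbsoluteUri);
    if (emotions != null)
    {
        // Helper functions (added next) render scores, the dominant
        // emotion, and a rectangle around each detected face.
    }
}
```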
You’ll notice that the above code has errors, since we have not added those helper functions yet. Let’s add them in:
The first method outputs the score for all emotions Cognitive Services can detect. Each score falls between 0 and 1 and represents the probability that the face detected is expressing that emotion.
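That first helper might look like the sketch below; Scores.ToRankedList() from the ProjectOxford contract types returns the eight emotion scores ordered by probability:

```csharp
private string GetEmotionScores(Emotion emotion)
{
    var sb = new System.Text.StringBuilder();

    // Each score is a probability between 0 and 1 for that emotion.
    foreach (var score in emotion.Scores.ToRankedList())
    {
        sb.AppendLine($"{score.Key}: {score.Value:P1}");
    }
    return sb.ToString();
}
```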
The second and third methods determine which emotion is most prevalent. They then output these results as a string to a Panel next to the image.
The fourth method places a rectangle around each face detected in the image. Since UWP does not allow apps to draw shapes yet, it uses a blue rectangle in the Assets folder with a transparent background instead. The app places each rectangle image at the starting coordinates of the Rectangle provided by Cognitive Services and scales it to the approximate size of the Cognitive Services rectangle.
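The rectangle placement can be sketched like this; the asset file name FaceRectangle.png, the canvas name resultsCanvas, and the scale factors (canvas size divided by source-image size) are assumptions for illustration:

```csharp
private void DrawFaceRectangle(Emotion emotion, double scaleX, double scaleY)
{
    // The blue PNG from the Assets folder stands in for a drawn shape.
    var rectImage = new Image
    {
        Source = new BitmapImage(new Uri("ms-appx:///Assets/FaceRectangle.png")),
        // Scale the Cognitive Services rectangle to canvas coordinates.
        Width = emotion.FaceRectangle.Width * scaleX,
        Height = emotion.FaceRectangle.Height * scaleY
    };

    // Position the image at the face's top-left corner on the canvas.
    Canvas.SetLeft(rectImage, emotion.FaceRectangle.Left * scaleX);
    Canvas.SetTop(rectImage, emotion.FaceRectangle.Top * scaleY);
    resultsCanvas.Children.Add(rectImage);
}
```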
Download the face rectangle image and add it to the Assets folder within your project.
Make sure the app builds correctly by invoking the Build | Build Solution menu command.
Since this is a Universal Windows Platform (UWP) application, you can test the app on your Visual Studio machine as well: press F5, and the app will run on your machine.
Change the URL for a different image, or just click “Detect Emotion” to run the Emotion Recognizer with the default image. After a few seconds, the results should appear in your app window as expected: the image with rectangles on it on the left and more detailed emotion output for each face on the right.
In this case, the order is based on depth: faces closer to the front will be first, and faces farther away will be last in the list.
Close your app after you’re done validating it.
To deploy the app to your IoT Core device, you need to provide your machine with the device’s identifier. In the PowerShell documentation, you can find instructions to choose a unique name for your IoT Core device. In this sample, we’ll use that name (though you can use your IP address as well) in the ‘Remote Machine Debugging’ settings in Visual Studio.
If you’re building for Minnowboard Max, select x86 in the Visual Studio toolbar architecture dropdown. If you’re building for Raspberry Pi 2 or 3 or the DragonBoard, select ARM.
In the Visual Studio toolbar, click on the Local Machine dropdown and select Remote Machine
At this point, Visual Studio will present the ‘Remote Connections’ dialog. Enter the IP address or name of your IoT Core device (in this example, we’re using ‘my-device’) and select Universal (Unencrypted Protocol) for Authentication Mode. Click Select.
A couple of notes:
You can use the IP address instead of the IoT Core device name.
You can verify and/or modify these values by navigating to the project properties (select ‘Properties’ in the Solution Explorer) and choosing the ‘Debug’ tab on the left:
Now you’re ready to deploy to the remote IoT Core device. Press F5 (or select Debug | Start Debugging) to start debugging the app. You should see the app come up on the IoT Core device’s screen, and you should be able to perform the same functions you did locally. To stop the app, press the ‘Stop Debugging’ button (or select Debug | Stop Debugging).
Congratulations! Your app should now be working!