Since the IoT extension SDK is not added to projects by default, we’ll need to add a reference so that namespaces like Windows.Devices.Gpio will be available in the project. To do so, right-click on the References entry under the project, select “Add Reference” then navigate the resulting dialog to Universal Windows->Extensions->Windows IoT Extensions for the UWP. Check the box and click OK.
Add the NuGet Packages
Open the NuGet Package Manager
In Solution Explorer, right click your project and then click “Manage NuGet Packages”.
Install the Packages
In the NuGet Package Manager window, select nuget.org as your Package Source and search for Newtonsoft.Json, Microsoft.ProjectOxford.Common, and Microsoft.ProjectOxford.Emotion. Install all three packages. Each Cognitive Services API you call requires its corresponding NuGet package.
Set up the User Interface
Add in the XAML
Open MainPage.xaml and replace the existing code with the following code to create the window UI:
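The original XAML listing is not reproduced here. The sketch below shows the kind of layout the walkthrough assumes: a URL text box, the "Detect Emotions" button, a canvas for the image and face rectangles, and a panel for the text results. All element names (imageURL, imageCanvas, resultsPanel) are illustrative and must match whatever your code-behind references:

```xml
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <StackPanel Margin="20">
        <!-- URL input and the button that triggers emotion detection -->
        <TextBox x:Name="imageURL" PlaceholderText="Enter an image URL"
                 Width="400" HorizontalAlignment="Left"/>
        <Button x:Name="button" Content="Detect Emotions" Margin="0,10,0,10"/>
        <StackPanel Orientation="Horizontal">
            <!-- Image (with face rectangles) on the left, detailed results on the right -->
            <Canvas x:Name="imageCanvas" Width="600" Height="400"/>
            <StackPanel x:Name="resultsPanel" Margin="20,0,0,0"/>
        </StackPanel>
    </StackPanel>
</Grid>
```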
To view the entire UI, change the dropdown in the top left corner from ‘5" Phone’ to ‘12" Tablet’.
Generate the button event handler
In the UI mockup, double-click the "Detect Emotions" button. Visual Studio adds Click="button_Clicked" to the Button element in your XAML code and redirects you to the .xaml.cs file, where it creates a new method called button_Clicked() for you. This method will handle the Cognitive Services calls after a user presses the button.
Register for Cognitive Services
Sign in to Cognitive Services
Visit the sign in page and use your Microsoft account to sign in.
Get the product key
Select the APIs you would like keys for. This application only needs the Emotion API, so click "Emotion - Preview" and then click "Subscribe" at the bottom of the page. The next page should list two keys for the Emotion API; copy one of them to your clipboard.
Add the C# Code
Add in the namespaces
Open MainPage.xaml.cs. At the top of the file, directly under the existing "using" statements and before the "namespace CognitiveServicesExample" line, add the following Cognitive Services namespaces.
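The listing itself was not captured here. Assuming the NuGet packages installed earlier, the using directives would likely look like this (Microsoft.ProjectOxford.Emotion.Contract holds the Emotion result types, and Windows.UI.Xaml.Media.Imaging provides BitmapImage):

```csharp
using System.Threading.Tasks;
using Windows.UI.Xaml.Media.Imaging;
using Microsoft.ProjectOxford.Common;
using Microsoft.ProjectOxford.Emotion;
using Microsoft.ProjectOxford.Emotion.Contract;
```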
These allow us to use the Cognitive Services APIs in our code, along with some other necessary imaging libraries.
Add in Global Variables
Add the following global variables to the MainPage class (as below)
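The original variable declarations are missing from this copy; based on the description that follows, they would be along these lines (the placeholder key string is yours to fill in):

```csharp
// One of the two keys from your Emotion API subscription
// (see "Get the product key" above).
private string subscriptionKey = "YOUR_SUBSCRIPTION_KEY_HERE";

// Holds the image downloaded from the URL the user enters,
// so it can be rendered on the canvas.
private BitmapImage bitmapImage;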
The subscriptionKey allows your application to call the Emotion API on Cognitive Services, and the BitmapImage stores the image that your application will upload.
Add in the API-calling method
Add the following method to the same class:
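The method body is not reproduced here; a minimal sketch, assuming the Microsoft.ProjectOxford.Emotion package's EmotionServiceClient and its RecognizeAsync(url) overload, would be:

```csharp
private async Task<Emotion[]> UploadAndDetectEmotions(string url)
{
    // EmotionServiceClient comes from the Microsoft.ProjectOxford.Emotion package
    // and authenticates with the subscription key declared earlier.
    EmotionServiceClient emotionServiceClient = new EmotionServiceClient(subscriptionKey);
    try
    {
        // RecognizeAsync accepts an image URL and returns one Emotion
        // (scores plus a face bounding box) per detected face.
        Emotion[] emotionResult = await emotionServiceClient.RecognizeAsync(url);
        return emotionResult;
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("Emotion detection failed: " + ex.Message);
        return null;
    }
}
```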
This function instantiates an instance of the Emotion API and attempts to open the URL passed as an argument (an image URL), scanning it for faces. It searches the faces it finds for emotions and returns the resulting Emotion objects. These contain detailed results, including the likelihood of each emotion and the bounding box of the face. See the documentation for more details.
Add in the button event handler code
Add the async keyword to the button_Clicked method Visual Studio created for you. Then, add the following code to that function:
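The handler code was not captured in this copy. A sketch of the flow described below, assuming the XAML element names imageURL, imageCanvas, and resultsPanel, and hypothetical helper names (OutputEmotionScores, OutputMostLikelyEmotion, DrawFaceRectangle) that you would define in the next step:

```csharp
private async void button_Clicked(object sender, RoutedEventArgs e)
{
    // Make sure the text box contains a valid absolute URL.
    Uri imageUri;
    if (!Uri.TryCreate(imageURL.Text, UriKind.Absolute, out imageUri))
    {
        return;
    }

    // Display the image on the canvas.
    bitmapImage = new BitmapImage(imageUri);
    imageCanvas.Background = new ImageBrush { ImageSource = bitmapImage };

    // Call Cognitive Services via the method defined previously.
    Emotion[] emotions = await UploadAndDetectEmotions(imageUri.AbsoluteUri);
    if (emotions == null)
    {
        return;
    }

    // Output the results for each detected face.
    resultsPanel.Children.Clear();
    foreach (Emotion emotion in emotions)
    {
        OutputEmotionScores(emotion);      // list every emotion score
        OutputMostLikelyEmotion(emotion);  // name the top-scoring emotion
        DrawFaceRectangle(emotion);        // overlay the rectangle image
    }
}
```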
This code reads the string from the text input box on the form and verifies that it is a valid URL. It retrieves the image from that URL, displays it on the canvas, and gets the detected emotions using the UploadAndDetectEmotions method defined previously. It then calls a few helper functions to output the results of the Cognitive Services analysis.
Add in the helper functions
You’ll notice that the above code has errors, since we have not added those helper functions yet. Let’s add them in:
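The helper listings themselves are missing from this copy. A sketch of the four methods described below, with assumed names matching the handler above (OutputEmotionScores, GetMostLikelyEmotion, OutputMostLikelyEmotion, DrawFaceRectangle), the EmotionScores.ToRankedList() helper from the ProjectOxford SDK, System.Linq for First(), and a hypothetical Assets/FaceRectangle.png file name:

```csharp
// Outputs every emotion score (0..1) for one face to the results panel.
private void OutputEmotionScores(Emotion emotion)
{
    foreach (var score in emotion.Scores.ToRankedList())
    {
        resultsPanel.Children.Add(new TextBlock
        {
            Text = string.Format("{0}: {1:P1}", score.Key, score.Value)
        });
    }
}

// Returns the name of the highest-scoring emotion for one face.
private string GetMostLikelyEmotion(Emotion emotion)
{
    return emotion.Scores.ToRankedList().First().Key;
}

// Outputs the most prevalent emotion as a string to the results panel.
private void OutputMostLikelyEmotion(Emotion emotion)
{
    resultsPanel.Children.Add(new TextBlock
    {
        Text = "Most likely emotion: " + GetMostLikelyEmotion(emotion)
    });
}

// Overlays the rectangle image from Assets at the face's bounding box.
// This assumes the canvas renders the image at its native pixel size;
// otherwise, scale the coordinates and dimensions accordingly.
private void DrawFaceRectangle(Emotion emotion)
{
    var rect = emotion.FaceRectangle;
    Image overlay = new Image
    {
        Source = new BitmapImage(new Uri("ms-appx:///Assets/FaceRectangle.png")),
        Width = rect.Width,
        Height = rect.Height
    };
    Canvas.SetLeft(overlay, rect.Left);
    Canvas.SetTop(overlay, rect.Top);
    imageCanvas.Children.Add(overlay);
}
```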
The first method outputs the score for all emotions Cognitive Services can detect. Each score falls between 0 and 1 and represents the probability that the face detected is expressing that emotion.
The second and third methods determine which emotion is most prevalent and output the result as a string to a Panel next to the image.
The fourth method places a rectangle around each face detected in the image. Since UWP does not allow apps to draw shapes yet, it uses a blue rectangle in the Assets folder with a transparent background instead. The app places each rectangle image at the starting coordinates of the Rectangle provided by Cognitive Services and scales it to the approximate size of the Cognitive Services rectangle.
Add in the rectangle resource
Download the face rectangle and add it to your Assets folder within your project
Build and Test your app locally
Make sure the app builds correctly by invoking the Build | Build Solution menu command.
Since this is a Universal Windows Platform (UWP) application, you can test the app on your Visual Studio machine as well: press F5, and the app will run on your machine.
Change the URL for a different image, or just click “Detect Emotion” to run the Emotion Recognizer with the default image. After a few seconds, the results should appear in your app window as expected: the image with rectangles on it on the left and more detailed emotion output for each face on the right.
In this case, the order is based on depth: faces closer to the front will be first, and faces farther away will be last in the list.
Close your app after you’re done validating it
Deploy the app to your Windows 10 IoT Core device
To deploy the app to your IoT Core device, you need to give Visual Studio the device's identifier. The PowerShell documentation contains instructions for choosing a unique name for your IoT Core device. In this sample, we'll use that name (though you can use the IP address as well) in the 'Remote Machine Debugging' settings in Visual Studio.
If you’re building for Minnowboard Max, select x86 in the Visual Studio toolbar architecture dropdown. If you’re building for Raspberry Pi 2 or 3 or the DragonBoard, select ARM.
In the Visual Studio toolbar, click on the Local Machine dropdown and select Remote Machine.
At this point, Visual Studio will present the 'Remote Connections' dialog. Enter the IP address or name of your IoT Core device (in this example, we're using 'my-device'), select Universal (Unencrypted Protocol) for Authentication Mode, and then click Select.
A couple of notes:
You can use the IP address instead of the IoT Core device name.
You can verify and/or modify these values by opening the project properties (select 'Properties' in Solution Explorer) and choosing the 'Debug' tab on the left:
Now you're ready to deploy to the remote IoT Core device. Press F5 (or select Debug | Start Debugging) to start debugging the app. You should see the app come up on the IoT Core device's screen, and you should be able to perform the same functions you did locally. To stop the app, click the 'Stop Debugging' button (or select Debug | Stop Debugging).