
Discover, connect, grow

Microsoft Reactor

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor offers events, training, and community resources to help startups, entrepreneurs, and developers build new businesses with AI technology. Join us!



Deploy a real-time inferencing model with AML Service, AKS & Container Instance

19 April, 2022 | 12:30 PM - 1:30 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: Cloud Development

Language: English

In machine learning, inferencing refers to using a trained model to predict labels for new data on which the model has not been trained. Often, the model is deployed as part of a service that enables applications to request immediate, or real-time, predictions for individual data observations or small batches of them. In this session you will learn how to deploy a real-time inferencing pipeline.
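In practice, such a service wraps the model behind a small entry script with an `init()`/`run()` pair: the host calls `init()` once at startup and `run()` once per scoring request. A rough, self-contained sketch (the `ThresholdModel` below is an illustrative stand-in, not a real registered model):

```python
import json

# Entry-script sketch for a real-time inferencing service.
# The serving host calls init() once at startup and run() per request.

MODEL = None

class ThresholdModel:
    """Stand-in for a trained model loaded from a model registry (illustrative only)."""
    def predict(self, rows):
        # Label each observation by a simple feature-sum threshold.
        return [1 if sum(row) > 1.0 else 0 for row in rows]

def init():
    # A real entry script would deserialize the registered model here,
    # from a path supplied by the serving environment.
    global MODEL
    MODEL = ThresholdModel()

def run(raw_data):
    # Requests arrive as JSON text; return a JSON-serializable response.
    rows = json.loads(raw_data)["data"]
    return json.dumps({"predictions": MODEL.predict(rows)})
```

In a real deployment the same two functions are packaged with the model and an environment definition, and the platform turns them into a web endpoint.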

The session will focus on Azure services and related products such as Azure Machine Learning Service, the Azure Machine Learning SDK, Azure Kubernetes Service, and Azure Container Instances.

What you will learn in this session:

a) Deploy a model as a real-time inferencing service.
b) Consume a real-time inferencing service.
c) Troubleshoot a service deployment.
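On point (b), consuming a deployed real-time service usually comes down to POSTing a JSON payload to its scoring endpoint with an authorization key. A minimal sketch, assuming a key-protected endpoint (the URI, key, and payload shape here are placeholders, not details from the session):

```python
import json

def build_scoring_request(scoring_uri, key, observations):
    """Assemble the URL, headers, and body for a real-time scoring call.
    The scoring URI and key come from the deployed service; the values
    passed below are placeholders, not real endpoints or credentials."""
    headers = {
        "Content-Type": "application/json",
        # Key- or token-protected endpoints expect a bearer credential.
        "Authorization": f"Bearer {key}",
    }
    body = json.dumps({"data": observations})
    return scoring_uri, headers, body

uri, headers, body = build_scoring_request(
    "http://example.invalid/api/v1/service/my-service/score",  # placeholder URI
    "<service-key>",                                           # placeholder key
    [[0.1, 0.2, 0.3]],
)
# An actual call would then be:
#   import requests
#   response = requests.post(uri, data=body, headers=headers)
#   print(response.json())
```

For point (c), the v1 Azure Machine Learning SDK's `Webservice` object exposes `get_logs()`, which is typically the first stop when a deployment fails to start.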

Further learning: https://aka.ms/MachineLearningServices

Speaker: Shivam Sharma

Speaker bio: Shivam is an author, cloud architect, speaker, and co-founder at TechScalable. Passionate about ever-evolving technology, he works on Azure, GCP, machine learning, Kubernetes, and DevOps. He is also a Microsoft Certified Trainer. He architects solutions both in the cloud and on-premises using a wide array of platforms and technologies.

Social handles

LinkedIn - https://www.linkedin.com/in/shivam-sharma-9828a536/
Twitter - https://twitter.com/ShivamSharma_TS
Facebook - https://www.facebook.com/TSshivamsharma/

Prerequisites

Knowledge of Python

Speakers

For questions, please contact us at reactor@microsoft.com