Shivam Sharma
TechScalable
Discover, Connect, Grow
Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!
19 April, 2022 | 12:30 PM - 1:30 PM (UTC) Coordinated Universal Time
Topic: Cloud Development
Language: English
In machine learning, inferencing refers to using a trained model to predict labels for new data on which the model has not been trained. Often, the model is deployed as part of a service that enables applications to request immediate, or real-time, predictions for individual data observations or small batches of them. In this session you will learn how to deploy a real-time inferencing pipeline.
The session will focus on Azure services and related products such as Azure Machine Learning Service, the Azure Machine Learning SDK, Azure Kubernetes Service, and Azure Container Instances.
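To give a feel for what this looks like in practice, here is a minimal sketch of deploying a registered model as a real-time inferencing service with the Azure Machine Learning SDK for Python (v1). Names such as "diabetes-model", "score.py", "environment.yml", and "my-inference-service" are placeholders for illustration, not part of the session material.

```python
# Minimal sketch: deploy a registered model as a real-time inferencing
# service using the Azure ML SDK v1. All resource names are placeholders.
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

# Connect to the workspace described by a local config.json file.
ws = Workspace.from_config()

# Look up a model that has already been registered in the workspace.
model = Model(ws, name="diabetes-model")

# Define the runtime environment and the entry script.
# score.py is expected to define init() and run(raw_data) functions.
env = Environment.from_conda_specification(name="inference-env",
                                           file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to Azure Container Instances (ACI); for production-scale workloads
# an Azure Kubernetes Service (AksWebservice) configuration could be used instead.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(workspace=ws,
                       name="my-inference-service",
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
print(service.state, service.scoring_uri)
```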
What you will learn from the session:
a) Deploy a model as a real-time inferencing service.
b) Consume a real-time inferencing service.
c) Troubleshoot a service deployment (see the sketch after this list).
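For points b) and c), the following sketch shows one way to consume the deployed service over REST and to pull its logs for troubleshooting, again assuming the Azure ML SDK v1, the placeholder service name "my-inference-service", and a hypothetical input payload whose shape depends on what the entry script's run() function expects.

```python
# Minimal sketch: consume and troubleshoot a deployed real-time
# inferencing service. Service name and payload are placeholders.
import json
import requests
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
service = Webservice(ws, name="my-inference-service")

# Consume: POST new observations to the scoring endpoint over REST.
# (If key-based authentication is enabled, an Authorization header is also required.)
payload = json.dumps({"data": [[2, 180, 74, 24, 21, 23.9, 1.4, 22]]})
headers = {"Content-Type": "application/json"}
response = requests.post(service.scoring_uri, data=payload, headers=headers)
print(response.json())

# Troubleshoot: check the deployment state and retrieve container logs.
print(service.state)
print(service.get_logs())
```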
Further Learning: https://aka.ms/MachineLearningServices
Speaker: Shivam Sharma
Speaker Bio: Shivam is an author, cloud architect, speaker, and co-founder at TechScalable. Passionate about ever-evolving technology, he works on Azure, GCP, Machine Learning, Kubernetes, and DevOps. He is also a Microsoft Certified Trainer. He architects solutions both in the cloud and on-premises using a wide array of platforms and technologies.
Social Handles
LinkedIn - https://www.linkedin.com/in/shivam-sharma-9828a536/
Twitter - https://twitter.com/ShivamSharma_TS
Facebook - https://www.facebook.com/TSshivamsharma/
Prerequisite: Knowledge of Python
Speaker
If you have questions, please contact reactor@microsoft.com